Mon Apr 18 15:06:33 PDT 2005

Contents


    Pfleeger05

    1. Shari Lawrence Pfleeger
    2. Soup or Art? The Role of Evidential Force in Empirical Software Engineering
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp
    4. =IDEA EVIDENCE ARGUMENT TECHNOLOGY META-ANALYSIS
    5. Evidence that a technology works: what does "works" mean? What kind of evidence? Who provides it? Who reviewed it? What domain? Does it carry over to other domains? Social, economic, and political factors -- risk.
    6. Types of evidence: tangible, testimonial, equivocal testimonial, missing, accepted facts.
    7. Multi-legged arguments are better: several diverse pieces of evidence supporting the same conclusion.

    DybaKitchenhamJorgensen05

    1. Tore Dyba & Barbara A Kitchenham & Magne Jorgensen
    2. Evidence-Based Software Engineering for Practitioners
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp58-65
    4. =IDEA PRACTICES LITERATURE SURVEY EBSE
    5. How to select technologies, methods, processes to put into a project?
    6. EBSE::acronym="Evidence-Based Software Engineering", defined by the following steps,
      1. Convert problem/information into answerable question.
      2. Search the literature for best evidence.
      3. Critically appraise the evidence: is it valid? What is its impact? Is it applicable?
      4. Apply the evidence that fits the current project: its experience, values, and circumstances.
      5. Evaluate performance and seek to improve.

    7. Resources page 61
    8. Checklist page 62.
    9. Example page 64: the Chaos report does not fit with other surveys and does not include the data needed to evaluate the accuracy of its evidence.

    BodoffHungBen-Menachem05

    1. David Bodoff & Patrick C K Hung & Mordechai Ben-Menachem
    2. Web Metadata Standards: Observations and Prescriptions
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp78-85
    4. =ESSAY WEB STANDARDS METADATA ONTOLOGIES AI
    5. Table 1 page 80: List of standards: ebXML, CPP, WSDL, UDDI, SOAP, WS-Security, P3P, DC, CIMI, OWL. Suggests
      1. Don't ignore testing, SQA, and other long-standing problems.
      2. Don't target a standard at too many purposes/uses.
      3. Finding things may require meta-meta-data.
      4. Need tools to support search and navigation through classifications and thesaurus hierarchies.
      5. Don't add useless indexing.
      6. Don't work at too high a level and allow too much freedom at more concrete levels.
      7. Conventional ontologies should be limited to narrow domains until more reliable methods for developing them exist.

    HalpinEtal

    1. Terry Halpin and others
    2. Object-Role Modeling [ http://www.orm.net/ ]
    3. =SITE DATA CONCEPTUAL MODEL METHOD FACTS OBJECTS CONSTRAINTS Visio .NET ORM ERD RELATIONAL SCHEMA SQL

    Romanchik05

    1. Dan Romanchik
    2. Is the Rational Unified Process (RUP) Right for Small Teams?
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp-
    4. =INTERVIEW PROCESS PEOPLE TEAMS RUP
    5. Common mistakes for small teams: putting process above people and preparing artifacts that aren't used in future stages of the project.
    6. Ref "Software Development for small Teams: a RUP-centric Approach by Gary Police & Liz Augustine & Chris Lowe & Jas Madhur, Addison Wesley

    VernerEvanco05

    1. June M Verner & William M Evanco
    2. In-House Software Development: What Project Management Practices Lead to Success?
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp86-92
    4. =POLL PROJECT SUCCESS FINANCE COMPARISON
    5. 62% regarded as successful.
    6. Changing the project manager is correlated with failure.
    7. Manager's people skills and support correlated with success.
    8. Most (54%) projects didn't have significant interaction with analysts!
    9. Nearly half started with incomplete requirements. But completing them during the project was associated with success.
    10. Well defined requirements and scope is correlated with success, as is user involvement in setting requirements.
    11. Actively managing requirements change was correlated with success.
    12. Bigger projects tended to fail more often.
    13. Good estimation linked to success. Optimistic estimation with failure.
    14. Not having a schedule was associated with neither success nor failure. Large projects tended to have a schedule.
    15. Estimates often come top-down to the project manager before requirements work has started.
    16. Better estimates tended to have project manager input.
    17. 8 used UML for requirements and 3 were successful.
    18. Not much risk management and few post-mortems, though both were associated with success.

    BerryKamsties05

    1. Daniel M Berry & Erik Kamsties
    2. The Syntactically Dangerous ALL and Plural in Specifications
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp55-57
    4. =IDEA AMBIGUITY LOGIC LANGUAGE SPECIFICATION CS565
    5. Words to suspect: "only", "all", "also", "each".
    6. Use "each" when describing a property of the individual members of a set.
    7. Use "all" for shared properties across a set.
    8. Can use simple logic to clarify an ambiguity (see the predicate-logic rendering after this list).
    9. All the lights in the room have a single on-off switch.
      Net
      1. Each light has its own switch.
      2. For all y:lights_in_room, one x: switch (x is on_off_switch_for y).
      3. All the lights share a common switch.
      4. For one x: switch, all y:lights_in_room (x is on_off_switch_for y).

      (End of Net)

    10. Similarly for plurals: "Students enroll in six courses" vs "Students enroll in hundreds of courses".
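
    In standard predicate-logic notation (a rendering of the Net above, not notation from the paper), the two readings differ only in the order of the quantifiers:

        \forall y \in Lights.\ \exists x \in Switches.\ onOffSwitchFor(x, y)   % each light has its own switch
        \exists x \in Switches.\ \forall y \in Lights.\ onOffSwitchFor(x, y)   % one switch shared by all lights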

    PaechDorrKoehler05

    1. Barbara Paech & Joerg Doerr & Mathias Koehler
    2. Improving Requirements Engineering Communication in Multiproject Environments
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp40-47
    4. =EXPERIENCE PEOPLE COMMUNICATIONS DOCUMENTATION DATA FLOW Nokia
    5. Use a workshop to understand problems in a software process and to design solutions.

    DagRegnellGervasiBrinkkemper05

    1. Johan Natt och Dag & Bjorn Regnell & Vincenzo Gervasi & Sjaak Brinkkemper
    2. A Linguistic-Engineering Approach to Large-Scale Requirements Management
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp32-39
    4. =IDEA LANGUAGE SIMILARITY DOCUMENTATION RETRIEVAL REQUIREMENTS
    5. Map documents into a vector space and use cos(angle) to measure matches when retrieving (see the sketch after this entry).
    6. Use log functions to scale the frequency of occurrence of words (formula is suspect).
    7. Applied to matching market requirements to business requirements.
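
    A minimal sketch of the retrieval idea in Python, assuming a simple 1+log term-frequency weighting (the paper's exact formula, called suspect above, is not reproduced):

        import math
        from collections import Counter

        def log_tf_vector(text):
            # Log-scaled term frequencies: weight(w) = 1 + log(count(w)).
            counts = Counter(text.lower().split())
            return {w: 1.0 + math.log(c) for w, c in counts.items()}

        def cosine(a, b):
            # cos(angle) between two sparse vectors: dot product over norms.
            dot = sum(wa * b[w] for w, wa in a.items() if w in b)
            na = math.sqrt(sum(x * x for x in a.values()))
            nb = math.sqrt(sum(x * x for x in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        # Hypothetical requirement texts, for illustration only:
        market = log_tf_vector("the user shall export each report as pdf")
        business = log_tf_vector("the system shall export reports in pdf format")
        print(cosine(market, business))  # higher score = likelier match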

    HaggeLappe05

    1. Lars Hagge & Kathrin Lappe
    2. Sharing Requirements engineering experience using patterns
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp24-31
    4. =Advert Requirements engineering people patterns
    5. Organize the specification to parallel the structure of the project. Each team appoints one member as an author; the authors are coordinated by the requirements engineer.
    6. Help stakeholders write specifications by giving them guidelines based on the analysts' preferred process.
    7. Record requirements on index cards and classify them.
    8. Link components to requirements by checklists.
    9. Fig 6: UML model for pattern mining/database.

    Sommerville05

    1. Ian Sommerville
    2. Integrated Requirements engineering: A Tutorial
    3. IEEE Software Magazine V22n1(Jan/Feb 2005)pp16-23
    4. =Tutorial HISTORY Requirements engineering nonsequential agile Components COTS open-target

    RubiraEtal05

    1. C M F Rubira & R de Lemos & G R M Ferreira & F Castor Filho
    2. Exception handling in the development of dependable component-based systems
    3. Software - Practice & Experience V35n3(Mar 2005)pp195-236
    4. =CASESTUDY RELIABILITY COMPONENTS ARCHITECTURE USE CASES EXCEPTIONS CLASSES Catalysis
    5. Not all exceptional cases can be handled by a single component; for some, one must map out the collaborations between several components.
    6. Showed exceptional events as actors in Use Cases: <<actor>> WaterLow, <<actor>> Alarm, ... !

    KrollKruchten05

    1. Per Kroll & Philippe Kruchten
    2. The Rational Unified Process Made Easy: A Practitioner's Guide to the RUP
    3. Addison-Wesley 2003 ISBN 0321166094 QA76.76 D47K75
    4. =HOWTO =ADVERT RUP PROCESS TOOLS TEMPLATES METHODS PEOPLE ONE-SIZE
    5. The Spirit of RUP::=following,
      • Attack major risks early and continuously or they will attack you.
      • Ensure you deliver value to your customer.
      • Stay focused on executable software.
      • Accommodate change early in the project.
      • Baseline an executable architecture early on.
      • Build your system with components.
      • Work together as one team (not analysts versus developers versus testers)(make architecture central).
      • Make quality a way of life, not an afterthought.
    6. Gives examples ranging from small to large projects.

    DucasseLanza05

    1. Stephane Ducasse & Michele Lanza
    2. The Class Blueprint: visually supporting the understanding of classes
    3. IEEE Trans Software Engineering V31n1(Jan 2005)pp79-90
    4. =EXPERIMENTAL GRAPHIC Object-Oriented code metrics structure smalltalk
    5. Classifies attributes and methods into layers: initialization, interface, internal implementation, accessors, attributes.
    6. Attributes and methods are shown as boxes; metrics map to box height and width, and type to color (see the sketch after this list).
    7. A caller-callee link runs from the bottom of the caller's box to the top of the callee's.
    8. Tested on real code and by a dozen students (all found it helpful).
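
    A toy Python rendering of the mapping described above; the layer list and metric choices follow this summary, while the names and output format are illustrative:

        LAYERS = ("initialization", "interface", "internal implementation",
                  "accessors", "attributes")

        def blueprint_box(name, layer, n_invocations, n_lines):
            # One box per attribute/method: metrics drive width and height,
            # and the layer determines the column the box is drawn in.
            assert layer in LAYERS
            return {"name": name,
                    "column": LAYERS.index(layer),
                    "width": n_invocations,  # e.g. number of invocations
                    "height": n_lines}       # e.g. lines of code

        print(blueprint_box("initialize", "initialization", 3, 12))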

    CostagliolaEtal05

    1. Gennaro Costagliola & Filomena Ferrucci & Genoveffa Tortora & Giuliana Vitiello
    2. Class Point: an approach for the size estimation of Object-Oriented systems
    3. IEEE Trans Software Engineering V31n1(Jan 2005)pp53-74
    4. =EXPERIMENT ESTIMATION CODE SIZE LoC EFFORT FP IFPUG TUCP METRICS NEM NSR
    5. CP1::level= "Class Point 1", based on NEM (number of external methods) and NSR (number of services requested).
    6. level::= (low, average, high).
    7. CP2::level= "Class Point 2", based on NEM, NSR, and NOA (number of attributes).
    8. TUCP::= "Total Unadjusted Class Point", based on CP1 and CP2.
    9. |-TUCP = Sum[class_type c, level l](weight[c, l] * number_of_classes[c, l]) (see the sketch after this list).
    10. class_type::= problem_domain | human_interaction | data_management | task_management.
    11. CP::= "Class Point", based on TUCP and 18 technical factors.
    12. p71. fig 6. Form
    13. Significant correlation between CP1 and CP2
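
    A minimal sketch in Python of the TUCP sum above; the weight and count values are placeholders, not the paper's calibrated weights:

        CLASS_TYPES = ("problem_domain", "human_interaction",
                       "data_management", "task_management")
        LEVELS = ("low", "average", "high")

        def tucp(weight, count):
            # TUCP = Sum over class_type c and level l of
            #        weight[c, l] * number_of_classes[c, l].
            return sum(weight[c, l] * count.get((c, l), 0)
                       for c in CLASS_TYPES for l in LEVELS)

        # Placeholder weights and class counts, for illustration only:
        weight = {(c, l): w for c in CLASS_TYPES
                  for l, w in zip(LEVELS, (3, 6, 10))}
        count = {("problem_domain", "average"): 4,
                 ("human_interaction", "low"): 2}
        print(tucp(weight, count))  # 4*6 + 2*3 = 30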

    CortellessaEtal05

    1. Vittorio Cortellessa & Katerina Goseva-Popstojanova & Kalaivani Appukkutty & Ajith R Guedin & Ahmed Hassan & Rania Alnaggar & Walid Abdelmoez & Hany H Ammar
    2. Model-Based Performance Risk Analysis
    3. IEEE Trans Software Engineering V31n1(Jan 2005)pp3-20
    4. =DEMO REQUIREMENTS DESIGN QUALITY PERFORMANCE TIMING RISK PROBABILITY LOAD UML1.5 MODEL
    5. How to estimate the probability that a performance requirement will fail.

      For each scenario:

      1. Assign demand vectors (CPU, disk, network, ...) to steps in a sequence diagram; map to a software execution model (=~= activity diagram).
      2. Add hardware information to the deployment diagram.
      3. Devise workload parameters, map them to the execution model, run contention analysis, and estimate the probability of violating the timing objective.
      4. Severity analysis.
      5. Risk = severity * probability; identify high-risk components.

    6. Estimate the probability of failure at workload w, P(w), by linear interpolation from upper and lower bounds on throughput (l(w) .. u(w)) and the objective t. Assume l and u are monotonically increasing (see the sketch after this list).
    7. If u(w0) = t then P(w) = 0 for all w <= w0, and if l(w1) = t then P(w) = 1 for all w >= w1.
    8. For w in [w0..w1], P(w) = (u(w) - t)/(u(w) - l(w)).
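
    A minimal sketch in Python of the interpolation rule above; the linear bound functions are hypothetical stand-ins for the bounds the contention analysis would produce:

        def failure_probability(w, l, u, t):
            # P(w) per the rule above: 0 while even the upper bound meets
            # the objective, 1 once even the lower bound violates it, and
            # linear interpolation between the bounds in between.
            lo, up = l(w), u(w)
            if up <= t:
                return 0.0
            if lo >= t:
                return 1.0
            return (up - t) / (up - lo)

        # Hypothetical monotonically increasing bounds, objective t = 8.0:
        lower = lambda w: 0.1 * w
        upper = lambda w: 0.2 * w
        print(failure_probability(50, lower, upper, 8.0))  # (10-8)/(10-5) = 0.4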

    EvermannWand05

    1. Joerg Evermann & Yair Wand
    2. Toward formalizing domain modeling semantics in language syntax
    3. IEEE Trans Software Engineering V31n1(Jan 2005)pp21-37
    4. =IDEA REALITY DOMAIN Bunge ONTOLOGY LANGUAGES MODEL UML STATE CHART METAMODEL
    5. Map ontology of the real domain into constraints on modeling languages to minimize "impedance mismatch" between analysis and design.

End