PQRST: A Requirements Taxonomy



Richard J. Botting

Computer Science Department, California State University, San Bernardino

rbotting@csusb.edu



Abstract

The author presents a philosophical framework or taxonomy of requirements. He gives examples of how he uses it in producing software. He uses the mnemonic PQRST. This stands for Purposes + Qualities + Realities + Technical + Systems. Each is a type of constraint on the software. PQRST captures a twenty-year study of methods and tools. He relates it to a 10-year sample of the literature on software development. PQRST includes the requirements engineering (RE) part of all published methods from SP to XP. It illuminates methods. PQRST is simple and needs no special tools, methods, or processes. Instead it helps REs select tools, methods and processes. It may help RE researchers avoid some confusions as well.



Keywords

Requirements modeling, analysis, domain modeling, requirements elicitation, quality requirements, conceptual and cognitive modeling, modeling and architecture, evolution.





1. Introduction

In my humble experience, one should understand problems before solving them. In this paper I will be talking about the ideas that requirements engineers (REs) must understand to ensure that their software is successful. It is not a question of just the function, the data, the coding, the objects, the user, or the tools. Good software comes from combining many viewpoints [1]. Success comes from understanding the many forces acting on the developers [2]. Here, I present a way of thinking about requirements that I have found useful in research and practice.

Here a "requirement" is primarily some property of the situation where we expect the software to work. From these deriving the properties that the software must possess is easy. When documented this becomes a specification or SRS. So my focus here is on the user's domain and its problems not the structure of the software.

Twenty-two years ago I was a member of a team charged with adopting an analysis and design method for the United Kingdom Civil Service. We collected information on a dozen methods and interviewed the people selling them. One methodologist focused on what the user wanted, another on the data in the situation, another on the quality objectives, and others on the best way to structure the code. I invented a personal mnemonic that covered all their concerns:

OPQRST (1979)

Other

Purpose

Quality

Reality

Technology

System (as is)

It was a useful, stable, and enlightening way to think about software development. I have since used it when I work on software. During my research into the technology and methodology of software development (1982 onwards) I studied thousands of relevant papers, articles, books, and tools. I have published an annotated bibliography of more than 2,800 of the most cogent items [3]. I used my taxonomy to classify the items and never found anything on requirements that did not fit PQRST. So, I dropped the O. The only other change has been to use plurals in the list. I have discovered that there is no single purpose, quality, etc. in most projects.



2. Applicability

Because it is extremely simple yet comprehensive, PQRST applies to all kinds of projects. I have used it in standalone programs and WWW systems. It is not a restrictive or normative model of how to develop software. It forms a memorable grouping of the types of things that are good to think about when developing software.

Keeping the mnemonic PQRST in mind reminds a requirements engineer to cover all the forces. It helps detect project-killing risks. One should use the mnemonic to uncover unsuspected constraints. Classifying them correctly is less important. Thus, when developing a world-wide-web application, it is more important to be aware of the need to stop users from gaining control of the server (a user anti-story) than to classify this requirement as a Purpose or a Quality.

Some simple experiments in the late 1980's convinced me that PQRST does not form the basis of an effective method or tool. Prototype tools built around it slowed me down. Instead, it operates best as a mnemonic. As a mnemonic it does not assume any particular process or life cycle. It applies to big bang projects and to Extreme Programming (XP)[4]. It helps REs gather the facts that help them choose effective tools, teams, processes, and methods for a particular project or an increment in a project.



3. PQRST 2001

PQRST is a mnemonic for five types of requirements (viewpoints, forces, attributes, factors, . . .) that combine to constrain the software.

PQRST spidergram

Purposes

Qualities

Realities

Technical matters:

Tools, Technologies, Teams,...

Systems: as is, could be, and as planned.

Each factor gives a different view of the software. The different types of requirements behave differently. They interact in complex ways. None is enough, by itself, to handle all projects. In combination, they help a requirements engineer specify the software plus a process, method, tools, etc. that make the result fit into its environment. PQRST lets a developer predict more precisely how long a small change in requirements will take to implement. Sometimes, it signals that the project should not proceed. For example, when I applied PQRST to the famous Lift case study, I discovered that 1980's software was not an appropriate technology to use.

I will describe each of the five factors in turn and then explore how they fit together. I will give examples of how I have used them, how they appear in the literature, and some first steps toward a mathematical model of requirements.





4. P is for Purposes

In the 1980's we learned to look at the verbs in a mission statement to find out what the user wanted. These days we document use cases or user stories. Each is a Purpose. They may be developed into more detailed scenarios. Each Purpose may be linked to other ones. For example, one purpose may require another to be satisfied. At various times Purposes have been called goals, functions, functional requirements, and responsibilities.

As an example, some years ago, I needed to develop an editor for my students to use in a programming class. I used this statement of Purpose: To help a new student of computer science write a short program using existing terminals. I talked to a sample of these students in class and found out what they would find easiest to do. This led to a precise idea of how the program would interact with the students. All the students learned to use it in 30 minutes. My secretary and I used it daily until we got workstations [5].

If Purposes are forgotten, the user may not have any use for the software. The term "feature", for example, shifts attention from what a specific user wants to what the business thinks it can sell. Bloatware comes from ignoring the users and their purposes [5].

Too strong a belief in a single Purpose for each product used to lead architects and methodologists astray. Functional methodologists used to quote that "form ever follows function." Christopher Strachey's brilliant article in the Scientific American in 1966 led to functional decomposition methods. But there is more to software than "function" [7]. The function alone does not determine a unique design. Sorting data is an example of a single function with many different designs that fit different situations best. We must admit that (1) any piece of software has many purposes, (2) these purposes are delegated or assigned to parts of the software, (3) only some purposes turn into problems that need solving, (4) some purposes cannot be achieved, and (5) some are too expensive to implement or run.

By developing many simple Purposes we get (1) better estimation, (2) the ability to set priorities, and (3) a way to schedule incremental deliveries. Here are two examples: The "Function" part of Quality Function Deployment (QFD)[8] uncovers and prioritizes the user's purposes. They are correlated with the software requirements. The correlation matrix lets one relate the user's priorities to the technical requirements for the software. The XP "Planning Game"[4] is a ritual to prioritize and schedule incremental changes. An onsite user writes user-stories on 3 by 5 cards. The programmer estimates how many days each will take. The user then selects cards that will take one week. The programmer implements these. At the end of the week one can count how many of the planned user stories have been completed. This gives a better way to estimate the time to implement other user stories in the next iteration.
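
The arithmetic behind the Planning Game is simple enough to sketch in code. The following Python fragment is only an illustration; the story names, the estimates, and the one-week budget of five ideal days are my own invented figures, not part of XP:

from dataclasses import dataclass

@dataclass
class Story:
    name: str
    estimate_days: float   # the programmer's estimate written on the card
    done: bool = False     # filled in at the end of the week

stories = [Story("enter grades", 2.0),      # already ordered by the user's priorities
           Story("print roster", 1.5),
           Story("email reminders", 3.0)]

budget = 5.0               # one working week, in ideal days
planned, used = [], 0.0
for story in stories:
    if used + story.estimate_days <= budget:
        planned.append(story)
        used += story.estimate_days

# ...one week later, mark which planned stories were actually finished...
velocity = sum(s.estimate_days for s in planned if s.done)
# 'velocity' (estimated days actually delivered) calibrates the next iteration's plan.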

Purposes should be user-oriented, not technical. They should refer to observable behavior of the system, not the internal working. Good design hides the technical problems from the user! There are many ways to record purposes. Simple 3 by 5 cards have proved themselves on simple projects[4]. My primary expression of a purpose would be something like this:

"To help <user >

achieve <something>

by <providing a service>."

The simple form should be expanded to a detailed story or scenario that explains how the user gets what they want. It helps to be very specific. For example, Cooper gives a name and photograph to each "persona"[6]. There are many ways of documenting scenarios including: lists, narrative text, audio-video recordings, prototypes, story boards, sequence diagrams, grammars, state diagrams, and timing diagrams. Each step in a scenario should become a new purpose. This is a modern form of functional decomposition.
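
As a sketch only (the field names and the editor example below are my own illustration, not a prescribed notation), a purpose and its scenario can be recorded as a small data structure, so that each scenario step is itself a candidate Purpose:

from dataclasses import dataclass, field

@dataclass
class Purpose:
    user: str                                       # "To help <user>"
    goal: str                                       # "achieve <something>"
    service: str                                    # "by <providing a service>"
    scenario: list = field(default_factory=list)    # each step may become a new Purpose

editor = Purpose(
    user="a new student of computer science",
    goal="write a short program",
    service="providing a simple editor on existing terminals",
    scenario=["log in", "open a named file", "type and correct lines", "save and compile"],
)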

If the software is already divided into parts (programs, processes, entities, modules, components, objects, etc.), Purposes can be assigned to any of those parts. An object's purposes are called its responsibilities. A responsibility has the form

"To help <objects>

achieve <another purpose>

by <providing a service>

and using <other objects>"

This is the heart of the Object-Oriented CRC (Class-Responsibility-Collaborators) technique [9]. These may be refined into a set of formal contracts between each object and its clients -- see Bertrand Meyer's work on design by contract.
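
As an invented illustration (the Roster class, its Student collaborator, and the contract are mine, not taken from [9] or from Meyer), a responsibility and its contract can be written straight into code:

class Student:
    def __init__(self, name):
        self.name = name

class Roster:
    """Responsibility: to help Instructor objects achieve an up-to-date class list
    by providing enroll(), and using Student collaborators."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.students = []

    def enroll(self, student):
        # A contract in Meyer's style: check the precondition, act, check the postcondition.
        assert len(self.students) < self.capacity, "precondition: a place is left"
        self.students.append(student)
        assert student in self.students, "postcondition: the student is on the roster"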

Any program, component, class, object, or function/method should have some purpose. Otherwise: "You Ain't Gonna Need it" (YAGNI)[4]. Cooper has a similar theory: programmers should be stopped from wasting time on complicated scenarios that have no value to the real users [6]. As an example of YAGNI, when I developed an editor (FRED) for my students, I deliberately did not allow it to edit large files. It just said it could not help. I wrote a backup program (ETHEL) for large files. But nobody asked for it. I included a feature that no user needed. FRED would congratulate the user when they started it for the 100th time. It seemed like a good idea at the time. My users thought the program was being sarcastic. I removed it. The students were happier afterwards.

A theoretical model of a single Purpose is the set of all systems that achieve that purpose. A successful system would then fit in the intersection of all the separate purposes. A more general model is that the Purposes form some kind of Boolean algebra or lattice. This would accommodate fuzzy sets and softgoals[11].
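
A tentative way to write this down (the notation is mine): let S* be the space of all possible systems and let each Purpose Pi be the subset of S* whose members achieve it. Then

a system s is acceptable only if s is in P1 ∩ P2 ∩ . . . ∩ Pn,

and the fuzzy-set or lattice versions replace the intersection by a fuzzy intersection or a meet.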

Apparently some purposes are negative. They are met by making some actions impossible: "To help the FBI achieve security by not providing access to unauthorized users," "To help Transylvanian Railways achieve safety by not permitting two trains to collide at the crossing," or "To make sure that the word processor never freezes up leaving the user's work unsaved." These user anti-stories or anti-scenarios [34] are best treated as Qualities.



5. Q is for Qualities

Tom Gilb spent much of his career promoting a quality-driven design method called "Design by Objectives." The Analysis of Algorithms promulgated by Donald Knuth studies one Quality: execution time. There is much research on another Quality: reliability. In mass-marketed software, time-to-market is critical[10]. Clearly "one size does not fit all." Each product has its own set of qualities. They can appear as adverbs in informal specifications, descriptions of purposes, and in mission statements. Some people embed Qualities in use case descriptions. These "ilities" suggest how the software should perform -- quickly, reliably, easily, securely. Some call them "non-functional requirements," softgoals [11], "attributes" (Tom Gilb), and "nonbehavioral requirements" [12].

It helps to convert the "ilities" into things that can be measured. Run-time qualities need to be analyzed until they can be measured objectively during acceptance testing. Total Quality Management (TQM) has become an important slogan in the battle for market share. The "Quality" part of QFD uses a matrix to uncover and prioritize the user's quality needs and correlate them with metrics and testable properties of the product [8].

It has proved difficult to develop one scientifically valid measurement of software quality. Cost, mean-time-to-failure, time to produce, readability, time-to-market, maintainability, re-usability, etc. are important qualities that the producers worry about. They affect the user indirectly. Anomalous negative purposes (security, safety, reliability, etc.) are best handled as qualities as well. All qualities have to be analyzed, traded off, predicted, measured, tested, and monitored.

There is a serious risk of using simplistic "Bogus" metrics to manage the software process[13]. Measurement theory leads one to look for many different measurements that help answer questions about the software and achieve project goals [14]. The critical success factors differ from project to project and lead to different plans, staffing, etc.

However, all critical qualities in a project need some form of quality control: A mass-marketed program may rely on user feedback and daily builds to achieve good time-to-market[10]. But a bespoke "fly-by-wire" project will need to have all documentation and code inspected during development. Predicting important qualities must be part of a Software Quality Assurance (SQA) program. Qualities define what "better" and "best" means [15].

An optimized system does one thing well but does it by sacrificing other qualities. Mathematically, there can normally be only one quality that is optimized at a time - the rest are commonly forced to their worst "good enough" values [16]. So, it helps to know, for each project, and for each part of the project, the quality that has to be the best possible vs. those that just have to be "good enough." Sometimes competing qualities have to be "traded off" so that all are "good enough" but none are optimal. This happens when optimizing any one quality forces other qualities out of their "good enough" range. In practice, sometimes, two qualities can both be improved by a single technique. For example, code reuse enabled Matra Cap to improve both time-to-release and fault rate [17].
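
One tentative way to state this trade-off (my formulation, not taken from [16]): pick the single quality q1 to be optimized and demote the others to constraints,

maximize q1(s) over candidate systems s,
subject to qi(s) >= ti ("good enough") for every other quality qi.

Trying to optimize a second quality as well will, in general, push q1 off its optimum; and if no candidate satisfies all the constraints, the qualities cannot all be "good enough" at once and something must be traded away.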

As with other branches of engineering, the qualities found in a piece of software depend on the technology, tools, and techniques used to design and construct it. So the techniques and technology should be chosen to give desirable Qualities. When engineers let the technical considerations drive requirements, the user suffers [6, 8, 26].

A first, incomplete theory of qualities imagines a set of measurements. Each may be on an ordinal, interval, or ratio scale. Each quality is then a map from the set of possible systems and purposes to these measurements (q : S × P -> Q). However, the Technical factors (below) also have Qualities.



6. R is for Reality

Adjectives and nouns in a mission statement refer to the reality with which the software is concerned. Use cases and user stories (Purposes) often refer to things in the "real world" or at least in the user's head. Clients have terms that need defining. Situations have facts to discover. Requirements engineers need to explore conjectures and theories about the client's world. These facts, definitions, designations, conjectures, questions, answers, assumptions, reasons, and consequences aid in the discovery and validation of solutions. It can pay to record them. Some use formal logic to do this. These R-factors determine the concepts modeled in the software architecture. "Domain Analysis" studies the "Reality" across a set of similar projects.

In one way the word Reality is misleading. It suggests that there is something "out there" that is being modeled. It is important to be aware that the Reality is constructed by the people in the system[1]. In other words my term Realities includes much that is imagined. One needs to understand people's imagination as well as their actions and needs. The R view of the problem is the abstract metaphor underlying a useful GUI [18]. A metaphor is sometimes part of XP[4]. Mays argues for more support for the conceptual modeling that a developer is forced to do with graphics [19].

Data should encode the state of some reality. EBNF, Data Dictionaries, Jackson Structure Diagrams, and relational schema define data that records the state of the environment. So, these assert views of reality. Entities, relationships, and attributes (ERA) give a static picture of Reality [20]. An Object-Oriented class model often combines a static view with a model of the behavior of objects inside the software. Database systems and Object-Oriented technology make it possible to implement software with an architecture that mirrors reality.

A software engineer must allow for all the sequences of events that a piece of software may encounter. The set of possible patterns of events forms the dynamic picture of Reality. We can classify sequences of events further so that significant ones can be recognized and dealt with. Dynamics is expressed as entity life histories[23], Statecharts[34], state transition diagrams (STDs), Petri Nets, regular expressions [20], etc. Any of these are a basis for Dynamic Analysis and Design [21]. Jackson's earlier work[22] and SSADM [23] showed that many "update functions" in traditional systems can be distributed and encapsulated in the patterns of behavior of simple "Realistic" entities.
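
A tiny illustration (the entity and its event codes are invented, not taken from the cited methods): an entity life history written as a regular expression over events can be checked mechanically, for example with Python's re module:

import re

# The life of one library copy, coded as single-letter events:
#   a = acquire, b = borrow, r = return, d = discard
life_history = re.compile(r"a(br)*d")

print(bool(life_history.fullmatch("abrbrd")))   # True: acquire, two loan cycles, discard
print(bool(life_history.fullmatch("abd")))      # False: discarded while still out on loan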

Some object-oriented pundits promote the modeling of the behavior of real-world objects. Behavioral hierarchies are more complex than the static and dynamic models described above. Users find this view to be counter-intuitive. As an example, an Object-Oriented light bulb would screw itself into a socket and turn itself on. Developing an inheritance hierarchy is also difficult. Classes, inheritance, encapsulation, and polymorphism are important Technical tools. They may be less effective at describing Reality. Many methodologists prefer to assign responsibilities (Ps) to objects in a static model of reality (R) [24].

The R factors constrain the possible systems to those that implement a particular view of reality. Thus, they are of the same type as the Purposes: R defines a subset of the possible systems. They would form elements in the same algebra as the P factors.



7. S is for Systems

Programmers prefer to start from scratch, but the current System (data+hardware+software+people) is an important source of information. S stands for what we are trying to fix. Henry Petroski has documented case after case where the design of an everyday object (bridges, cutlery, sticky tape, etc.) evolved by the removal of perceived faults in current solutions [25]. Everyday software has evolved in the same way.

Collins and Bicknell [26] document many expensive computerization failures where software was produced without first simplifying and re-engineering the system in which it is placed. Too much reliance on the existing systems, however, leads to "paralysis by analysis." The system is not the reality! S is not R: A bank deposit form is part of a system (S). It stands for the movement of real money (R). A button on a screen (S) is a way to trigger some useful process (P); it is not a use case.

Maintenance makes up a large chunk of software work. Computer scientists ignored it until "software re-engineering" became a fashionable topic to study in the 1990's. Clearly re-engineering must start with a thorough assessment of the current system. Re-engineering aims to replace old code by something that is easier to maintain. Maintenance starts from the current situation (S) and aims to make the minimum change to fix some symptom or add a new purpose/quality/reality. Nowadays this may be expanded to recoding, refactoring, or redesigning parts of the system [27].

The System may contain legacy code. Components, tools, and ideas in it can be reused. Legacy code can be wrapped in modern Technology rather than being redeveloped. The code is often the only detailed record of what the software does and how the data is organized. However, it does not explain why it is the way that it is. Reverse engineering attempts to reconstruct this missing information. Techniques invented for analyzing manual systems (DFDs, normalization, etc.) apply to code. There has been talk of "round-trip" engineering where tools can automatically map specifications into code and reverse engineer code to specifications.

Software reuse depends on what exists. Reuse has been a hot research topic. Wise engineers have always made use of current resources to reduce the cost of a change. Meanwhile people report figures that indicate that reused parts cost two-thirds less and that 60% to 80% of code can be reused [28]. Searching for reusable pieces of legacy code is expensive even with special tools. Reusable parts may need to be specially designed: information hiding and separation of concerns (e.g., physical vs. logical, logic vs. database) can create parts that are worth reusing [17]. So, matching existing parts to new needs depends on documenting the Ps, Qs, and Rs of the parts.

Business engineering starts with the existing workflow (S). Similarly process improvement (the re-engineering of software engineering) must be based on a repeatable and measured current process. The flow of data and work in a system delimits parts that need to be (a) replaced, (b) protected, or (c) re-used. Successfully integrating software into an existing system requires that the work flow be understood.

Studying the current system/situation needs care and intelligence. One can look at: documentation, manuals, comments in code, plans, blueprints, analysis and design products, memos, 3 by 5 cards, specifications, designs, . . . these may all be reused, rejected, and/or learnt from.

The people in the system need to be involved. Users, clients, and domain experts are important resources that a software engineer needs to tap via interviews, brainstorming sessions, onsite users, etc. [29]. On the other hand, the requirements engineer needs to make sure that the users do not produce too many requests. Some form of change control process, incremental delivery, or XP Planning Game[4] is needed.





8. T is for Tools, Techniques, Technology, and Teams

Computer people tend to overestimate the value of technology. So, much of the published work on software development is about T factors. These include available resources. They define what the software development team can do and how easily they can do it. All engineering projects depend on the tools and materials available. In software, materials and tools are different: people, languages, libraries, platforms, methods, diagrams, patterns, workplaces, standards, rules, processes, . . . The designer needs to know which are mandated, encouraged, permitted, forbidden, and/or affordable. They determine the set of possible solutions and achievable qualities. So the T-factors must be studied, analyzed, and documented.

Selecting an architecture and architectural style constrains the possible designs for a piece of software, and so architecture is a T. Similarly, if a project uses a particular pattern language then it constrains the possible designs. Patterns are themselves Ts. Teamwork and process are important T factors. Teamwork is probably the most critical factor in determining success [30].

A theoretical model of the T factors would treat them as transitions between possible systems. These maps are not functions. The changes are nondeterministic. Some can also fail, so they would be modeled as one-to-many partial mappings on the space of possible systems. Associated with each T is a set of qualities.
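
In the same tentative notation as before (mine, not a standard one): a technique T is a relation on the space S* of possible systems,

T ⊆ S* × S*, with T(s) = the set of systems that applying T to s may produce,

where T(s) may be empty (the technique fails on s) or may contain many members (the outcome is nondeterministic), and each T brings with it its own set of qualities: the cost, speed, and error-proneness of using it.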



9. How PQRST factors interact

PQRST separates the concerns between what is needed (PQR), what is (S), and what can be done (T). S factors are important because they lead to PQR factors. Purposes are discovered in an existing System in many ways - looking at documentation, structured interviewing, observing, guesswork, systems analysis, workflow and activity analysis, QFD, interacting with an on-site user representative, etc. Indeed some purposes are not discovered until a product is actually in use.

The R model is a theory and must be tested versus observations in S. The Reality model needs "reality therapy"[31].

Safety, security, and reliability concerns (Qs) are hidden in the current system (S). It may be obsolete but its good qualities must be preserved. It is rare that only one part of a system provides a Q. For example: all parts must be secure to give a secure whole. All parts must be safe for the whole to be safe. Sometimes, a quality emerges from the interaction of the parts rather than being found inside them. So, qualities have to be recovered from an existing system with care.

Associated with each T is another set of Qs -- for example, programming alone tends to be error prone but fast, whereas pair programming[4] tends to produce better code but takes more people. The Qs represent the costs and benefits of using any particular T factor. For example, using Beta testing (a T) may lead to a low reliability system (a Q) delivered quickly (a Q), whereas SQA (a T) can delay the time to market (Q) but improve the reliability (Q) of the result [10]. Choosing T's is a question of tradeoffs. Qs are functions attached to Ps. Each maps R × T to a measurable value.

Performance engineering, the analysis of algorithms, and data base design show that the physical structure of software and data (Ts) should be chosen to give desirable qualities (Qs). For example, much of the theory of Object-Oriented programming is about Techniques for improving the Qualities of reusability and maintainability. Languages and environments are T-factors that deliver different Qs: Visual Basic aids rapid development but C++ aids rapid execution. Perl is portable, but the Bourne shell (in my experience) is more secure.

Optimal designs are harder to understand, maintain, and produce than software that results from optimizing a working prototype or proved design. It is not always possible to optimize all qualities with one design [16]. There is no algorithm to calculate Ts from PQR because design involves (1) tradeoffs, compromise, and creativity [25,33] and (2) intractable or incomputable problems.

Data in a system (S) are easier for clients and users to understand than algorithms or objects. Samples of current data (e.g., forms, screens, data structures, ...) show what data is important. However, the abstract concept (R) has to be abstracted from the current implementation. Formalization and normalizing do this. It is usually better, however, to make a simple manual prototype and test it before normalizing. Data dictionaries and EBNF define the form of data in the current (S) or proposed (S') systems.

Statistics about the current system (S) provide information needed for predicting performance [23], reliability, and other Qualities of the new system (S'). Statistical data about experimental systems can make released systems more usable. Statistics about existing systems (S) can validate multiplicities in static R models via the Pigeon Hole Principle[23]: for example, if the current files hold many more orders than customers, then at least one customer must have several orders, so a one-to-one multiplicity between customer and order cannot be right.

A library of modules cannot be reused effectively unless it is possible to find modules that fit the problem. Current source code libraries and repositories are strong on the P and R factors but often ignore the Q factors. The T factors are often clearly documented but - by Parnas - should be hidden. For example, the standard UNIX sort function is documented as an implementation of the QuickSort algorithm (a T). The C++ standard does an unusually good job of specifying the PQRs of functions in the standard library (STL) and leaving the Ts to others.



10. Synopses

The engineer develops requirements (PQRT) from what exists (S) and searches the techniques (in T) to develop a system (S') to satisfy the PQR. The research literature has always stressed decoupling the PQR components from the ST factors. One still needs to trace the PQR through the Ts to parts of S'.

PQRST reminds an engineer to look at the current software (ST) to see what can be reused (S -> S'), re-engineered (changing the T in ST (ST -> ST')), or reverse engineered (ST -> S -> PQR).

When T-factors are not traced back to Q-factors, we risk producing infeasible designs. The current system (S) implements certain Purposes (P) that refer to certain Realities (R) under Qualities (Q) by using Technology (T). When T-factors are not analyzed out of S, they lead to inadequate solutions [32].

I could summarize the results of my requirements engineering research as a simple equation:

S = T(P, Q, R).





11. Dynamics of PQRST

A change in the Ps that leaves the QRST factors unchanged is nearly always easy to implement as a new piece of code. Sometimes the code needs refactoring because of duplication [4].

A change in the Qs with the rest of the factors unchanged tends to be much more expensive. For example, trying to add security, reliability, or safety to a system that does not have them often means that every part of the software has to be reworked.

The size of the changes in Reality directly maps to the size of the change in the architecture of the software[22]. Small changes in R are easily met. Large ones take time and care, and may lead to large-scale changes.

Changing T factors while leaving the PQRs the same has led to expensive disasters. This should be avoided whenever possible. Start by changing PQR and simplifying S, then think of changing the Ts[26].

Finally there is the question of changing S factors. The very act of writing a new piece of software perturbs the System in surprising ways. I believe that PQRST may reduce the surprises. Some will remain. These unexpected effects of planned changes can trigger new Purposes, Qualities, or Realities to emerge. Software work is therefore self-perpetuating.



12. Acknowledgment

The PQRST mnemonic is based on and summarizes the work of many experts: more than 2,800 publications [3]. 2% of these are about Purposes, 6% on Qualities, 4% on Realities, 12% on Technicalities, and 3% on Systemic factors. PQRST is a high-level overview of the software development literature. If one stands on the shoulders of giants, one tends to get a high-level view.



Bibliography

[1] C. Floyd, H. Zullighoven, R. Budde and R. Kiel-Sawic (Eds), Software Development and Reality Construction, Springer Verlag, New York NY, 1992

[2] G. Booch, The Illusion of Simplicity, Software Development Magazine, CMP.com (Jan 2001)pp57-59

[3] R. J. Botting, Bibliography of Software Development, http://www.csci.csusb.edu/cgi-bin/dick/lookup

[4] K. Beck, Embracing Change with Extreme Programming, IEEE Computer Magazine (Oct 1999)pp70-77

[5] R. J. Botting, Fred - a Friendly Editor, Proc. Western Educational Computing Conference (Nov. 1984) Western Periodicals Co., N Hollywood, CA.

[6] A. Cooper, The Inmates are Running the Asylum, Sams, New York NY 1999

[7] R. J. Botting, A Critique of Pure Functionalism, pp148-153 Proc. Fourth Int'l Workshop on Software Specification & Design(IWSSD4) April 1987 IEEE, LA CA

[8] S. Haag, M. K. Raja, & L. L. Schkade, Quality Function Deployment Usage in Software Development, Comm ACM (Jan 1996)pp41-49

[9] N. Wilkinson, Using CRC Cards: an Informal approach to Object-Oriented development, SIGS Publications Inc. New York NY 1995

[10] R. J. Botting, On the Economics of Mass-Marketed Software, Proceedings of the 1997 International Conference on Software Engineering 1997, ACM Order #592970 IEEE CS Order #0270-5257, pages 465-470

[11] J. Mylopoulos, L. Chung, & B. Nixon, Representing and Using Nonfunctional Requirements: A Process-Oriented Approach, IEEE Trans. SE (Jun. 1992)pp483-497

[12] A. M. Davis, Software Requirements: Analysis and Specification, Prentice Hall, Englewood Cliffs NJ, 1990

[13] T. Bolinger, What can Happen when Metrics Make a Call, IEEE Software Magazine (Jan 1995)pp15

[14] N. Fenton, S. L. Pfleeger, & R. L. Glass, Science and Substance: A Challenge to Software Engineers, IEEE Software magazine (Jul. 1994)pp86-95

[15] N. Fenton, Software Measurement: A Necessary Scientific Basis, IEEE Trans. SE (Mar 1994)pp199-206

[16] H. Simon, The Sciences of the Artificial, MIT Press, Cambridge MA 1969 + 3rd edition 1996

[17] E. Henry & B. Faller, Large-scale Industrial Reuse to Reduce Cost and Cycle Time, IEEE Software Magazine (Sep. 1995)pp47-53

[18] J. Lovgren, How to choose good metaphors, IEEE Software magazine (Jan 1994)pp86-88

[19] R. G. Mays, Forging a silver bullet from the essence of software, IBM Systems Jnl (1994)pp20-45

[20] M. Snoeck & G. Dedene, Existence Dependency: The Key to Semantic Integrity Between Structural and Behavioral Aspects of Object Types, IEEE Trans. SE (Apr. 1998)pp233-351

[21] R. J. Botting, Into the Fourth Dimension: An Introduction to Dynamic Analysis & Design, ACM SIGSOFT Software Engineering Notes ( Apr. 1986 ) pp36-48

[22] M. A. Jackson, Principles of Program Design, Academic Press, New York NY, 1975

[23] K. Robinson & G. Berrisford, Object-Oriented SSADM, Prentice Hall Inc., Upper Saddle River NJ, 1994

[24] C. Larman, Applying UML and Patterns: An Introduction to Object-Oriented Analysis and Design, Prentice Hall PTR, Upper Saddle River NJ, 1998

[25] H. Petroski, To Engineer is Human: The Role of Failure in Successful Design, St. Martin's Press, New York NY, 1985

[26] T. Collins & D. Bicknell, Crash: learning from the world's worst computer disasters, Simon and Schuster, London UK, 1998

[27] R. S. Arnold, Software Re-engineering: A Quick History, Comm. ACM (May 1994)pp13-14

[28] D. Gotterbarn, SEL 'Experience Factory' explored at software engineering workshop (Goddard Space Flight December 2nd-3rd 1992), IEEE Computer Magazine (Feb. 1993)pp117-118

[29] M. J. Muller & S. Kuhn (Guest Editors), Special Issue on Participatory Design, Comm. ACM (Jun 1993)pp24-103

[30] A. Cockburn, Characterizing People as Non-Linear, First-Order Components in Software Development, Proc. SCI/ISAS Florida, 2000 pp728-736

[31] H. H. Bauer, Scientific Literacy and the Myth of the Scientific Method, U of Illinois Press Urbana Ill, 1992

[32] L. Constantine, Essentially Speaking, Software Development magazine CMP.com (Nov. 1994)pp96+95

[33] S. Dasgupta, Design Theory and Computer Science: Processes and Methodology of Computer Systems Design, Cambridge U Press, New York NY, 1991

[34] D. Harel, From Play-In Scenarios to Code: An Achievable Dream, IEEE Computer Magazine, Jan 2001, pp53-60