

    Why We Need To Analyze Software Development

      1. Introduction

      Software development is like an arrow. First, it takes skill to hit the target. Second, skill is no help unless the object used as an arrow is designed for the job. A good arrow has a sharp end (the implementation) and a soft end (user needs), plus a strong connection between them.

      According to some, "The software development process is broken" [Keyes92]. Others disagree (see below). In this chapter I explain why we need to study current ways of producing software before trying to fix them.

      I show that what is being done and what is taught is not engineering: Engineering has a wider scope and uses formal and graphic methods.

      The main argument in this chapter is as follows:

      1. Current software is not always good enough.
      2. New technology won't improve it much.
      3. Therefore the way we develop software must be improved.
      4. Improvements should always be based on what is current.
      5. Improving software development should be based on studying it "As-Is".

      I also show that it will benefit all computer professionals if we separate the engineering of software from the science of computing. We can then start to reconstruct software development using computer science as a basis and practical experience as a guide.

      2. The State of Computing

        2.1 "We Can Do Better!"

        An engineer at a provider of telecommunications software (EBS) showed that their "time to market" was 5 to 9 times better than the US average. Further, he showed that the best profit is made when a new product is delivered quickly. This shows that one can produce software more quickly by changing the way things are done. [Olsen94]

        In January 1990 and Summer 1991 programming errors disrupted millions of telephone calls. The software was produced by a telecommunications company different from Olsen's, but one with a similar "quickly to market" philosophy. In April 1993 John Bischoff, the vice president of this company, described how this failure forced them to make fewer mistakes. By using techniques mentioned later, and by changing the company culture from "quickly to market" to "quality first", their backlog of problems and their problem report rate were reduced by two-thirds. [ Footnote 1 ] This shows that one can also change software production to emphasize reliability rather than time-to-market.

        You don't need a disaster to attempt to do software differently! But first you must admit that you have a problem:

          " 'Problems? We don't have problems here. We don't need principles or process or tools. All we need is for you to find a way to make your people work harder and with more devotion to the company.' " [Royce92]
        Royce's problems - a project behind schedule, mired in bugs, and always 90% complete - are typical. [Racko95c] [Racko95d] Here is another kind of problem that may ring a few bells with experienced developers of software packages for the mass market:
          "Neither the customers nor [the] developers could see the real problem[...]we directed all efforts on customer requested twiddles[...] the situation was like swatting gnats in the dark: I hoped that if I could kill the right one the light would come on.[...]Management however remained convinced it had customer's problems well in hand. We had stacks and stacks of "real customer data" in the form of bug reports and enhancement requests. We were, after all, responding to hundreds of their requests with every update and release.

          Fate proved otherwise. [...]development was canceled due to customer rejection." [AllenCD95] (p83)

        The young Tony Hoare, 25 years ago, had a similar problem: his operating system was behind schedule and always 90% complete. [Hoare81] However, his bosses (1) told him (loudly) that he had problems, (2) made him accountable for solving them, and (3) left him room to fix them. He and his team analyzed their situation and found causes. They developed a process that could be controlled and improved [Hoare81]; compare [HenryBlasewitz92]. The operating system ended up being a quiet success with its users. This is a second example of turn-round.

        A third example of organizational turn-round is the shift in focus from technical skills to real customer needs inside WordPerfect Corporation in the early 1990's. [AllenCD95] This example is interesting because the company was not in crisis, and the change was started from below:

          "How do you change a successful organization? How do you improve development practice that developers, especially management, think are fine? How do you introduce change into an organization that not only thinks it is successful, but also believes things could not be better?" [AllenCD95] [ Footnote 2 ]

        A fourth example of getting a software process under control is the NASA/IBM Shuttle Software Project, 1970-1990. [MaddenRhone84] [SpectorGifford84] [Billingsetal94] This used IBM's Cleanroom method, which was used, 1987-1993, in seventeen industrial-size projects with documented savings. [Hausleretal94] Once under control the process was improved. The Software Engineering Institute's Capability Maturity Model codifies this idea [see MATURITY in my bibliography].

        In all the cases described so far the organization had to (1) recognize a symptom of a problem, (2) diagnose it, and (3) work out a plan to solve it. Diagnosing problems and making plans is a part of systems analysis.

        Therefore we need to become more skillful at analyzing software processes. This monograph analyzes a large number of practical and theoretical processes in the hope of (1) developing some idea of the problems to expect given a particular kind of process, and (2) developing a set of sub-processes that can be used to fix diagnosed problems.

        Even when all is going well it can be standard operating procedure to continually review the way things are done in an organization. This can be via a complex process like the above IBM Cleanroom approach, via Basili's Experience Factory [Basili95], or via something as simple as preparing a postmortem analysis (also known as a post-delivery review or a post-implementation audit) of all completed projects. [BradyDeMarco94] [Glass95] By extracting the "Best of Practice" later projects can be improved. [Glass94] [Glass95] Listing the things that were done wrong can make a postmortem document both well read and useful. [BradyDeMarco94] This fits nicely with Hoare's experiences (above). It also fits the final conclusion by Akira K Onoma & Tsuneo Yamaura based on work at Hitachi: "We learn some things from success, but a lot more from failure" (page 76 of [OnomaYamamura95]). Hitachi use a traditional sequence of analysis, design, code, and test. By applying industrial quality control models to software testing and debugging they reduced the number of faults found at customer sites from over 1000 in 1981 to less than 20 in 1994. They stress the need for measurement, estimation, and planning, plus a separate quality control team. They have a refreshing piece of advice that might account for the reduction in customer-discovered errors:

          "If you detect too many faults reconsider design regulations, procedures, and management policies. Never blame the programmer for the faults" [OnomaYamamura95]

        So, it looks as if organizations producing software can change for the better - tho' different organizations may have different ideas about what is better! Individuals can also improve themselves - by using a similar process: monitor, record, analyze, try out, change, ... . Watts Humphrey has formulated a Personal Software Process (PSP) that lets individuals and small groups

        1. note what they do,
        2. collect data,
        3. improve, and
        4. make accurate estimates of time and cost. [Humphrey94]
        Keeping records and journals of my activities as a programmer (1965..1975) helped me and inspired my search for better methods (1966..1995). Knuth also does this and has published the results. [Knuth89]
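
        To make the PSP loop concrete, here is a minimal sketch (my own illustration, not Humphrey's actual forms; the record fields and the estimating rule are assumptions) of a personal log that supports steps 1, 2, and 4 above:

            # Minimal personal-process log: record what you do, then use
            # past data to calibrate new estimates. Illustrative only.
            from dataclasses import dataclass

            @dataclass
            class TaskRecord:
                name: str
                estimated_minutes: int
                actual_minutes: int
                defects_found: int

            log = []

            def record(name, estimated, actual, defects):
                log.append(TaskRecord(name, estimated, actual, defects))

            def estimate(raw_guess_minutes):
                # Scale a raw guess by the historical actual/estimated ratio.
                if not log:
                    return raw_guess_minutes
                ratio = (sum(r.actual_minutes for r in log)
                         / sum(r.estimated_minutes for r in log))
                return round(raw_guess_minutes * ratio)

            record("parser rewrite", estimated=120, actual=190, defects=7)
            record("report module", estimated=60, actual=85, defects=3)
            print(estimate(100))  # history suggests ~153 minutes, not 100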

        2.2 "Look what we've done!"

        Software professionals have developed a new technology that includes development and personal productivity tools, graphics, database management systems, and end-user programming systems. Multi-user, multi-processor, and multi-programming systems are in wide use and connected together by local and wide area networks. We have many tools and techniques including: compiler-compilers and lexical analyzers, generic operating systems and environments, shells, editors, CASE (Computer-Aided Software Engineering), prototyping systems, Object-Oriented Programming Systems, Expert Systems, and Logic Programming. As De Grace & Stahl put it:
          "We, as craftsman, have already changed our world. As engineers, perhaps we can remake it." [DeGraceStahl90] (p 10).
        We can be proud of our solutions, but Robert De Millo (Director of the US National Science Foundation's Computer Research Division, 1989-91) thought:
          "at some point software engineers have to step back from what they're doing and look at their problems" [Gruman91]
        Worse:
          "There is an enormous disconnection between what academician and consultants think will revolutionize development and what actually works in the trenches. We have much to learn about what really works, what works a little, and what does not work at all." [Andriole95]
        This book takes a broad look at the theory and practice of Software Engineering. The method employed is not a formal scientific procedure. [Fentonetal94] [Kitchenhametal95] The intent is a systems analysis of the existing and proposed software development systems. In other words, I ask "What is here?" and "What is possible?". Once this is understood better we can look for problems and solutions. This chapter is about why we must ask these questions before trying to make more changes.

        2.3 "We're OK"

        Research [AlaviWetherbe91] confirms anecdotal evidence [letters to the IEEE Software magazine and Usenet newsgroups, 1993-94] that some people are happier with their own way of doing things. Others claim a right to practice their mastery using intelligence and inspiration. [Gelernter91] [Gelernter93] James Bach describes this philosophy as:
          "The 'cowboy' or 'big magic' model. In this view, gifted people create software through apparent magical means, with no particular guidance or support". [Bach95] (page 97)
        He notes that this is a pathological form of a desirable "heroism":
          "Heroism means going beyond the borders of the known world and returning with new knowledge or wealth[...]sustainable, healthy sort of heroism requires judgment to know how much commitment and risk is right for the situation. The movement towards process in our industry is an understandable reaction against pathological heroism: heroism for its own sake, in which over-commitment and uncontrolled risk-taking is the norm." [Bach95] (page 96)

        However, by the end of the 1990's, the above movement towards process ran into a counter-reformation under the banner of AGILE software development. Some forms of agile development seem to depend on inspiring a heroic response to an impossible situation; see [Highsmith99] for example.

        UNIX resulted from such heroism (see Ritchie & Thompson's Turing Award lectures, also [Garncarz94]). UNIX is, as a result, powerful and quirky. It has many bugs and loopholes. [GarfinkelWeiseStrassman94] So, in 1988, a worm got loose on the Internet and slowed or stopped thousands of computers. It exploited both accidental defects and deliberate loopholes in UNIX which had been knowingly left active. [McIlroy90] A recent set of tests showed that about a quarter of the UNIX utilities have avoidable catastrophic faults. [MillerFredricksenSo90] Similar problems are present in UNIX's rivals (for example [Landwehretal94]) as well.

        Such results have been predicted [Martin85] [Gerlernter89] for this way of developing software. Worse, similar processes can lead to vaporware projects that can last decades - for example Ted Nelson's Xanadu system - or cost hundreds of millions of dollars - for example the CONFIRM hotel reservation system [Oz94].

        Some organizations are only paying lip service to improving their methods while their practice is still a matter of quick bug fixes. [Hsia93] It is tempting for management to ignore problems or cover them up. [Oz94] Sheila Brady (a successful software manager) calls this denial. She has discussed with Tom DeMarco how and why it can occur. [BradyDeMarco94]

        Kraut & Streeter's study of software work in one large company suggest that senior software management's judgments of projects and products tend to be unrelated to the client's opinions. [KrautStreeter95]

        Robert Glass blames researchers and vendors for creating a "software crisis" to win funds for research or a market for their products. [Glass94b] Others blame the media for sensationalizing failures involving software [comp.software-eng on Usenet; also [Glass95]].

        2.4 "We have a Problem"

        However, much of the software I and others use, administer, and maintain is too imperfect for me to think that there is no room for progress, or that we should sit back and suffer. [Agre95]

        I am not alone. The $125 million canceled CONFIRM project [Oz94] is evidence that the state of the art may not be as rosy as Glass thinks. There is independent evidence that software is less reliable than hardware: fault-tolerant Tandem systems in one period reported 200 problems, of which more than 70% were definitely software faults while only 13% were definitely not software. [LeeIyer95] Similarly, Windows NT was delivered 18 months late, and the features of Windows 95 were once promised for delivery in 1993. [Schindler95] Richard Adhikari quotes a 1994 poll by the Software Productivity Group of MIS things-to-do lists. The following are listed by more than 50% of those surveyed:

        • Reduce cost of application development (72%)
        • Respond faster to requirements for changes (63%)
        • Involve end-users in development (59%)
        • Use low-cost, intuitive, GUI-based development tools (56%)
        • Train IS staff in new methods (53%) [Adhikari95] (Fig p35)
        In other words, most MIS managers want to change the way software is developed.

        Robert Glass claims [in Glass94b and Glass95] that there is no "software crisis" but a "research crisis". He suggests that the first step to resolve it is for software practitioners to get back their self-belief. However, I have seen little evidence (on the Usenet newsgroup comp.software-eng, in the IEEE and ACM magazines and journals, or in the trade papers) that practitioners are any less "can-do" than before. One of the distinguishing features of computer programmers is that they believe that they can solve problems by writing programs. They believe that their personal knowledge, skill, and creativity (heroism) can overcome any problem. It is a rare programmer that will admit that the "Best of Practice" is anything other than what they do already. As Leveson & Turner put it in their analysis of the Therac-25 accidents (1985-1987):

          "The mistakes that were made are not unique to this manufacturer but are, unfortunately, fairly common in other safety-critical systems. [...] It is still a common belief that any good engineer can build software, regardless of whether he or she is trained in state-of-the-art software-engineering procedures." [LevesonTurner93]

        Programming and self-belief are the only things that Microsoft (MS) seems to believe in. In MS, management is seen as a source of bugs that slows down the rate at which programmers can produce and test code. [Zachary93] [Pascal94] [CusumanoSelby95] [Keuffel95b] The self-belief of practitioners is confirmed by a project carried out on the Internet by Ted Lewis, which indicated that people in the software industry expect a revolutionary improvement in their abilities despite contrary opinions from researchers. [Lewis95]

        Now, Glass suggests that self-belief leads "some programmers to disdain research, as the more radical programmers argue that the researchers[...] are not worth listening to". [Glass95] He places this observation in 2020, but I heard it every week on Usenet in the 1980s and 1990s. Glass finishes his article: "a marriage of theory and practice was the surest way forward for the software industry". He does not mention any of the existing work that has been done by researchers and practitioners working together. Neither does he note that the most respected researchers - Hoare, Dijkstra, Parnas, etc. - were all practitioners before they were researchers.

        From the research side, David Lorge Parnas claims that even the most influential papers in the first seven International Conferences on Software Engineering have had no effect on practitioners. [Parnas95]

        The intent of this monograph is to report on and compare theoretical and practical ways of producing software and extract the "Best of Both." I don't promise a "one size fits all" method. There are too many different types of environment for software people for that! [Bond95] It will be an improvement if we know how to match ideas to situations. For example: we have a well known procedure for producing compilers for programming languages. It would be good to have more statements of the same form: "The best procedure for producing an XYZ for ABC is ...." [GlassVessey95] [Jackson94]

        2.5 The Software Traffic Jam

        As in Hoare's time, as in the Shuttle project in the 70's, and as with WordPerfect in the early 90's, much software is stuck in a rush hour "traffic jam". Problems are hitting the producers faster than they can service them. [Olsen93] [Olsen94] [Billingsetal94]

        For example, according to Larry O'Brian, a burst of late changes contributed to the Denver International Airport fiasco of 1994. [OBrian94] A reliable process needs to lower the rate at which mid-project changes perturb the process and/or raise the rate of handling them as they occur.

        One large customer (the USA Federal Government) published a report that certain software projects cost too much and were either late or never delivered. However, these were the worst projects. The percentage of troubled software is not published. [Glass94b] But my experience is that even when software is delivered on time and within budget it still needs fixing (cf [Chapman90] and page 90, column 2 of [OsterweilClarke92]). One operating system I administered in the late 1990s was delivered on a CDROM plus an additional CDROM full of fixes. Once installed and working, new defects appear and new features have to be added. Something similar happens with the systems on three different laptops and an iMac I've used since. [CollofelloBuck87] After buying schedulers that failed, CASE tools with bad performance, and compilers that crashed, one manager asked: "So, what do our customers think of us?" [Christensen94] This question (among others) is a key part of SQA (Software Quality Assurance); see the box in section 3 below.

        All software is changed ([Brittan80], pp128-129 of [Dasgupta91], p268 of [PlaiceWadge93], page 95 of [Pierce93], and [Billingsetal94]) - even the software on the Voyager probe about to leave the solar system! Often an apparently good program costs more to change than it should. [Genuchten91] So, badly done software means lost time and wasted effort. [Collinsetal94]

        Inexpensive hardware has let more people use computers, so bugs are now everybody's problem. [CSTB89] [BerzinsLuqi91] [LittlewoodStrigini92] [Matsumoto95] Much software is now too important to be improvised. [Lawson90] [Nissenbaum94] Bugs are becoming a matter of life and death:

          "Computer Technology is growing up. It is no longer an enchanting toy. Nor is it just another commodity. It now performs many functions on which people depend for their livelihood, and even their lives. The computer profession must also grow up. It must look beyond its fascination with technology, and accept with maturity its obligations to society." [McFarland91]

        The worst Allied casualties in the 1991 Gulf War could have been avoided if a small and known bug had been fixed. [LittlewoodStrigini92]

        2.6 Summary

        Software development does have problems, but they can be fixed. The next question is: How?

      . . . . . . . . . ( end of section 2. The State of Computing)

      3. Silver Bullets

        In this section I look at some proposed solutions to the problems of software.

        3.1 More Software!

        For programmers, the obvious way to solve any problem is to write a program. But Fred Brooks cast doubt on the existence of a software patch for the problems of software engineering. [Brooks87] He claimed that the remaining problems are difficult "in essence" and amenable only to hard work and creativity - not to magical, painless solutions. He described and rejected some proposed "silver bullets":
          New languages, Object Oriented Programming(OOP), Artificial Intelligence, Expert Systems, Automatic Programming and CASE, Graphics, Program proving, New environments, tools, and workstations.
        Ed Yourdon echoes Brooks's analysis. [Yourdon92] There have been three replies to Brooks. Cox [Cox90a] hyped object-oriented technology. (However, others [Racko95d] claim that process improvement must precede object-orientation.) Harel [Harel92] argued that executable formal graphic modeling will improve things. Robert May [May94] presents a philosophical analysis and argues for more support for (1) conceptual thinking, (2) incremental development, and (3) maintenance.

        Brooks did not write about changes in the way we think and organize things. He did not analyze the convergence of methods and techniques illustrated by recent research, for instance [HigaMorrisonetal92].

        Alan Davis's "Lemming Trails" (page 81 of [Davis93] ) are fads and fashions that they can only make a small improvements but can be disastrous ["Lemmingineering" in the IEEE Software Magazine V10n5(Sep 93)]. DeGrace & Stahl [DeGraceStahl9] refer to "Tool of the Week" and "Method of the Month". Parnas complains:

          "We are putting too much effort into selling methods that are not ready. If they were ready, we would not have to sell them. We should be putting our efforts into testing and improving our technology, not on high pressure sales." [Parnas95]

        Warren Keuffel notes how CASE was hyped as a "silver bullet" but turned out to be "a technology in search of an application". [Keuffel95a] Robert Glass castigates the marketing of untested methods under the banner of scientific research (cf [Fenton94] [Fentonetal94] [Glass94b] [Glass95]):

          "I think the misinformation chain starts with researchers, who advocate rather than evaluate new concepts. Vendors of products[...] elaborate that unfounded advocacy[...] managers bind practitioners to the misinformed practices[...]. [...] Usually those who cry "crisis" offer technology as the solution to the problem. In this case technology, to a large extent, was the problem." [Glass94a]
        So we have a problem that needs more than a technical solution. Thinking that all problems are technical is a common mistake:
          "The introduction of safety devices for steam engines was inhibited [...] but also by a narrow view of attempting to design a technological solution without looking at the social and organizational factors involved and the environment in which the device is used." See page 71 of [Leveson94]

          "[..]t he first step is not to by a C++ compiler or any other object-oriented language. If you are going to enter a rider into the multi-kilometer Tour de France bicycle race, you first need to make sure he is fit. " [Racko95d]

          "It was concluded that current CASE tools are ineffective because they treat the processes modeled in a naive and incomplete fashion."page 99 of [Krasneretal92]

          "Organizations treat reuse as a technology-acquisition problem instead of a technology-transition problem. [...] organizations fail to approach reuse as a business strategy." page 114 of [CardConnor94]

        3.2 More Method!

        The academic response to a problem is to work on the theory and ignore the practice - Robert Glass has pointed out why this happens [Glass94a] [Glass94b]. He notes (repeatedly) that much published research ignores the current software engineering system. According to Reuben Hersh this is similar to the development of the philosophy of science and the philosophy of mathematics: the theory is developed while ignoring what the scientists and mathematicians were actually doing! [Hersh95]

        To get better software engineering we must understand the current system better before rushing in with another quick theoretical or computerized patch [Botting89b] [EnrightWilkens93] [Chusho93] [BradacPerryVotta94] [Bandinelietal95] [Leveson94] (pages 69-70). In 1994 Alan Davis proposed two strategies for managers: (1) Get The Big Picture, and (2) Take the Path Less Traveled [Davis94]. The less traveled paths are proven ways to make small improvements with little risk and cost:


        1. Talk to your Customers/Clients,
        2. Build Prototypes,
        3. Do Trade-Off Analysis,
        4. Play with formal methods (see formal below),
        5. SQA (below),
        6. Salvage.

        The first three steps above were those adopted by Hoare and others at the start of this chapter to turn a failing project around [Hoare81]. This monograph fits Davis's list. It also "looks at the big picture" and tries to "salvage" useful technical and methodical ideas. It tries to fill in some of the gaps between the ideas. It develops a way to think about software and software processes that looks promising.

        3.3 Formal Methods


        (formal): meaning
        Net
          The word "formal" has emerged from the academic ghetto and by the middle 90's has entered the phase in which it is used as a buzz-phrase rather than its original meaning. Something similar once happened to words like "modular", "structured", and "object-oriented". There is a difference in the use of the word "formal" in research papers, and its use by programmers, and methodologists.

          DeGrace, for example, gives the name "formal method" to any combination of structured methods and a linear "waterfall" process [DeGraceStahl90]. To many programmers a "formal process" is any attempt to "tell the programmer what to do" and so a threat to individual artistic license. There is no knowledge of the creative possibilities of true mathematical thinking.

          I will use the phrases "documented process" or "formal process" to indicate a process that is expected to follow some plan or system that is on record. A "formal method" is a method that explicitly applies mathematical formulae and methods. Such methods can be applied with any degree of rigor. I think of this as a scale from elementary, through high school, to BA, BS, and MS level. Our discipline needs us to mix formally presented ideas with a creative spirit. Professional mathematicians work like this.

          So, from this position, a formal method can be used inside an undocumented process. Further, a rigorously documented process (the typical accredited CS BS degree, for example) can rely on natural language and so be informal!

          Notice: some formal methods have become part of the best of practice (and so lost the stigma of being a "formal method") and others are in the process of being made practical. Others are ignored. The classic "program proving" methods of the 60's, for example, are rarely used in practice. On the other hand, formal languages like Z, BNF, Ada, C, etc. are used in industry.
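
          As a small illustration (my own toy example, not one from the cited literature), a formal method can be as little as a precondition and postcondition stated in logic and then checked in code:

              # A formally specified integer square root (illustrative).
              #   precondition:  n >= 0
              #   postcondition: r*r <= n < (r+1)*(r+1)
              import math

              def isqrt_checked(n):
                  assert n >= 0, "precondition violated"
                  r = math.isqrt(n)
                  assert r * r <= n < (r + 1) * (r + 1), "postcondition violated"
                  return r

              print(isqrt_checked(10))  # 3, since 3*3 <= 10 < 4*4

          The formal part is the two logical assertions; the degree of rigor is a choice, from run-time checks like these up to a machine-checked proof.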


        (End of Net)

        3.4 Software Quality Assurance


        (SQA):
        Net
          Finding errors early saves money - a design defect found in the field can cost 80 to 90 times as much as finding it before coding (page 80 of [Kanetal94]). An informal computer-aided Quick Defect Analysis of 70KLOC uncovered 480 defects [HowdenWieand94]. Reviews, walkthroughs, and inspections are popular forms of SQA [DavisA94]. SQA means that all work is reviewed by other people. A walkthrough occurs when a producer presents his or her work to a group of colleagues. Fagan adapted engineering peer review into a complex scheme of inspections for specifications, designs, and code [Fagan76] [Russell91] [KnightMyers93]. In an inspection a group of peers and experts examines a document for defects without the authors (or their managers!) being present. Inspections can find 70% of the defects discovered in a project and so lead to a reduction in the total cost [MyersW90b] [Russell91]. Software Quality Assurance (SQA) or Validation and Verification (V&V) has become an important part of software engineering [BasiliRombach87], especially in large companies - for example IBM [Fagan76]. Chow has edited an excellent survey of SQA techniques [Chow84]. Calan describes traditional Quality Assurance in his handbook "The Quality System" [Calan89].
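
          To make the arithmetic concrete, here is a toy calculation (the numbers are illustrative assumptions based on the ratios quoted above, not data from the cited studies):

              # Suppose a project has 100 latent design defects, an early
              # fix costs 1 unit, a field fix costs 85 units, and
              # inspections catch 70% of defects before release.
              defects = 100
              early_cost, field_cost = 1, 85
              caught = 0.70

              cost_without_sqa = defects * field_cost
              cost_with_sqa = (defects * caught * early_cost
                               + defects * (1 - caught) * field_cost)
              print(cost_without_sqa, cost_with_sqa)  # 8500 vs 2620 units

          Even before counting the inspection's own cost, the imbalance between early and late fixes dominates the total.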

          SQA procedures can also ensure that concerns other than correctness are covered - for example security [Baskerville93], safety [Parnasetal90], performance, readability, standards, ...

          But SQA does not replace testing: SQA includes testing. The essence of SQA is to discover defects by examining the products, not test runs. The Cleanroom people recommend random tests that reflect the usage statistics in the field, since these give a better final Mean-Time-To-Failure (MTTF) than other tests plus a better predictor of the MTTF after release [Musa93] [Linger94].
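
          A minimal sketch of such usage-based random testing (my illustration; the operations and their weights are invented, not taken from the Cleanroom literature):

              import random

              # Hypothetical operational profile: how often users perform
              # each operation in the field, as fractions summing to 1.
              profile = {"open": 0.50, "edit": 0.30, "save": 0.15, "export": 0.05}

              def random_usage_test(length):
                  # Draw a test case: a sequence of operations weighted
                  # by their field usage frequencies.
                  ops = list(profile)
                  weights = [profile[op] for op in ops]
                  return random.choices(ops, weights=weights, k=length)

              # Each generated sequence exercises the system the way users
              # do, so observed failure rates predict field reliability.
              print(random_usage_test(6))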

          Finally, the SQA program must itself be assessed: (1) by seeing how well each step stops defects being passed on, and (2) by recording the reactions of customers to the product. Testing can become a way to assess the effectiveness of the SQA procedures, not an expensive way to find errors.

          Even Extreme Programming has rigorous (if exciting) SQA inside it: Pair Programming means that all code is instantly under review for clarity, correctness, and maintainability. Refactoring plus "Test First" specifications enable a correctly running program to be improved safely to meet other qualities.


        (End of Net)

      . . . . . . . . . ( end of section 3. Silver Bullets)

      4. A Profession?

        4.1 Introduction

        As a start to understanding the problem, consider software professionals. A mature area will have a complex of overlapping disciplines. They include managers, scientists, mathematicians, engineers, technicians, educators, and craftspeople.

        Applied mathematics can, without loss of generality, be treated for the moment as the overlapping parts of the other disciplines. Pure mathematics is not intended to be useful but will be examined in section 4.5.3. Consider, in turn, ideal or generic managers, artists, scientists, and engineers. A manager (ideally) works with the people, resources, and goals of a project, not the details. Managing a project involves the management of those who research (scientists) and those who design and create (engineers and artists). Management is part of the problem: see [Anon90] [Genuchten91] [Royce92], Pierre America's comment in [Lutz93], [Andriole94] [JonesC94] [Oz94].

        Management must remove all excuses for failure from the other professionals! They may need to sponsor and support the development of better processes and methods while developing wiser management techniques [Aoyama93] [Olsen93], pages 75-76 of [Kanetal94]. Meanwhile they need data - not advertising - on which techniques and technologies lead to better software: compare page 35 of [TichyHabermannPrechelt93]. Hence good management should include testing new ideas before promoting them [Kitchenhametal95]. When new ways are required they work best when they are owned by those who use them [Billingsetal94]. This is good management technique: replace control by communication. Compare [BradyDeMarco94] and the recent AGILE revolution.

        Hence management can inspire, structure, and encourage the experimentation that helps select better techniques. A prime mistake, however, is to treat the development of software like making a device on an assembly line (see page 41 of [Raccoon95]). Good design work is not done on an assembly line! We fully automated the assembly of software in the 1950's. It is the design of the assembly that is time consuming.

        Design can involve scientists, engineers, and artists. All are creative! They differ in their attitudes and habits. For example an art-object - say, a picture in a gallery - can be useless because its value depends on its uniqueness. Some scientific experiments seem equally useless, but their value is in their reproducibility and the meaning of the results to other scientists. Engineers are dedicated to designing and improving useful objects and systems.

        There is an important but often forgotten distinction between artists and the other sub-disciplines. An artist is deeply involved in the production of art objects. However, managers, mathematicians, scientists, and engineers do not normally produce anything but bits of paper! It is hard to see, from this point of view, what a manager contributes to the profit of a company... it is even harder to see how research and development has any value in the short term. It is vital to realize that engineering and science have long term payoffs rather than producing larger profits immediately. As James Horning points out, if we think that "widget engineers" make widgets, we start to measure their productivity by measuring and counting the widgets [Horning94].

        In fact, "bridge engineers" design and supervise others who produce the bridge. Study the Brunels in the UK, the Roeblings in the USA to see examples. This continues in Civil Engineering to this day. This is even more true in a mass market where three or four engineers can spend years designing a single product - and other people actually produce the millions of copies that are sold. Perhaps we should stop confusing the engineering of software with the production of software? Compare [Raccoon95].

        Unlike most artists (but see [ Footnote 3 ]), the engineer and scientist try to cope with complexity using mathematical techniques to express and solve problems. Engineers and scientists work with what is and what is believed. Both use an iterative process ending with results that match predictions. A difference between the scientist and the engineer is whether they change reality or theory ([Dasgupta91] p xvii, p8, pp353ff). The engineer intervenes in reality using a fixed theory (see p28 of [Potts93]), but the scientist modifies the theory to fit observations. The engineer needs reliable established laws ([Hoare93] p1 (Introduction), [Glass94b], [Leveson94] pp69-70). But a scientist must probe the unknown by seeking cases that do not fit the accepted theories [Popper57]. So a good scientific experiment is often unthinkable to an engineer (see level 0 in [Wichman92]). Heroic engineering - "going beyond the borders of the known world and returning with new knowledge or wealth" [Bach95] - is not the norm. Everyday engineering is more about the creative application of what is known than the exploration of unknown technology. In fact, it is engineering failure that leads to scientific breakthroughs [Petroski85]. So true scientists seek the unknown, but engineers avoid it.

        Further, a good piece of engineering can be a bad piece of science. The engineer is happy to have got something that works out of a project, and perhaps to have learned about other things that do not work: "...the Darlington experience was not an experiment: we did not gather data or make scientific observations. There was a job to be done as quickly as safety considerations would permit" ([Parnasetal94] page 958).

        Thus when an engineering team reports that they used a new method or technique and it worked for them, this is a significant piece of data to other engineers. It is not an acceptable scientific result unless the scientific method was followed. To be of scientific value it is necessary to plan at least two different approaches (treatments) plus carefully measured results and statistical tests [GlassVessey95] [Kitchenhametal95]. However, when two or more projects succeed by inventing a similar technique or method, this may carry considerable weight and inspire other engineers to try the same thing - even in the absence of "real experiments" or a logical theory. It appears that C++, Java, Perl, and other languages were spread by this kind of memetic process.

        As an example - two recent projects failed to use mathematics successfully unless the formulae were presented as tables [Levessonetal94] [Parnasetal94]. This does not prove tables superior to formulae. It does not make it more probable that tables are better. It just shows a possible truth. It suggests that it is unduly academic to develop methods that require engineers not to use tables. And it would be an overly scientific engineer who didn't try out similar notations on similar tasks. But, as C Michael Holloway pointed out, experience can only generate a possible truth; for a probable truth we need experiment, and for truth that is absolutely true in a limited context we need a logical theory. Absolute truth is the gift of the divine. All we get from human authorities is "opinion" [Holloway95]. Or as Watts Humphrey says: "In God we trust, all else bring data!" [PSP courses at CMU SEI, 1990's]
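
        As an illustration of the tabular style (a toy example of my own, not taken from the cited projects), the same decision can be written as a formula or laid out as rows that reviewers can check one at a time:

            # The same alarm decision written two ways (illustrative only).
            def alarm_formula(temp, pressure):
                return (temp > 150) or (temp > 100 and pressure > 50)

            # Condition table: (temperature test, pressure test, result).
            # The first matching row wins; the last row is the default.
            ALARM_TABLE = [
                (lambda t: t > 150, lambda p: True,   True),
                (lambda t: t > 100, lambda p: p > 50, True),
                (lambda t: True,    lambda p: True,   False),
            ]

            def alarm_table(temp, pressure):
                for t_test, p_test, result in ALARM_TABLE:
                    if t_test(temp) and p_test(pressure):
                        return result

            for t, p in [(160, 10), (120, 60), (90, 60)]:
                assert alarm_formula(t, p) == alarm_table(t, p)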

        On the other hand, a piece of research involving careful thought, mathematics, and scholarship, followed up by a set of experiments, and the development of some training programs and university courses may make scientific sense and be ignored by engineers. Here is a recent (randomly chosen) sample:

          "There is statistically significant experimental evidence (from studies of student programmers) that adhering to certain major tents of the RESOLVE discipline reduces software cost and improves it quality[...] but practical validation of the approach still needs to be obtained by careful empirical studies of additional and larger-scale programming efforts. These can not be carried out in a purely academic setting, but will require close collaboration with industrial partners."
        [Ogdenetal94]

        The problem is that RESOLVE is so complex that it takes many hours of study to grasp it. It also opposes several items of "conventional wisdom". There is no convincing experience suggesting it may improve an engineer's work. So an academic in this situation has a moral dilemma: they have convinced themselves that they have a better mouse-trap, and believe that there is a plague of mice outside their laboratory (indeed they see the mice in their lab equipment!). They have to either (1) let people suffer from mice without the benefit of better mouse traps or (2) advertise their mouse traps. These pressures, as well as economics, career, and ego, all tend to encourage the environment of strident advocacy that some complain about [Holloway, Glass, Fenton, etc.].

        The time-scales for an engineer and a scientist are different. An engineer expects to develop prototypes on the way to a product - "Plan to throw one away - you will anyway" [Brooks82]; compare with [Humphreyetal91] p91, col 3. A scientist is under pressure to "publish or perish." So the kind of experiment that a scientist can finish and publish quickly (a toy (easily described) problem, using students, ...) is not a convincing proof of a method for an engineer [Fentonetal94]. So pure science is most effectively used to produce fundamental results, not new products: "the most useful inventions are based on, or improved by, scientific knowledge. Invention produces products, techniques, and tools. Science produces the knowledge and ability to evaluate and improve our products, techniques, and tools. Inventors use science to build better inventions, to know that they are better, and to compare them to what we already have." [Leveson94] pp69-70.

        Hoare suggests that it costs more to make a product that fails than it does to develop new theories, test them by experiment and peer review, and then develop designs based on these theories ([Hoare93] p2). Engineers understand that it is even cheaper to use other people's theories. So, it is to the engineer's benefit that he or she does not have to develop the theories that then let them design a good product. Pure or fundamental research is by definition useless! An engineer may be capable of research but tends to feel that it is better done by others.

        Scientists also benefit when they are not pressured to produce practical or profitable results. Historically, new theories often come from ignoring profitable practice - for example, no 19th century medical doctor would have considered funding the research in physics that led to a valuable tool of 20th century medicine: X-rays. Again, the most fruitful areas for a scientist to work in are the areas of doubt and uncertainty, the frontiers of knowledge. However, these are precisely the areas that a good engineer tries to avoid when trying to find a reliable solution to a client's problem.

        Science is evaluated by scientists and practice is developed by engineers. Glass is wrong to imply that bad theories are those that are not immediately profitable [Glass94b]. The pragmatic value of a theory may develop over 5 to 10 years of industrial development [see the time-line from the NRC/CSTB 1995 report, Hamilton 95 page 80]. The immediate value is estimated by (1) the scientific method, (2) peer review, and (3) replication of the results by others.

        To Summarize:

        1. Art is anything you can get away with (and sell!). [ Footnote 4 ]
        2. Scientists experiment to improve science (and win grants!).
        3. Engineers use science to solve problems (and make money); compare [RoselBailes92].
        4. We need engineering and science as separate but communicating processes.

        4.2 Science vs Engineering and This Monograph

        In this book I am interested in re-engineering the way we do software, rather than developing a grand scientific theory: It will be enough to make some suggestions that have been shown to work and have a theoretical basis. The evidence comes from many sources and has different values. Some is "scientific" -- based on theory and experiment in a laboratory. Some is a report from industry on an experience -- which may or may not be repeatable elsewhere.
        (evidence): sources classified:
          I have tagged my citations with a type indicating the evidence provided. For example, "=THEORY" implies some kind of fairly rigorous reasoning, while "=EXPERIMENT" indicates some scientific quality but a distance from practical "=EXPERIENCE". EXPERIMENTs rigorously compare several effects under controlled circumstances - they indicate causality but may be removed from industrial practice (box girder bridges worked well in the laboratory but turned out to be significantly weaker when welded by the road makers, for example). CASE-STUDIES are undertaken in the engineer's world but compare several alternatives - they can not be as rigorous as laboratory work, but are useful ways for methods and techniques to be evaluated in practice.

          SURVEYs compare several different projects, methods, or publications - they can be useful ways to get a picture across many projects but they do not help to establish rigorous results. POLLS select a collection of people in the field and ask their opinions on a set of questions - these are rather like looking at a weather vane to see which way the wind is blowing. Much published work undergoes private and public REVIEW - where another worker in the same field comments on one or more pieces of work - papers, books, and even pieces of software. A result can also be tagged as THEORY - this means that it is established within a particular logical or mathematical universe. Within that universe's assumptions, it is guaranteed. But there is no evidence for thinking that the result's world is the world in which the engineer is working.

          However, a good theory is quite useful to good engineers - for example, recalling the Venturi principle of fluid flow could have saved MIT repeatedly having to repair the windows in two close-by towers when the wind blows. Finally there is ADVERTizing: here a particular method or technique is described and claims are made for it that are not substantiated by either rigorous testing or theorizing.

          For example, consider Structured Programming. It starts with Boehm & Jacopini's proof of the THEORY that GOTOs are not needed to write programs. A new language DEMOnstrates this theory quite well - Algol 60. A letter from Dijkstra publishes some anecdotal (and so weak) EXPERIENCE [Dijkstra68b] that GOTOs make software harder to understand. There is then a slew of THEORY published by computer scientists (hundreds of papers per year at its height) defining what Structured Programming was and what the definition implied. Soon structured programming is adopted by IBM as part of "Improved Programming Technology" (IPT) and the results are published as EXPERIENCE [McNurlin77]. There are some EXPERIMENTs published on which structures are best understood - by students [Green80]. Luckily there was not a large market for software tools in those days, but even so papers were written purely to ADVERTize some language, method, or tool and provided more rhetoric than usable data. The whole fuss died down with "structured" becoming a buzz-phrase - the magic ingredient in anything that was being sold.

          Finally "structure" was replaced by "object-oriented" with the whole process was repeated with more ADVERTs. The words "agile", "agent", and "aspect" may follow the cycle in the 2000's.


        The sub-disciplines of Management, Mathematics, Art, Science, and Engineering described in this subsection (4.1) are all developing in the software world. Mathematics and management are well established. Now I look at the Art, Science, and Engineering of software.

        4.3 The Art of Programming

        See [ Footnote 5 ] for the source of this phrase.

        People sell software with a disclaimer about its usefulness and value. In the same document they forbid copying ([Nissenbaum94] p78). Companies go to court to protect their "Look and Feel" [Gries91]. Therefore their software is an art-object. Worse, it is bad art - Mody points out that few programmers undergo the rigorous regime of practice, theory, and notation expected of a musician [Mody93]. The typical "hack first, revise later" programmer is not following the process advocated by real artists, e.g. [Stanley94]. Further, recent letters to the Communications of the ACM and on the RISKS Internet mailing list say: "I am convinced that the software crisis is a self-inflicted problem resulting from treating software design as a kind of artistic activity. Unfortunately we do not have enough talented artists. What we really need is to replace pseudo-artists by engineers and to treat programming as a normal branch of engineering." [Wagner90]

        " It is my contention that the vast majority of software defects are the product of people who lack understanding of what they are doing. These defects present a risk to the public ..." [Whitehouse91]

        A paper on safety critical software makes the same case: "Traditionally, Engineers have approached software as if it were an art form. Each programmer has been allowed to have his own style. Criticisms of software structure, clarity, and documentation were dismissed as 'matters of taste.'

        "In the past, engineers were rarely asked to examine a software product and certify that it would be trustworthy. Even in systems that were required to be trustworthy and reliable, software was often regarded as an unimportant component, not requiring special examination.

        "In recent years, however, manufacturers of a wide variety of equipment have been substituting computers controlled by software for a wide variety of more conventional products. We can no longer treat software as if it were trivial and unimportant.

        "In the older areas of engineering, safety critical components are inspected and reviewed to assure the design is consistent with the safety requirements.[...]

        "In safety-critical applications we must reject the 'software-as-art-form' approach." [Parnasetal90] p639.

        Typically a complex piece of software suffers about two failures for every thousand lines of code per year [PostonandBruen87]. By introducing "sound engineering practices" into the making of an operating system, Poston and Bruen claim that they reduced the failures reported by customers down to two (2) during the first year. Linger states that at unit testing traditional methods encounter 25 to 35 errors per thousand lines of code, but in rigorous Cleanroom projects only 2.3 errors are found per thousand lines of code [Linger94]. This improvement is reflected in a low number of errors occurring during use. Similar successes have been reported elsewhere [Endres93]. However, current teaching discourages better techniques [Rout92]. Further, students don't want to change from "code & test" ([AlaviWetherbe91] p91). Rick Rowe shows that the lack of tools and diagrammatic documentation forces programmers to rely on creativity and visualization. [ Footnote 6 ] But we cannot rely on artists alone to produce good software, so we must look at the other disciplines: sections 4.4 and 4.5 look at Science and Engineering as they appear in software projects.

        4.4 Computer Science

        Juris Hartmanis recently chaired a discussion of computer science [Hartmanis94b] based on his Turing Award Lecture [Hartmannis95a]. It is clear from this that there are many views as to what computer science is at the research level. Hartmanis sees computer science as a new kind of science in which the crucial testing of laws by experiment is replaced by the demonstration of crucial new developments. Another author dismisses any choice of "science" vs "engineering" [Wulf94]. The following notes are based on observing and contributing to the development of undergraduate computer science degrees in the late 1980's.

        Computer Science is a science of "what can be computed" [Denningetal89a]. It has standardized Bachelors and Masters programs [CSAB]. Computer Science was born from a discipline (mathematics) that was divided into pure and applied camps [Glass94b] [Jackson94]. It has now become a pure science. This is an essential and inevitable development [Hoare93]. Here is a cartoon of how it happened and why it continues: the purer graduates of Computer Science take advanced degrees and doctorates. The doctoral research has little observable effect on the practice of making software. The graduates of doctoral programs become teachers of computer science. So students are taught by people with little or out-of-date experience and a preference for pure research. The other Computer Science students (those who are weaker or prefer applying their skills) leave the academic world and then learn the methods used outside - be they good or bad. Methods developed for profit are not respectable topics of research. So there is little or no scientific evaluation of them [Glass94a] [Glass94b]. They are not studied by researchers and so are not improved or understood by them. Further, they are not taught to Computer Science students. The students graduate believing that such methods either (1) do not exist or (2) are erroneous and so need not be researched or taught. Some computer scientists even re-invent known methods. (After [Botting89b]; compare with page 11 of CSSyllabus 92, [JonesC95b].)

        The above positive feedback loop has purified Computer Science. The Computer Science Accreditation Board (CSAB) reinforces this process by specifically requiring that faculty have Ph.D.s in Computer Science. Students of Computing learn to write and debug small structured programs, data structures, operating system components, and compilers. [ Footnote 7 ] They learn to work from specifications and to assess algorithms against them [ACM-IEEECurriculum91] [Poole91]. They have minimal experience of standards, quality control procedures, disciplined design, or other professional techniques of an engineer [Petroski85] [WilliamsD95]. They may not have learned techniques that are missing or optional topics in computer science degrees: such as SQA, Structured Analysis and Design (SAD), Entity-Relationship-Attribute modeling (ERA), or applied mathematics. For years [ Footnote 8 ] practitioners have made comments like: "It's my hypothesis that [Electrical Engineers] write better software than people trained in general computer science. If you are an engineer, your formal education has taught you the process of building. It has instilled you with the need to think about design, structure, documentation and prototype." [Sharon91]

        "The 1988 action plan required that [...] software engineers use traditional engineering techniques such as prototypes and experimentation. " [Humphreyetal91], p19

        Computer Science is committed to producing Computer Science Ph.D.s, not engineers. Maurice Wilkes says: "I believe I am not alone in thinking that, as an education, computer science by itself fails to bring out the best in people." [Wilkes91]

        This may be too broad, but Herbert Simon argued, years ago, that it is wrong to teach engineering as a science [Simon69], and Denning has recently argued for a reorganized engineering curriculum [Denning93]. Potts notes that 25 years of computer science research has not changed practice very much [Potts93]. Fenton points out that some computer science theories are rushed into practice without being tested as engineering techniques [Fentonetal94]. Hartmanis stresses the value of demos and the absence of experiments to resolve computer science questions [Hartmanis94a]. The expertise of the scientist and the engineer are different ([RoselBailes92], page 61). John Carroll denies the use of science to design software ([Carroll91] p 166). So, as Computer Science has become better at producing scientists it has become worse at preparing people for a profession of developing good software [Rout92].

        4.5 Software Engineering

          4.5.1 Definitions

          A recent definition is
            "Software Engineering - the systemic [Sic] application of methods, tools and technical concepts to create complex, software-intensive systems that meet technical, economic and social objectives[...]"

          [FreemanGaudel91] The IEEE Glossary of Software Engineering in 1983 defined "software engineering" as "The systematic approach to the development, operation, maintenance, and retirement of software." [ANSI/IEEE87] A "systematic approach" can develop in the absence of science, but it remains a craft until it is based on science [ShawM89], [ShawM90], [DeGraceStahl90] p5, [Leveson94b]. By 1990 the IEEE definition became: "The application of a systematic, disciplined, quantifiable approach to the development, operation and maintenance of software; that is, the application of engineering to software." [ANSI/IEEE90]

          An engineer taking a degree in Computer Science notes the lack of Engineering, which he defines by:

          "Engineering is a philosophy that transcends professions. It is a systematic, methodical, discipline, deliberate way of approaching a problem. It is not merely a matter of mechanically following an outline of developments phases, dataflow diagrams and so on." [WilliamsD95]

          Helen Nissenbaum notes that bugs are so endemic using current methods that accountability is reduced ([Nissenbaum94] pp77-78). Mary Shaw argues that software engineering is in the same position as civil engineering in the eighteenth century or chemical engineering before 1887 [MyersW90] [ShawM8990]. Leveson demonstrates parallels between steam engineering in the 19th century and software engineering in the 20th century [Leveson94]. A CSTB report proposed steps to develop Software Engineering [CSTB89]. In 1993 the Computer Society of the IEEE started work on "Defining software engineering as a profession" [Buckley93]. Software production cannot be "Software Engineering" unless it includes engineering methods - the IEEE Computer Society's Software Engineering Standards committee is studying another possible redefinition of the area as: "That branch of engineering that is concerned with the development, operation, and maintenance of software." [Buckley93]

          I now argue that true software engineering will need (1) a wider scope than computer science(section 4.5.2), (2) a more formal and scientific basis than the arts(section 4.5.3), and (3) better notations(section 4.5.4).

          4.5.2 Scope

          The British Computer Society and the Institution of Electrical Engineers state "SE is not simply a more organized approach to programming" [BCS/IEE89].

          Programming and Computer Science start from a specification as in [Dijkstra89] [Denningetal89b] [ACM/IEEEcurriculum91]. Engineering does not:


            "It is characteristic of engineering that the problems which it undertakes are never clearly defined to begin with. It is the duty of a good engineer to elucidate the problem, not only to himself but to his customer. He must do this successfully at the beginning of the project. If he fails or makes a mistake at this stage, the true nature of the problem may come to light only on completion of the project. [...] "It is equally important to clarify the objectives - to formulate the criteria by which success of the project will be judged[. . .]qualified by some definition of an acceptable level of achievement, possibly in the form of a priority ordering."

          [Hoare73]

          Engineering includes understanding an initially unclear situation and selecting the best thing to do [Dasgupta91] [RameshDhar92]. Searching for a solution is only part of engineering. There is more to software as well:

          " 'Either I have to learn enough about what the users do to be able to tell them what they want, or they have to learn enough about computers to tell me. ' " [WilliamsBegg93]

          "We need to focus[...] on concepts, models, and principles that capture the essence of [the domain...]"p30 [Zave93]

          "[...] The prototype is demonstrated to a group of customers. A customer remarks that many terms commonly used in his business are reported as spelling errors[...] The customer does not like this and wants it fixed." [BerzinsLuqiYehudai93]

          "The [users] provided an initial description of the new system's intended functions that was, at best, fuzzy. After we translated the requirements into a language the average software engineer could understand, ..." [Staringer94] pp61-62

          "For example, a designer of a facility to reply to E-Mail messages begins with an ill-defined, superficial notion of [how to solve problem]. As the design unfolds, the designer's understanding of the problem and potential solutions improves, and he refines and elaborates the problem definition until a satisfactory design emerges." [Henninger94] pp49-50

          One computer professional describes three complex but successful software projects and shows that they succeeded because they focussed on a "philosophy" for solving the problem that was independent of the technical aspects of the final solution [Lawson90]. Computer scientists once acknowledged the need for analysis [Strachey66]. Dijkstra described analysis as "the conception stage" in 1968:

          " The conception stage took a long time. During that period of time the concepts have been born in terms of which we sketched the system[...]. Furthermore, we learned the art of reasoning by which we could deduce from our requirements the way in which the processes should influence each other [. . .] Finally we learned the art of reasoning by which we could prove[...] " [Dijkstra68c] reprint CACM25 p51, col 1.

          Hoare's description of engineering shows that construction is based on designs that should work:

          "The next step is, in the light of the objectives and the relative priorities, to choose, or invent if necessary, a solution technique[. . .]

          "The consequences of the choice of technique should then be worked out in sufficient detail to be able to predict with reasonable confidence the characteristics of the final product, and it should be determined how well the product will meet its defined objectives. If confidence cannot be established, the previous steps must be repeated, until it is known that the target objectives are at least feasible. [Hoare73]

          Similarly, according to a civil engineer:

          "Engineering students are taught not only the principles of engineering, but also the discipline and management to go with it[...] This approach builds in quality control from the beginning so that failure upon completion is almost impossible[...]" [WilliamsD95]

          "The first phase culminates in a more-or-less precise specification of the product, which has been agreed with a customer or his representatives." [Hoare73]

          Clearly engineering starts long before the product is constructed. So should software engineering.

          The methods used in UK Government projects(SSADM and SDM) fit Hoare's description. SSADM(Structured Systems Analysis and Design Methodology) focuses on understanding the situation and making detailed graphical and tabular models that are quantified and proved feasible prior to constructing specifications of the programs [See SSADM in my bibliography]. SDM (Structured Design Method) derives a program structure from describing the specified data. Coding is a separate, often trivial stage [See DDD in my bibliography]. SSADM and SDM need to be studied to see what they can contribute to software engineering.

          4.5.3 Mathematics and Software Engineering

          According to H Zemanek: "We may not realize, but a blueprint is a formal definition of the object to be produced..." page 180 of [ShawB75]. Without science, engineering is a craft:

          "Engineers could not exist without mathematics and experimental science. Science deals in pure thought and experimental science is concerned with the laws of nature. In some branches of engineering the dependence is very clear. [...] It has long been accepted that the training of an engineer should include a serious study of mathematics and the relevant science, whatever use he or she may subsequently make of this learning. [...]"

          [Wilkes91]

          Engineering needs to be based on scientific and mathematical theory(page 35 of [TichyHabermannPrechelt93], [Hoare 93]). Disasters occur when engineers either don't have, forget, or ignore science and mathematics [Petroski85] [see ENGINEER in my bibliography]. Indeed one definition of software engineering is: "The disciplined application of engineering, scientific, and mathematical principles, methods, and tools to the economical production of quality software" [Humphrey89]. The link between formalism and software is so strong that: "Computer programming has invigorated the study of formal systems, not just because a proper formalization is a prerequisite of any implementation, but because good formalisms can be very effective in understanding and assisting the process of developing programs." [Backhouseetal89]

          Formulae can be manipulated in a verifiable way:

          "Software engineering without mathematical verification is no more than a buzzword." [Millsetal87]

          Or as Gries put it: "In Summary, Software engineering, Computing, and Computing education all suffer from a lack of basic mathematical skills that are needed in dealing with algorithmic concepts.[...]Let us all learn more about calculational techniques and gain skill with them[...]." [Gries91]
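
          To show the flavor of the calculational technique Gries advocates, here is a minimal worked step (my illustration, not taken from [Gries91]): to find the weakest precondition under which the assignment x := x+1 establishes x <= n, calculate
            \begin{align*}
              & wp(\texttt{x := x+1},\ x \le n) \\
            = & \quad \{\text{assignment: substitute } x+1 \text{ for } x\} \\
              & x + 1 \le n \\
            = & \quad \{\text{arithmetic}\} \\
              & x < n
            \end{align*}
          Each step is justified by a stated rule, so the formula is manipulated - and checked - like any other piece of algebra.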

          The IEEE Computer Society appears to concur since in September 1990 they published special issues of "Computer Magazine", "Software Magazine", and "Transactions on Software Engineering" all dedicated to the need for, and effectiveness of, formal [ Footnote 9 ] methods [See FORMAL in my bibliography]. The need for formal methods is especially critical for software engineering. Indeed some computer scientists argue that programs are executable formulae [Dijkstra89]. Formal specifications have contributed to order-of-magnitude improvements in error rates and doubled productivity [Endres93]. Programming languages try to disguise the mathematical nature of software but are formal languages anyway [Guerevich87]. AI uses logic, Data Base Systems use the theory of relations [Codd], and even fuzzy systems are based on mathematical theories.

          In summary, so far: Software Engineers design formal objects to satisfy informal requirements [Wing90a] [BerzinsLuqi91](pages 1..3) [Dasgupta91] [Humphreyetal91] p13. Therefore formality is added somewhere in the software process; where it appears varies from person to person [Lammers86] [Luckhametal91]. Amateurs, artists, users, computer science freshers, and some programmers improvise programs at the keyboard ["hackers" in [Ince88b], "Old man Babbage" at the end of "Goedel Escher Bach" by Hofstadter]. Some use prototype source code as an executable specification for the final product [Hoare87]. Some programmers produce a diagram of the structure of the components of the program, and then translate it, by a series of steps, into code [See Chapter 1 for an analysis of SAD plus bibliography SAD in my bibliography]. Lawson argues that prior to design, a philosophy should be developed from the problem [Lawson90]. Nielsen is blunt: "do as much as possible before design is started" [Nielsen92].

          Some people start from a formal logical specification [Meyer86] [Dijkstra89] [Luckhametal91] [RomanGambleBall93] p 281. For others the first step is to work out a formal specification [Endres93] [KayReed93]. Research is progressing on ways to create formal specifications [Berzins89] [Milietal89] [BerzinsLuqiYehudai93]; see also FORMAL in my bibliography. Abbot suggested using a natural language problem description as the source of information that can be formalized as data types, objects, and procedures [Abbot83]. Others start from an informal description of the problem solution and abstract objects, data, and procedures from this [Booch91] [TsaiWiegertJang92]. Compiler writers, Jackson, Warnier, and Orr start by making a formal description of the data (Data Directed Design [ DDD ]). Related to this are methods that concentrate on Requirements Engineering such as SADT, SREM, SAM* etc. [RzepkaOhno85] [DavisA90] [HsiaDavisKung93]. Object technology has moved from programming, to design, towards requirements, and into analysis [Kramer93] [Embleyetal95].

          Dijkstra, Hoare, and many others show that program correctness is always relative to a formal specification. Others are working on deriving the program automatically from a function oriented specification [MorganC90].
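
          A minimal example of this relativity (mine, not drawn from the citations above): let S be the one-line program r := 1. Then
            \[
              \{\, n = 0 \,\}\ S\ \{\, r = n! \,\} \quad\text{holds, while}\quad \{\, n \ge 0 \,\}\ S\ \{\, r = n! \,\} \quad\text{fails (try } n = 3\text{),}
            \]
          so the same code is correct against one specification and incorrect against another. "Correct" relates a program to a specification; it is never a property of the program alone.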

          "At the heart of the software problem lies a lack of an adequate means to express and manage well-structured specifications, efficient solutions, precise connections between them, and their evolution over time. This problem manifests itself as intellectual unmanageability and, in turn, uneconomical software development and evolution."page 11 of [Jčllig93]

          Yet specifications can be less than perfect [Poston85] [Collinsetal94] [KayReed93] p630. They may be ambiguous, incomplete, and over-precise [Schneideretal92]. They may also fail to fit all the "requirements": "In general the quality of a software product cannot exceed the quality of its requirements" [Humphreyetal91] p13. "[...]Formal techniques can be applied to verify whether a specification can be executed in such a manner that the requirements are satisfied." [GabrielianFranklin91] "[...] failure to find a proper abstraction in generating a specification is most often due to a lack of grasp of the requirements on the part of the specifier than it is to some intrinsic property of the specification[...]" [MiliBoudrigaMili89] p.119

          Failing to analyze requirements leads to problems [Golazetal90] [Lindstrom93]. Uncontrolled requirements changes cause errors [Billingsetal94]. Erroneous requirements cause disasters [LemosSauudAnderson92]. "Whenever software considerations were not integrated into system engineering early in the system-definition phase, the software specifications often were ambiguous, inconsistent, untestable, and subject to last minute changes. [...] The 1988 action plan required that software engineers be involved in the systems-engineering process [...] In those cases where software engineers were involved with system design, considerably fewer problems occurred and better products resulted" [Humphreyetal91] p13 and 19.

          Requirements are not arbitrary(p 20 of [Wing90a], [BerzinsLuqiYehudai93]). They don't exist in a vacuum [LieteFreeman91] [Chuso93]. "Software development usually begins with an attempt to recognize and understand the user's requirements[...] Software developers are always forced to make assumptions about the user's requirements[...] Often the user has an incomplete understanding[...]" [TsaiWiegertJang92] "Requirements imply something prescriptive. In early modeling, most of what is done is to analyze the problem area, so many pieces of information are more statements than requirements" [LindlandSundreSolvberg94] Beyer & Holtzblatt show that better software products emerge when the designers are put in direct contact with the customers and each other, rather than with surrogates such as a "Software Requirements Specification, a manager, or a systems analyst" [BeyerHoltzblatt94] [BeyerHoltzblatt95].

          Sometimes the current system is the problem:

          "Perhaps the most difficult problem of all is the incoherent, unprincipled, and ad hoc behavior of current [...] systems."p30 of [Zave93].

          Software professionals have rediscovered the need to analyze the current system [CurtisKellnerOver92] [Bernstein93]. Systems Analysis evolved from techniques used in World War II in America. The later history has recently been summarized [Keuffel90] [Baskerville93]. It helps bridge the gap between computer expertise and the expert's environment (p 25 of [MurphyBalke89]). It maps a "Real World Problem" into a "Problem Domain Representation" (Figure 1 of [MonarchiPuhr92]). Analysis starts by documenting the current system and searching for problems and opportunities [See SAD and OOA in my bibliography]. Further, as time goes on the current system contains more software. So recent research has looked at the systematic analysis of the current software - under names like reverse engineering, re-engineering, or reuse [search for REUSE and RE-ENGINEERING in my bibliography]. Tools that help a programmer analyze software are becoming more sophisticated [BrownP91].

          Another class of methods starts from modeling the world outside the software: Data Engineering [Shuey86], Logic Programming [SchnuppBernhard87], Dynamic Analysis and Design [Botting89] [Lang93], Essential Analysis and Real Time Structured Design [WardMellor85], Object Oriented Analysis/Design/Programming [HalbertO'Brien87] [Booch86], and Requirements Engineering using hypertext [Kaindle93]. Requirements are determined by the environment in which the software must be used: "The complexity software systems must handle is the root cause for uncertain requirements." [Bernstein93] "Programmers are always surrounded by complexity; we cannot avoid it[. . .]" [Hoare in ACM 86].

          "One of the nastiest challenges encountered in computer systems is coping with unusual events, particularly in situations that were not completely understood by the system developers." [Neuman91]

          The bottom line is how well a piece of software fits its environment - having "Requisite Variety" [RossAshby56]. Goodness of fit depends on how well the engineer modeled(understood) the environment and in turn how well the requirements and specifications fit the model [LieteFreeman91]. As Peter Drucker put it in a recent interview:

          "There is very little joy in heaven and earth over an engineering department that, with great zeal, great expertise, and with great diligence, produces drawings for the wrong product" [ Wired July/August 1993 page 83].

          Formal logic and mathematics help us get a handle on complexity. A logical model of the users' and clients' universe summarizes what is known about the software's environment. The problems and solutions can be shown to fit the environment. The solutions must match specifications. Lastly the software has to be correct with respect to the specification. The chain from environment to product can be no stronger than its weakest link [Petroski85]. To strengthen the chain, we need clear and verifiable links connecting the user and the code.
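
          One way to write this chain down (a sketch in the spirit of Zave's later work with Jackson, not a formula from the sources cited here): let E be the model of the environment, R the requirements, S the specification, and P the program. The product serves its users only when both links hold:
            \[
              E \wedge S \;\Rightarrow\; R \qquad\text{and}\qquad P \models S .
            \]
          If E is wrong, the first implication is vacuous no matter how rigorously P is verified against S - exactly Drucker's drawings for the wrong product.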

          4.5.4 Graphics and Engineering

          Gries stated: "The main activity that is supposed to separate engineering from other fields is design: the actual activity of preparing plans for some object[...]" (page 54 of [Gries91])

          Design is the thinking that distinguishes the professional from the amateur. As Capers Jones wrote: "Programming is an intense mental activity that requires some periods of quiet concentration without interruptions." [JonesC95] page 77, cf [LeonardDavis95]

          We know that an interrupted process needs to store its state; design is just such a process, therefore we need a way to store our designs. So one prerequisite is an effective way to represent designs [see bibliography for chapter 7 in Chapter 9]. Once the programmer worked from a flowchart, but High Level Languages and pseudocodes replaced the flowchart. In 1983 Ramsey, Atwood and Van Doren showed that using standard flowcharts led to significantly fuzzier designs than PDL [RamseyAtwoodVanDoren83]. Using design languages makes it easy to confuse the representation and the object.

          Now, Backus, Hopper, etc. invented "programming languages" to code known solutions: (1) formulae (FORTRAN) or (2) documented procedures (COBOL). Numerical analysis was a distinct process done before coding numerical problems [Jackson95]. Similarly work study, cybernetics, systems analysis, and/or operations research enabled efficient solutions to be discovered and then coded in COBOL. Nowadays programmers try to use a favorite programming language to solve the problem directly. Trying to return to this earlier style, Donald Knuth has created tools for "Literate Programming" [see LITERATE in my bibliography]. Others try to use non-sequential texts to describe software [BrownP91] [CybulskiReed92] [Kaindle93] [Nielsen92].

          In software the representation is the product. This distorts solutions to fit the programming language. Even layout can make a difference: experiments show that programs laid out in the traditional "structured" style are harder to understand than those organized like a book [OmanCook90] [BaeckerMarcus90]. Languages can even constrain specifications:

          "[...] programming languages tend to encourage overly restrictive specifications" page 15 of [Lamport86].

          Programmers unfamiliar with non-sequential mechanisms fail to find the best solutions to some problems [McIlroy86] [Hoare78] [Hoare79]. Kenneth E. Iverson argued that "Notation [is] a Tool of Thought" in his Turing Award lecture: "The importance of nomenclature, notation, and language as tools of thought has long been recognized. [...] Concerning language, George Boole in his 'Laws of Thought' asserted 'That Language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally accepted.'

          Mathematical notation provides perhaps the best-known and best-developed examples of language used consciously as a tool of thought.[...] 'The quantity of meaning compressed into small space by algebraic signs, is another circumstance that facilitates the reasonings we are accustomed to carry on by their aid.' [Charles Babbage]. [...]"

          [Iverson89]

          Iverson [ Footnote 10 ] argues for APL. Further, modern programming languages are complicated and "If our basic tool, the language in which we design and code our programs, is also complicated, the language becomes part of the problem rather than part of its solution." [ACM86]

          Yet even the best language may not be the best tool for finding and solving problems. It cannot guarantee that:

          • (1) the specification is correct,
          • (2) the design has a satisfactory performance,
          • (3) the design has a stable structure and solves the problem,
          • (4) the right problem was solved,
          • (5) the problem was correctly diagnosed,
          • (6) the situation is understood well enough to avoid surprises(= failures).
          Recent studies show that most requirements languages have a formal and a graphic component [TsePong91] [Hulletal91]. Careful experiments show that flowcharts are read more quickly and accurately than pseudocode [Scanlan89]. This surprises some computer people ([Scanlan89], also [Jones81]) but it shouldn't:

          "Engineers are used to communicating with each other by diagrams and sketches and, as soon as they saw diagrams being drawn on the face of a cathode ray tube, many of them jumped to the conclusion that the whole problem of using a computer in engineering design had been solved. [...] graphical communication in some form or other is of vital importance in engineering [. . .]" Maurice V. Wilkes's Turing Award Lecture, 1967 [ACM86].

          In 1972, Hoare (speaking to the IERE in London) said:

          "[..]a more-or-less precise specification of the product,[...]. For many engineers, architects, painters, or sculptors the specification may be a set of drawings.[...]

          "It is one of the most unfortunate aspects of computer programming that there is no intuitively acceptable method of summarizing the major external characteristics of a computer program by means of a 2-dimensional picture or a 3 dimensional model"

          [Hoare73].

          Compare this with the following interview between Karen A. Frankel(KAF) and Ivan E. Sutherland(IES) [1988 Turing Award lecture, Sutherland 89]:

          KAF: You also wrote [1966. . .]: "Have you ever tried to pick up somebody else's computer program and understand what it does? [...] I want to look at a display of a computer program and recognize that this part must go with that part just as I can match up the gears in a clock."

          IES: I think programming has been treated largely as an alphanumeric task, and most of the work in programming languages and most of the theory of programming and so on has been abstract and alphanumeric. People have simply not made pictures of programs very much, probably because nobody has any ideas as to what a program should look like. I think it might be interesting to do, but I don't have any suggestions as to what they look like either.

          Mary Shaw said "One of our problems is that our design notations don't lend themselves to exhibiting and sharing our designs" [MyersW90]; compare [ShawM89] and [CSTB89]. "Hard" engineers always use graphics but, as the above experts noted, "soft" engineers don't have a good standard graphic notation. To make the software process more like engineering, we need better graphics (see p371 of [MurphyBalke89]).

        . . . . . . . . . ( end of section 4.5 Software Engineering) <<Contents | End>>

        4.6 Programming as a Craft

        In the last decade of the 20th century a movement emerged from the underground and gained a great deal of attention under the banner of agile methods [See AGILE in my bibliography]. This essentially promoted a craft-like approach to developing software. The work is done by a team working with minimal paperwork and relying on thorough testing, programmer skill, peer review, and customer feedback to give the customer what they want. Typically the team iterates towards a moving target as the customer learns what they want and the team makes small changes to evolve the code towards what the user finds comfortable. However, there is no lack of discipline in most of these processes. In some cases (XP) the process is spelled out in great detail.

        What is shared is the idea of trusting the coding skills of the crafts-people. Indeed some [Martin03b] explicitly use the terminology of pre-industrial times: "craftsman", "apprentice", "journeyman", and "master" to indicate the way that a craft is inculcated in the beginner. There is an implicit (sometimes explicit) rejection of academic schooling and engineering-style documentation.

        These methods work well when the project can be handled in small increments and the user representatives are on hand to test and critique the prototypes. They can fail when the team becomes too big for the available communication technology to handle. Face-to-face communication is a key part of many agile methods. This, plus code constructed in the correct way, replaces traditional documentation. Thus much of the interface documentation in an XP project is in the form of executable unit tests and is written first, as in the sketch below. Similarly, changes to code are always reviewed (in XP), but they are reviewed as they are written, by a peer sitting beside the programmer (pair programming).
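
        As a concrete sketch of a test acting as interface documentation, here is a minimal Python example; the names (Invoice, add_line, total) are hypothetical, invented for illustration rather than taken from any XP source. In the test-first style the test class is written before Invoice exists:

          # test_invoice.py - a minimal sketch of XP's "test first" practice.
          # The test records how Invoice is meant to be called and what it
          # promises: it documents the interface in executable form.
          import unittest

          class Invoice:
              """The simplest code that makes the test pass (written second)."""
              def __init__(self):
                  self._lines = []

              def add_line(self, description, amount_cents):
                  self._lines.append((description, amount_cents))

              def total(self):
                  return sum(amount for _, amount in self._lines)

          class TestInvoice(unittest.TestCase):
              def test_total_sums_line_amounts(self):
                  inv = Invoice()
                  inv.add_line("widget", 250)
                  inv.add_line("gadget", 175)
                  self.assertEqual(inv.total(), 425)

          if __name__ == "__main__":
              unittest.main()

        The point is not the arithmetic but that the test, rather than a separate document, states the contract - and can be re-run after every change.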

        4.7 Summary: Art, Craft, Science, or Engineering?

        First, it can be dangerous to treat software as an art. This is especially true when the artist has had no formal training in art. There is an art to creating user interfaces and readable code but artistic programs are not necessarily safe, correct, or complete. Errors can occur because undisciplined creativity does not assure software quality.

        Second, engineering is not a science. Science is essential to engineering, but it is not engineering. You don't hire a physics graduate to design an X-ray machine. An engineer has a commitment to usefulness and a scientist to truth. Thus, a computer scientist should not be expected to engineer software.

        Third, when a scientist makes an error, the direct effect is to damage the scientist's reputation, but when an engineer makes an error other people are damaged [Petroski85] [McFarland90] [Shaw91] [Nissenbaum94] [Collinsetal94]. Therefore engineers must get more rigorous training [Ford91].

        Dijkstra speaks as a scientist when he talks of raising a "fire-wall" between the program and the world in which it exists [Dijkstra89] [Denningetal89b]. Building a firewall around a laboratory does not put out the fire outside the laboratory. Dijkstra is safe, but the users of software are not, because he avoids productivity software (Introduction to [Denningetal89b]). Software needs a fire-proof structure. Fire-proofing knowhow comes from studying fire.

        Twenty-five years of research has had a small impact on practice [Parnas95] because the research has been on the "downstream software activities" (page 82 of [MorrisonGeorge95]) and has been scientific rather than practical [Potts93]. Further, the model of science in many computer science papers is closer to that of mathematics than to an experimental science [Zelkowitz94].

        Problems in software arise in the space between computer science and the world. This is where "software engineering" should fill the gap [CSTB89] [Shaw91] [ChangC93] [Potts93]p18.

        Craft is needed in all software development. But it would be a mistake to rely on it if the project is large and life-threatening. I hire local painters to paint my house (or do it myself). I do not hire them to choose my color scheme or design and create a university. There is no question that some projects may need more than craft.

        Producing good software needs craft plus either luck or good art, good science, and good engineering. Good engineers reject luck and start from an explicit model of the problem area. We now need to develop software knowhow from (1) best practices and (2) best theories rather than trying to develop more pure theories. Therefore to re-engineer software engineering we must first model the current software process; compare p20 of [Potts93] [Hsia93]. As De Millo said: "You of the community have not set your sights high enough. You've been driven by short-term concerns of other [ non-NSF] funding agencies for a long time. You've been proposing solutions instead of basic problems." [Gruman91] Meanwhile some companies have already re-engineered the software process to work the way they want it to [Aoyama93].

        Improving the software process starts by stepping back from the day-to-day activity and developing a separate theory of what is going on - where it works, where it fails, and where it costs too much. Chapter 1 is a first step: critiquing as many documented processes and methods as possible to find things worth re-using.

      . . . . . . . . . ( end of section 4. A Profession?) <<Contents | End>>

      5. Notations

      Software engineers need a way to reuse thoughts as well as code: "Problem solving shares many characteristics of precious metals. It must be used judiciously, replaced by less expensive resources when possible, and recovered for further use whenever feasible." [Barnesetal91] "We need mechanisms to keep track of the myriad informal notes, drafts, and doodles that act as a shared memory for the team of analysts and customer representatives." (p23 of [Potts93])

      There is a need for a computerized notation for specifications and systems. We need to store, retrieve, link, use, and modify facts about software, systems, and data inexpensively but without loss of universality, clarity, or accuracy [Gotterbarn93] [Bellinzonaetal95] [Chauvet95] [Racko95c]. The quantity of data is at first sight daunting - one experimental but working system developed 20 pages of printout with dozens of variables, assumptions, and theorems in them [Jullig93]. Algebraic notation will reduce the volume, as the example below suggests. Computers can aid in retrieval, editing, display, and storage of this data.
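
      As a tiny illustration of how algebraic notation compresses such data (a textbook example, not taken from [Jullig93]), three equations specify the essential behavior of a stack:
        \[
          pop(push(s, x)) = s, \qquad top(push(s, x)) = x, \qquad isEmpty(new) = true .
        \]
      Each equation replaces a paragraph of prose, and, being formal, it can be stored, indexed, and checked by machine - exactly the economy argued for above.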

      Our notation needs a clearly defined, unambiguous meaning. In 1987 a meeting compared a dozen methods for specification and design. It included experts on most methodologies. They chose to distinguish methods by how the meaning of their notations could be ascertained. Some methods used mathematics, some used words in a natural language, and some relied on a computer program to execute and interpret the notation. Most had a mixture of meanings. The methods fit in a triangle [Zave88]. We can place the triangle in a three-dimensional space because data can be represented in four formats - natural, tabular, graphical, and mathematical.

      "Reviews of our document for correctness by users during development made clear that specifications should include graphic, symbolic, tabular and textual notation depending on the type of information being conveyed. [...] A language that contains only graphics or only tables or only symbolic strings is probably less useful than one in which different notational techniques are used to communicate different types of information." [...] "Although formal specification languages obviously have to be defined in an unambiguous and mathematical way, the syntax itself does not have to contain obscure mathematical symbols that are familiar and comfortable to neither the application expert nor the implementor of the system. There must simply be an unambiguous translation from the specification language[...] to the formal model [...]underlying it" [Levesonetal94] page 704

      6. Can One Size Fit All?

      Glass & Vessey have surveyed various "application domains" as a start towards developing special methods for each domain [GlassVessey95]. They state that there is a lot of work to be done before such domain specific methods can be developed. David Bond has recently pointed out what (in hindsight) is glaringly obvious: Different software projects may well demand quite different processes [Bond95] [Keuffel95b]. He distinguishes four different industries:
      1. Constrained Software: Government contracts and embedded systems.
      2. Internal Client Software: MIS,...
      3. Vertical Market.
      4. Mass Market.

      Bond argues convincingly that it is profitable for people locked to a contract to use a step-by-step, checkable process that slowly but surely produces something that fits the contract. However, the mass market rewards software that is delivered rapidly with lots of features - even if they do not all work as expected or desired. Here there is no contract, just a disclaimer! In the typical MIS situation, the client is available but not able to express in technical terms what they really need - some form of risk management and a spiral approach would seem to be profitable. In the last case, Bond states that he does not know a good way of handling vertical markets - other than creating high-powered teams of experts and specialists who work, as a team, closely with the customers [compare with "virtual corporations" in [Olsen94]].

      David N Card, in an editorial [CardDN95], suggests that the number of potential customers and suppliers together determine the kind of strategy that makes the most profit in the long run.

      Neil Olsen takes the argument further [Olsen94]. He analyses the economics of software production for his own situation: a vertical market with few suppliers and few customers: telecommunications. He shows that lowering time-to-market, and the response time to changes in general, is a key factor in increasing profit. He gives examples of the kinds of tools, techniques, and organizations that can speed things up 5 times. The result is a pile of overtime [Olsen93], overrun budgets, less emphasis on quality in the rush to market, and bigger profits (page 38 of [Olsen94]). Notice that this explains the situation that brought part of New York's telephone system down (Section 2.1 above), and why a different company moved in the opposite direction.

      The most general form of the argument in favor of rapid development is the recently developed economic theory of network effects and first movers [Farrell95] [ShurmerLea95] [Smoot95] [Warren-Boultonetal95]. A good or service is said to have a network effect (or externality) when its value to the customer increases as more and more people buy the same good or service. For example, a telephone becomes more useful as more and more people subscribe to the phone network. These effects seem to apply to much software - the main mitigating factor being when the software gives an entity a competitive edge not available to its competitors. The first-mover is the first supplier of a new good or service. It is easily seen that when there are network effects the first-mover can win a significant advantage and even become an industry standard - even if later products are better. So there is an economic incentive to push out a barely adequate piece of software ahead of any competition rather than a delayed but better product. These theories have been championed by Brian Arthur under the rediscovered term "increasing returns": in real economic situations (not economics classes!) a positive feedback develops between the market and the product that tends to lock in a particular form whether it is the best or not. However, counterbalancing this in the long run are (1) the cost of porting a bad product, (2) anti-monopoly legislation, and (3) possible legal redress by the customer.
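
      The telephone example can be made arithmetic (my illustration, not taken from the sources cited): if each pair of subscribers is a potential connection, a network of n subscribers offers
        \[
          \binom{n}{2} = \frac{n(n-1)}{2}
        \]
      possible connections, so value grows roughly quadratically while costs grow roughly linearly. A first mover with 10,000 users defends about 50 million connections; a newcomer with 100 users offers about 5,000 - a gap that no modest quality advantage can close.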

      It is interesting to note that in many cases the first mover is not the long-term winner. Some examples are: Commodore Pet vs Apple II vs IBM PC, Xerox Star vs Apple Lisa/Macintosh vs Microsoft Windows, VisiCalc vs Lotus 1-2-3 vs Borland Quattro Pro, WordStar vs Microsoft Word &/or WordPerfect. The reason may lie in the economics of traditional software production [Warren-Boultonetal95]: the costs of developing software are very much larger than the costs of reproducing, marketing, and distributing it. Further, these costs are sunk - unrecoverable. In the rush to develop a product quickly there is no chance to produce code or experience that can contribute to future products either. Often a rapidly coded version has a less well-thought-out user interface. Further, high-speed development means that the resulting product is typically a mass of undocumented, unportable, and unmaintainable code. Now a successful product attracts customers who will demand changes, upgrades, and bug fixes. Further, in the long term new hardware becomes dominant. At that point the penalty for rapid initial development is paid: the cost of developing an effectively new and yet backward-compatible product. The large cost of evolution in this case (and natural organizational inertia and hubris, perhaps) gives newcomers an advantage that can be vigorously exploited.

      Notice that in the case when there are many suppliers and many customers we have a recipe for a highly competitive and dynamic environment like those studied by evolutionary biologists like Dawkins, Wilson, ... and students of complexity such as W Ross Ashby, Gordon Pask, Brian Arthur, and the Santa Fe Institute. The chances of making a profit and surviving depend on both one's own strategy and the mix of strategies competing with it. The behavior patterns show up as a mix of strategies (in the game-theoretical sense) across the population of suppliers. Such systems have the following properties: (1) A random selection tends towards one of a number of stable behavior mixes. (2) There can be many such patterns. (3) The patterns tend to lock in. (4) A locked-in pattern can break down and change to another one - but this is an unlikely event depending on a set of cooperative mutations occurring. (5) The system survives a change in the environment by a shift to a new (and not always predictable) mix of strategies.

      Given this background, the development of software markets at least makes some sense - market leaders, shifting dominant paradigms, and so on. However, there is as yet little empirical data to make such a model more precise and directly useful.

      A final point is worth noting: just because a process is profitable does not make it right. As an extreme example, addictive drugs are profitable but not acceptable to society as a whole. In engineering and consumer products, for example, the first standards in the USA (the Boiler Code and the UL symbol) were to ensure the safety of products, even at a cost to the producer [Smoot95]. So in safety-critical cases there tends to be a rigid contractual relationship set up between a single supplier and a single customer. Such contracts and standards must have an effect on the way software is produced. Perhaps a fuller model might combine economic models, Bond's model, and Card's model. It would add the ability of the customers to hold the suppliers to a contract or standard (and vice versa) as a third dimension.

      I do not expect a single method for all the above situations. That would be like finding the best way to sort data or the ideal computer architecture for all problems! However, we do know that a small set of instructions is quite capable of defining any sorting program - indeed any computation. So instead of looking for an ideal software engineering process, I will be looking for a set of operations that can help in any software process. The natural place to find candidate operations is in the existing methods and processes. I will be looking for a set of things that software engineers can do which help them handle the various difficulties in the job. A bonus would be finding a template, framework, or generic pattern that can be tuned to fit each particular environment.

      For more publications providing evidence that "One Size does not fit all" go to ONE.*SIZE to search my bibliography.

      7. Review

        7.1 Conclusions

        (0) Current software is not good enough. The way we produce software needs improving. "Good" may mean "faster development" or "more reliable". We should use engineering methods rather than pure science or art to do software.

        (1) Engineers try to improve systems, but scientists try to improve understanding. Computer Science cannot be Software Engineering. Artistic Programming is "soft engineering", not "Software Engineering".

        (2) Studying something is a prerequisite for improving it. So, analysis and design are a part of engineering. Computer Science does not include analysis.

        (3) At some point and in some notation or other during the software process, unclear ideas become formal. Engineering demands discipline and a set of standard (formal) notations for it to be safe.

        (4) To engineer an improvement in the software process means analyzing it first. Better notations and methods should follow from the analysis.

        7.2 Preview

        The facts and theories of the current software processes are like a scattered jigsaw puzzle! I found vital pieces in unexpected places. Some famous pieces (for example, software metrics [Fenton94]) turned out to be almost entirely blue sky. Chapter 1 presents popular methods, their problems, and what can be got from them. It outlines a missing piece. It is not a magic bullet: All that was needed was to connect Computer Science, tools, and proven techniques to give a generic software process that can be fitted to most situations. Chapters 2 through 7 demonstrate an experimental replacement for the missing piece. Chapter 8 proposes actions.

      . . . . . . . . . ( end of section 7. Review) <<Contents | End>>

      FOOTNOTES


      (Footnote 1): See the bibliography for this chapter in chapter 9 for references. Also subject RISKS in my bibliography.


      (Footnote 2): Was it worth it? I invested in the product shortly after the change and have used it ever since. Earlier versions were not worth the cost.


      (Footnote 3): This does not stop artists doing excellent mathematics, however: M C Escher, the perspectivists, etc. developed the mathematics they needed for their work. Mathematicians can be as creative and aesthetic as artists!


      (Footnote 4): From p132-136, "The Medium is the Massage," Marshall McLuhan & Quentin Fiore, Penguin Books, UK.


      (Footnote 5): Knuth is the arch-prophet of programming as an art [Knuth69]. "Literate Programming" proposed an explicit art form [see Van Wyk's Columns in Commun ACM 1986- 88] [Knuthetal89] [CordesBrown91]. \TeX had 800 changes done to it using literate programming [Knuth89]. But what artist keeps track of their errors?


      (Footnote 6): posted to Netnews comp.edu June 92 from rowe@netcom.com, quoted with permission.


      (Footnote 7): Adding a "large" programming project to a one-term course forces the students to spend more time debugging and less time understanding the problem, thinking, planning, etc. It reinforces the artistic approach [Rout92]. "Instructors are often unwitting perpetrators of pedagogical ineptitude because that was how it was done when they themselves were students in project-based CS classes" [Poole91].


      (Footnote 8): I first heard that "non-computer scientists make better software people" in 1971 and acted to change the program I was involved in at that time - with some success.


      (Footnote 9): I'd define 'formal' as using formulae and following rules - a logic or algebra or calculus. I do not mean rigid, because I include the creative and agile use of mathematical thinking as a vital component. The formulae and rules provide shorthand notation and shortcuts that work.


      (Footnote 10): For more on J (once APL) see the IBM Systems Journal 1991,Vol 30, No. 4. Pages 574-575 are quotes on notation.

    . . . . . . . . . ( end of section Why we need to Analyze Software Development) <<Contents | End>>

End