
Disclaimer. CSUSB and the CS Dept have no responsibility for the content of this page.

Copyright. Richard J. Botting ( Fri Jan 20 11:46:44 PST 2006 ). Permission is granted to quote and use this document as long as the source is acknowledged.


Contents


    Can One Size Fit All?

      Author

      Dr. Richard J Botting, Computer Science Dept, California State University, San Bernardino, California.

      Status

      Presented in the Computer Science Faculty Research series, Oct 10th 1995. Revised Feb 28th 1996. Corrections from js@cs.nmt.edu (John Shipman), Sun Mar 31 14:02 PST 1996 [ rjb95b.comment.1.txt ]

      Revised July 4th 1996 to include information on consumer electronics and also on CMM & the evolution of standards. [ rjb95b.comment.2.txt ]

      In January 2006 I had to change the above links to fit a change in the local server.

      Abstract

      Recent research suggests that producing software in different situations demands different processes. This means that a given method may be ignored not because the practitioners are ignorant or stupid but because it does not fit - implementing it would be an unnecessary risk. Classical and new economic theories show that a profitable strategy for one market is a losing strategy in another. Theories of evolving systems suggest that development processes adapt to fit their environment. We may need "bespoke software engineering", where researchers, consultants, and organizations are expected to evolve different development processes to fit different situations.

      Introduction

      In my recent reading (1995-96) I noticed a series of publications (see [ Bibliography ] ) that all say the same thing:

    1. Different ways of developing software are best for different situations.

      Another set of papers and articles imply that the community of scholars researching software engineering is failing to communicate its discoveries effectively to the people doing the work [ Parnas 95] . Robert Glass states that academics are proposing theoretical methods that have not been tested in practice. He claims they cannot work. He advocates instead empirical research based on what can be tested in practical situations.

      My thesis is that these two patterns fit each other. So research and consultancy should start to focus on matching strategies to different kinds of situations. This means we must study and catalog the different kinds of situations in which software is developed. We can extract new methods from current practice. We also need new methods for looking at an existing software development system and finding ways to improve it.

      This paper is a quick tour thru the dynamic environment of software development in theory and practice.

      Application Domains

      Glass & Vessey have surveyed various "application domains" as a start towards developing special methods for each domain [ Glass & Vessey 95 ] . They state that there is a lot of work to be done before such domain-specific methods can be developed. David Bond has recently pointed out what (in hindsight) is glaringly obvious: different software projects may well demand quite different processes independently of the application domain anyway [ Bond 95 ] , [ Keuffel 95b] .

      Contracts and Users

      David Bond distinguishes four different industries:
      1. Constrained Software: Government contracts and embedded systems.
      2. Internal Client Software: MIS,...
      3. Vertical Market.
      4. Mass Market.

      He argues convincingly that it is profitable for people locked to a contract to use a step-by-step, checkable process that slowly but surely produces something that fits the contract. However the mass market rewards software that is delivered rapidly with lots of features - even if they do not all work as expected or desired. Here there is no contract, just a disclaimer! In the typical MIS situation, the client is available but not able to express in technical terms what they really need - some form of risk management and a spiral approach would seem to be profitable. In the last case, Bond states that he does not know a good way of handling vertical markets - other than creating high-powered teams of experts and specialists who work, as a team, closely with the customers. This should be compared with the idea of "virtual corporations" in [ Olsen 94] .

      In a sense Bond extends an idea proposed in the late 1970's by Manny Lehman [ Lehman 78 ] and [ Lehman 95] . Lehman suggested that there were two types of software: E-type and S-type. E-type software is produced because it satisfies some real-world need and so is forced to evolve as that reality changes. As an example, embedded code must fit the hardware it is placed in and must change if the hardware is changed. DP and MIS software must satisfy the organization and so must change as the organization changes. Finally, mass-marketed software is sold to users who buy it to satisfy some personal need. He points out that the vast majority of software is like this.

      Lehman's S-type software is an implementation of a piece of mathematics and is judged to be correct against this specification. These are mainly found in computer science departments and laboratories. For example a floating point package may be judged correct versus the IEEE standard for floating point, and yet the standard is also likely to change and so force the software to evolve. Even the standards governing system software - TCP/IP, Ada, POSIX, ... - all evolve.

      Suppliers and Consumers

      David N Card in an editorial [ Card D N 95 ] suggests that the number of potential customers and suppliers together determine the kind of strategy that makes the most profit in the long run:

       [Figure]

      Here, when there is little competition but many customers, the first product into the market has a key advantage.

      First to Market Wins

      Neil Olsen takes the argument further [ Olsen 94] . He analyses the economics of software production for a vertical market with few suppliers and few customers: telecommunications. For Neil Olsen, producing software means handling changes faster than they happen, without adding new changes at the same time. He models this using queuing theory: a software process has to handle an incoming queue of changes.
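
      As a minimal illustration of the queueing intuition (a textbook M/M/1 model with assumed arrival and service rates, not Olsen's own model), the Python sketch below shows that the backlog of pending changes stays small only while the process handles changes faster than they arrive, and blows up as the two rates approach each other.

        # Illustrative M/M/1 sketch (assumed model, not Olsen's): change requests
        # arrive at rate lam per week and the process resolves them at rate mu.
        # The backlog stays bounded only when mu > lam and explodes as lam -> mu.

        def expected_backlog(lam: float, mu: float) -> float:
            """Mean number of changes in the system for an M/M/1 queue."""
            if lam >= mu:
                return float("inf")      # the process falls ever further behind
            rho = lam / mu               # utilization of the development process
            return rho / (1.0 - rho)

        if __name__ == "__main__":
            mu = 10.0                    # changes handled per week (assumed figure)
            for lam in (5.0, 8.0, 9.0, 9.9, 10.0):
                print(f"arrivals {lam:4.1f}/week -> mean backlog {expected_backlog(lam, mu):.1f}")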

      He shows that lowering the time-to-market and raising the rate of response to changes in general is a key factor in increasing profit. He gives examples of the kinds of tools, techniques, and organizations that can speed things up five-fold. The result is a pile of overtime, overrun budgets, less emphasis on quality in the rush to market, and bigger profits (page 38 of [ Olsen 93 ] ).

      Nine years ago it was being argued in the trade papers that delivering software rapidly had advantages within the MIS-type environment [for example Richard Lefkon's article "Speeding Software Delivery", Computerworld (May 12 1986) pp97-108]. The idea of "Rapid Application Development" has since become a popular if controversial goal. It appears - I have no real data to prove this, however - that many practitioners want to reduce the time needed to develop software rather than change any other quality of the product - such as maintainability, readability, correctness, etc.

      However Steve McConnell [ McConnell 96b ] argues that there is evidence to show that software can be developed most rapidly by focusing on removing defects early. He claims that the fastest time to market occurs when approximately 95% of the defects in the software are detected and repaired before the product is released and the remaining 5% are found by customers. From this point of view there is a trade-off between the time spent finding defects early and the time wasted reworking defects that are found later. It is also intuitively obvious (but unchecked) that it takes more time to find defects as they are removed - a form of diminishing returns. Hence when the situation demands rapid development it pays to control quality up to a certain point and let the user discover a few remaining defects.
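
      The trade-off can be sketched with a toy model (the cost curves and figures below are assumed for illustration, not taken from McConnell): pre-release defect hunting shows diminishing returns, while every defect that escapes to a customer costs a fixed, larger amount to rework, so total rework effort has a minimum well short of 100% pre-release removal.

        # Illustrative only: assumed cost curves, not McConnell's data.
        # q = fraction of defects removed before release (0 < q < 1).
        # Pre-release detection shows diminishing returns (effort ~ -log(1 - q)),
        # while each escaped defect costs a fixed, larger amount to rework later.

        import math

        DEFECTS = 100        # defects injected during development (assumed)
        EARLY_UNIT = 2.0     # hours per "unit" of pre-release defect hunting (assumed)
        LATE_COST = 20.0     # hours to rework one defect found by a customer (assumed)

        def total_rework(q: float) -> float:
            early = EARLY_UNIT * DEFECTS * (-math.log(1.0 - q))   # diminishing returns
            late = LATE_COST * DEFECTS * (1.0 - q)                # escaped defects
            return early + late

        if __name__ == "__main__":
            for q in (0.50, 0.80, 0.90, 0.95, 0.99):
                print(f"remove {q:4.0%} before release -> total rework {total_rework(q):7.1f} hours")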

      It is obvious that many companies do need to develop software quickly. The most general explanation for this phenomenon is based on the recently developed economic theory of network effects and first movers [ Farrell 95 ] [ Shurmer & Lea 95 ] [ Smoot 95 ] [ Warren-Boulton et al 95] .

      A good or service is said to have a network effect (or externality) when its value to the customer increases as more and more people also buy the same good or service. For example, a telephone becomes more useful as more and more people also subscribe to the phone network. These effects seem to apply to much software - the main mitigating factor being when the software gives an entity a competitive edge not available to its competitors. The first mover is the first supplier of a new good or service. It is easily seen that when there are network effects the first mover can win a significant advantage and even become an industry standard - even if later products are better. So there is an economic incentive to push out a barely adequate piece of software ahead of any competition rather than a delayed but better product. Counterbalancing this, in the long run, are (1) the cost of porting a bad product, (2) anti-monopoly legislation, and (3) possible legal redress by the customer.

      These theories have been championed by Brian Arthur under the rediscovered term of "increasing returns". He argues that in real economic situations (not economics classes!) a positive feedback develops between the market and the product that tends to lock in a particular form, whether it is the best or not. [Complexity, Wired??] This leads to deep waters: see [ Complexity ] below.

      First to Market then Loses

      It is interesting to note that in many cases the first mover is not the long-term winner. Some examples are: Commodore PET vs Apple II vs IBM PC, Xerox Star vs Apple Lisa/Macintosh vs Microsoft Windows, VisiCalc vs Lotus 1-2-3 vs Borland Quattro Pro, WordStar vs Microsoft Word and/or WordPerfect. The reason may lie in the economics of traditional code-oriented software production [ Warren-Boulton et al 95] .

      Costs

      The costs of developing software are very much larger than the costs of reproducing, marketing and distributing it. There are detailed models relating measures of the size of a project to the amount of effort needed to develop the software [ Boehm 80b] .

      Some doubt has been cast on the simpler models where the logarithm of the cost is proportional to the logarithm of the size of the project. Further, the parameters in the models vary widely from one organization to the next [ Banker & Kemerer 89] .

      However the exact model does not matter for my analysis. What is important is that these costs are large and unrecoverable. In economic jargon I assume that the costs are sunk. This seems reasonable because in the rush to develop a product quickly there is no chance to produce code or experience that can contribute to future products. Now, a rapidly deployed product cannot have a good user interface, and its design is entangled with the code for the logic of the problem. Further, high-speed development means that the resulting product is typically a mass of undocumented, unportable, and unmaintainable code.

      Cost of Software Production

      I am now going to take a few tentative steps towards quantifying the argument that a quickly developed product may win the market, but lose in the long run. What follows is not quite traditional economic analysis but is inspired by it. For example, traditional economics uses similar graphs but with the X and Y axes interchanged.

      Suppose that it costs D to develop a new piece of software and r to reproduce and distribute a copy, then the total cost for n copies is

    2. C = D + r n.
    3. C::dollars=total cost of n copies,
    4. D::dollars=development costs,
    5. r::dollars=reproduction and distribution cost per copy,
    6. n::number=number of copies.
    7. dollars::=the real numbers accurate to two decimal places.
    8. number::=the integers greater than or equal to zero.

      The average cost per copy, c, is

    9. c = r + D/n.
    10. c::dollars=average cost per copy.

      Notice D is hundreds or thousands of times more than r. Further notice that the price has to be larger (but not too much larger) than this average cost for a profit to be made.

      [Figure]
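
      Definitions 2 to 10 can be exercised with a small numeric sketch (the figures for D and r below are assumed for illustration only): the average cost per copy starts near D and falls towards r as the number of copies grows.

        # Numeric sketch of definitions 2-10 with assumed figures:
        # development cost D dwarfs the per-copy reproduction cost r,
        # so the average cost c = r + D/n falls towards r as n grows.

        D = 500_000.00    # development cost in dollars (assumed)
        r = 5.00          # reproduction and distribution cost per copy (assumed)

        def total_cost(n: int) -> float:
            """C = D + r*n, definition 2."""
            return D + r * n

        def average_cost(n: int) -> float:
            """c = r + D/n, definition 9."""
            return r + D / n

        if __name__ == "__main__":
            for n in (1_000, 10_000, 100_000, 1_000_000):
                print(f"{n:>9} copies: total ${total_cost(n):>12,.2f}, average ${average_cost(n):>9,.2f} per copy")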

      Value to the User

      The situation for the potential buyer can be modeled by the value or utility of the piece of software - this should include tangible values, such as working faster, and intangible values, such as being fashionable, forward-thinking, etc. Clearly people will tend to buy if the price is less than this value. Since this value depends on how many other people buy the software, let it be a function of the number of copies sold, v(n).
    11. v::number->dollars=value of one copy when (_) are sold.

      In classical economic models v(n) was thought to be decreasing or constant; in Brian Arthur's models it is increasing. For software, the more people who buy a piece of software, the more value it has to each user. So the exact shape of the v(n) curve is not clear, but it is certainly a monotonically increasing function of n. There is a theory that a typical software product's sales curve is roughly a normal or Gaussian distribution: a tail of early adopters through to a tail of die-hard non-adopters. If this is so then the v(n) curve may be an S-shaped, error-function-like curve.

      We can plot the value curve on the previous graph - assuming that the minimal value, v(0), is greater than the cost of duplicating a disk:

      [Figure]
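
      A sketch of the comparison in the figure, with an assumed S-shaped (logistic) v(n) - definition 11 only requires v to be increasing - and the same assumed D and r as before; where v(n) rises above the average cost c(n), a copy is worth more to the buyer than it costs, on average, to supply.

        # Illustrative value-versus-cost sketch. The logistic v(n) is an assumed
        # S-shaped curve (definition 11 only says v is increasing); c(n) is
        # definition 9 with the same assumed D and r as before.

        import math

        D = 500_000.00                   # development cost in dollars (assumed)
        r = 5.00                         # reproduction cost per copy (assumed)
        V0, V_MAX = 20.0, 200.0          # value to the first buyer and at saturation (assumed)
        N_MID, SPREAD = 50_000, 10_000   # where and how steeply the S-curve rises (assumed)

        def c(n: int) -> float:
            return r + D / n

        def v(n: int) -> float:
            return V0 + (V_MAX - V0) / (1.0 + math.exp(-(n - N_MID) / SPREAD))

        if __name__ == "__main__":
            for n in (1_000, 10_000, 50_000, 100_000, 500_000):
                verdict = "worth buying" if v(n) > c(n) else "not yet"
                print(f"n={n:>7}: value ${v(n):7.2f}  average cost ${c(n):7.2f}  {verdict}")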

      How Can the First Mover Lose?

      The fact is that several first products have not become the long term winners. Here is a detailed theory that might explain why.

      First notice that as the number of users increases, so does the pressure on the producer to develop changed systems. If the chance of a single user requesting a change is p, then n independent users have a probability of 1-(1-p)^n of demanding a change. Suppose that the cost of making a fix is a constant f(D) for a given development cost D.

    12. p::probability=chance of a change being required by a user,
    13. f::dollars->dollars=cost of fixing given development cost of (_).
    14. probability::=The real numbers in the range 0 to 1 inclusive.

      The cost per unit adjusted for evolution, c', is

    15. c' = r + D / n + f(D)(1-(1-p)^n)/n.

      or

    16. c' = r + D / n + p f(D) + terms in p^2 and higher.

      For small p,

    17. c' = r + D / n + p f(D).

      In other words, a constant overhead is added to the reproduction and distribution costs because of the need to make changes (evolution):

    18. c' = r' + D'/n, where r' = r + p f(D) and D' = D.
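
      A quick numeric check of definitions 15 to 18 (all figures assumed for illustration): the exact evolution term f(D)(1-(1-p)^n)/n and the constant overhead p f(D) agree closely while n p stays small.

        # Numeric check of definitions 15-18 with assumed figures. The exact
        # evolution term f(D)*(1 - (1-p)**n)/n is compared with the constant
        # overhead p*f(D) used in the small-p approximation; the two agree
        # closely while n*p stays small.

        D = 500_000.00      # development cost in dollars (assumed)
        r = 5.00            # reproduction cost per copy (assumed)
        fD = 50_000.00      # cost of making one fix, f(D) (assumed)
        n = 1_000           # copies sold (assumed)

        def c_exact(p: float) -> float:
            return r + D / n + fD * (1.0 - (1.0 - p) ** n) / n

        def c_approx(p: float) -> float:
            return r + D / n + p * fD

        if __name__ == "__main__":
            for p in (0.00001, 0.0001, 0.001):
                print(f"p={p:7.5f}: exact c' = {c_exact(p):8.2f}, approx c' = {c_approx(p):8.2f}")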

      Now a successful product attracts customers who will demand changes, upgrades, and bug fixes.

      [Figure]

      Now a rapidly coded product may have a less well thought out user interface plus more bugs and misfeatures. So the perceptions of the buyers, as the word gets around, tend to reduce the value of the product. However this does not put off the early adopters - who are often more motivated by the desire for any new product than for a good one. So the v(n) vs n curve is unchanged at the left but falls on the right.

      [Figure]

      Notice that decreasing the development cost, even allowing for the long-term penalties, has increased the chances of the software being a runaway success. One starts to see why companies will hire the cheapest labor and work them as hard as possible to get a product out of the door before anyone else does.

      Further, in the long term new hardware becomes dominant. At that point the penalty for rapid initial development is paid: the cost of developing an effectively new and yet backward-compatible product is incurred. The cost of developing the new version will depend on the internal qualities of the original. Further, the value of the new version (which is almost certainly incompatible with the undocumented previous release) is lower, and if it starts to sell it also reduces the value of the previous one. If we assume that the process keeps no records or documentation and is focused only on code (a cheaper, faster process!) then we see that the costs of the second system are higher than the first one... especially if we allow for Brooks's observation that the value of a second system is lower because it merely fixes up the old one and has no innovations.

      [Figure]

      The large cost of evolution in this case (and natural organizational inertia and hubris, perhaps) gives newcomers an advantage that can be vigorously exploited. A recent proposal for a new kind of intellectual property law [ Davis R et al 96 ] stresses the vulnerability of software to the speedy appearance of cheaper products with equivalent behavior. The second mover has the advantage of 20-20 hindsight: it does not have to evolve a product that fits the clientele. There is already a successful, proven prototype and people trained to use it.

      Profit and the Common Good

      Notice: just because a process is profitable does not make it right. As an extreme example, addictive drugs are profitable, but not acceptable to society as a whole. In engineering and consumer products, for example, the first standards in the USA (the Boiler Code and the UL symbol) were created to ensure the safety of products, even at a cost to the producer [ Smoot 95 ] , cf [ Leveson 94] . So in safety-critical cases there tends to be a rigid contractual relationship set up between a single supplier and a single customer. Such contracts and standards must have an effect on the way software is produced.

      Perhaps a more complete taxonomy might combine economic models, Bond's model, and Card's model. It would add the ability of the customers to hold the suppliers to a contract (and vice versa) as a third dimension. A specification has a high contractual power if (1) the user is protected from errors in the product by the contract, and (2) the supplier is protected from the user changing their minds by the contract. A variation of this occurs when the software is embedded in a mass-marketed device where hundreds of thousands of items will be shipped in the first week of production. The cost of a recall or of handling thousands of returns due to a major software defect may well make the producers insist on a zero-defect strategy for the software in the product [ Roojimans et al 96 ] .

      [Figure]

      On the other hand it is now being argued, in ACM StandardView, that the dominant software product does not need any extra protection by standards or intellectual property rights. It already has several powerful advantages that are not present in more tangible products. See the items in the bibliography under [ Economic Pressures ] .

      Complexity

      Notice that in the case when there are many suppliers and many customers we have a recipe for a highly competitive and dynamic environment like those studied by evolutionary biologists [for example: Dawkins, Wilson, ...] and by students of complexity such as W Ross Ashby, Gordon Pask, Brian Arthur, and the Santa Fe Institute. Each supplier has a choice of strategies in the game-theoretical sense. Those strategies that lead to short-term success tend to be copied - they are, in Dawkins's language, memes. The chances of making a profit and surviving depend on both one's own strategy and the mix of strategies competing with you. Behavior patterns emerge as mixes of strategies across the population of suppliers. Such systems have the following properties: (1) A random pattern tends towards one of a small number of attractors. (2) There can be many such attractors. (3) The particular attractor is not always predictable, and within that attractor the behavior can appear random (chaotic). So, (4) certain strategies or cycles of strategic mixes tend to lock in. (5) A locked-in pattern can break down and change to another one - but this is an unlikely event depending on either a set of cooperative mutations or some change in the environment. (6) The system survives a change in the environment by a shift to a new (and not always predictable) mix of strategies.

      The strategies adopted involve selecting not just the functionality of the product but its overall qualities. In other words an early decision is made about what are the most important things: time to delivery, number of errors permitted at delivery, readability of code, modularity, size of team, RAM, CPU time, disk latency, etc.

      Given this background the development of software markets makes some sense: market leaders, shifting paradigms, and so on. However there is little empirical data, as yet, to make such a model directly useful for the manager or the researcher.

      Future Research

      We need to survey a representative sample of software suppliers to see if the factors mentioned here have the effects suggested, and to count the number of suppliers in each of the 8 cells in the final model.

      It would be good to nail the curves to the wall... get actual figures and see how they fit. Doing this will be an expensive proposition. It should result in more quantitative and predictive models. They may even be useful.

      It would be interesting to set up and simulate a complex software market with many suppliers along the lines of the "Prisoner's Dilemma" competitions (Dewdney? Hofstadter?). This plus some theoretical work might give a line on the stable strategy mixes that can exist and the circumstances in which they occur.
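
      As a starting point, the sketch below runs a minimal round-robin iterated Prisoner's Dilemma tournament with the standard textbook payoffs and three classic strategies; it is only a seed for the kind of simulation suggested here, not a model of any real software market.

        # Minimal round-robin iterated Prisoner's Dilemma tournament, in the
        # spirit of the Axelrod-style competitions mentioned above. Payoffs and
        # strategies are the standard textbook ones; this is not a model of any
        # real software market.

        PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                  ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

        def always_defect(mine, theirs):
            return "D"

        def always_cooperate(mine, theirs):
            return "C"

        def tit_for_tat(mine, theirs):
            return theirs[-1] if theirs else "C"

        STRATEGIES = {"AllD": always_defect, "AllC": always_cooperate,
                      "TitForTat": tit_for_tat}

        def play(strat_a, strat_b, rounds=200):
            hist_a, hist_b, score_a, score_b = [], [], 0, 0
            for _ in range(rounds):
                a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
                pa, pb = PAYOFF[(a, b)]
                hist_a.append(a); hist_b.append(b)
                score_a += pa; score_b += pb
            return score_a, score_b

        if __name__ == "__main__":
            names = list(STRATEGIES)
            totals = {name: 0 for name in names}
            for i, na in enumerate(names):
                for nb in names[i:]:
                    sa, sb = play(STRATEGIES[na], STRATEGIES[nb])
                    totals[na] += sa
                    if nb != na:
                        totals[nb] += sb
            for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
                print(f"{name:10} total score {score}")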

      Because a piece of software has many different desirable qualities, we cannot mandate a method on the basis that it improves "software quality" in general. We instead need to develop knowledge of how different technologies and methodologies affect the different qualities. This can be the basis for selecting and fitting methods to particular clients and situations.

      Conclusions

      For researchers and consultants in software engineering there is already a clear implication. We cannot succeed by selling "one size fits all" - even if it is first to the market! We need to relearn a very old lesson: you cannot tell people what to do without knowing what the situation really is. We need to develop or study analytical techniques that can be applied to complex systems that involve human beings and computers.

      Perhaps researchers and consultants in software engineering need to be trained in systems analysis, economics, and complex systems!

      Related Work

      Most of the ideas in this paper have been published before - see the bibliography. In particular the powerful pressure towards low-cost rapid development is well known. However I am not aware of any attempts to plot supply/demand curves for software before this one. Further, I believe that this is the first paper to point out in detail how the penalties of hasty development account for the observation that the original product does not remain dominant forever, but has sometimes been observed to vanish. The idea that the theory of complex adaptive systems can be applied to software markets is, as far as I know, original.

      To their credit, the SEI Capability Maturity Model and several other "Process Improvement" approaches do not stress the adoption of a particular life cycle or process [ Keuffel 95c ] . They do however insist that what is being done has to be documented and monitored before there is any hope of improvement. Neither do they (usually) assert that there is a single measure of the quality of a piece of software or of the process that produces it. What is important is that an organization has worked out a way of tracking the properties that it finds to be most important.

      Bibliography

        History


        (Brooks 95): Frederick P Brooks Jr, The Mythical Man-Month: Anniversary Edition, Addison-Wesley Publishing Co Inc 1995

        Split between research and practice


        (Parnas 95): David Lorge Parnas, On ICSE's "Most Influential" Papers, Speech at ICSE'17(95) & ACM SIGSOFT Software Engineering Notes V20n3(Jul 95)pp29-32 --

        Chaos


        (Bach 9?): James Bach, Process Evolution in a Mad World, Borland International Internal Report (100 Borland Way Scotts Valley CA 96066), [ Bach 95 ]


        (Bach 95): James Bach, Enough about process: What we need are heroes, IEEE Software Magazine V12n2(Mar 95)pp96-98+replies IEEE Software Magazine V12n3(May 95)p5-7


        (Olsen D 93): David Olsen, Exploiting Chaos: Cashing in on the Realities of Software Development,Van Nostrand Reinhold NY NY, ISBN 76.76 D47048 92 ISBN 0-442-011124

        Production under Pressure


        (Boddie 87): John Boddie, Crunch Mode, Yourdon Press/Prentice Hall NJ 1987


        (Olsen 93): Neil C Olsen, The Software Rush Hour, IEEE Software Magazine V10n5(Sep 93)pp29-37; correspondence Robert L Glass V11n1(Jan 94)pp6&7


        (Olsen 94): Neil C Olsen, Survival of the Fastest: Improving Service Velocity, IEEE Software Magazine V12n5(Sep 95)pp28-38


        (McConnell 96b): Steve McConnell, Software Quality at Top Speed, Software Development Magazine (Aug 96)pp39-42

        Economic Pressures


        (Davis R et al 96): Randall Davis & Pamela Samuelson & Mitchell Kapor & Jerome Reichman, A New View of Intellectual Property and Software, Commun ACM V39n3(Mar 96)pp21-30


        (Farrell 95): Joseph Farrell, Arguments for Weaker Intellectual Property Protection in Network Industries, StandardView V3n2(Jun 95)pp46-49


        (Leveson 94): Nancy G Leveson<leveson@cs.washington.edu>, High-Pressure Steam Engines and Computer Software, IEEE Computer Magazine V27n10(Oct 94)pp65-73


        (Roojimans et al 96): Jan Roojimans & Hans Aerts & Michiel van Genuchten, Software Quality in Consumer Electronic Products, IEEE Software Magazine V13n1(Jan 96)pp55-71.


        (Shurmer & Lea 95): Mark Shurmer & Gary Lea, Telecommunications Standards and Intellectual Property Rights: A Fundamental Dilemma?, StandardView V3n2(Jun 95)pp50-59


        (Smoot 95): Oliver R Smoot, Tension and Synergism Between Standards and Intellectual Property, StandardView V3n2(Jun 95)pp60-67


        (Warren-Boulton et al 95): Frederick R Warren-Boulton & Kenneth C Baseman & Glenn A Woroch, Economics of Intellectual Property Protection for Software: The Proper Role for Copyright, StandardView V3n2(Jun 95)pp68-78

        MicroSoft's Process


        (Cusumano & Selby 95): Michael Cusumano & Rick Selby, Microsoft: Rethinking the Process of Software Development, Simon & Schuster September 1995,


        (Keuffel 95c): Warren Keuffel, People-based Processes: a RADical Concept, Software Development Magazine(Nov 95)pp31+32+35+36..37 +letter Jan 96 p13
        (Maguire 94): Steve Maguire, Debugging the Development Process, Microsoft Press Redmond 1994, read [ Zachary 93 ] as an antidote.


        (McConnell 93): Steve McConnell, Code Complete: A Practical Handbook of Software Construction, MicroSoft Press Redmond WA 1993, ISBN 1-55615-484-4 BNB92-41059 QA76.6 D3 M39 1993


        (Pascal 94): G Zachary Pascal/G Pascal Zachary??, Showstopper! The Breakneck Race to Create Windows NT and the Next Generation at Microsoft, Free Press 1994 (ref from Keuffel 95b Software Development Magazine(Aug 95)), [ Zachary 93 ]


        (Zachary 93): G Pascal Zachary, Climbing the Peak: Agony and Ecstasy of 200 Code Writers Beget Windows NT, Wall Street Journal (May 26 93), review mentions it p106 IEEE Software magazine Sep 95


        (Sanders 95a): James L Sanders, Microsoft Releases a Work in Progress, IEEE Software Magazine V12n5(Sep 95)pp90-91

        Types of Software Production


        (Bond 95): David Bond, Project-Level Archetypes, Software Development Magazine(Jul 95)pp39-44+letter (Oct 95)p11


        (Card DN 95): David N Card<dnc@sps.com>, Is Timing Really Everything? (Guest Editor's Introduction), IEEE Software Magazine V12n5(Sep 95)pp19-22


        (Glass & Vessey 95): Robert L Glass & Iris Vessey, Contemporary Application Domain Taxonomies, IEEE Software Magazine V12n4(July 95)pp63-76


        (Lehman 78): Manny M Lehman(Imperial College London UK), Laws of Program Evolution - Rules and Tools for Programming Management, Proc Infotech State of the Art Conf "Why Software Projects Fail" (Apr 9-11 1978)pp11/1-25 UK
        (Lehman 95): Manny M Lehman(Imperial College London UK), FEAST - Feedback & Evolution & Software Technology, Software Process Newsletter n3(Spring 1995)pp3-6 IEEE CS TCSE Committee on Software Process

        Costs of Software Development


        (Banker & Kemerer 89): Rajiv D Banker & Srikani M Datar & Chris F Kemerer, Scale Economies in New Software Development, IEEE Trans Soft Eng VSE-15n10(Oct 89)pp1199-1205


        (Boehm 80b): Barry Boehm, Software Engineering Economics, Prentice Hall International 1980

      . . . . . . . . . ( end of section Bibliography) <<Contents | End>>

    . . . . . . . . . ( end of section Can One Size Fit All?) <<Contents | End>>

End