Fri Feb 15 15:15:52 PST 2013

    A Theory of Design

      Warning

      This is a PBI: Partly Baked Idea, or work in progress.

      The previous iteration is in [ design99.html ] (1999).

      Theory

    1. DESIGN::=
      Net{
        Design has to start from something. Here the starting point is described by a mnemonic acronym:
      1. |-PQRST.

        This is short for Purposes, Qualities, Realities, Technical things, and Systems.

        Here is a formal description:

      2. PQRST::=following
        Net
        1. S::=Sets, Universe of systems.

        2. P::@S=The set of Systems that do everything the client wants.

        3. Q::Sets,set of qualities, including cost, delivery time, reliability, etc,...

        4. R::@S=the Set of Systems with objects modeling the relevant part of the real world. This is one of the defining properties of a software system. Software systems can directly reflect the user's or client's domain. Indeed, most programming methods aim to make this happen in some way or other. Examples include systems that contain a data base that models the client's concepts, systems that process data according to the client's language, and systems that handle possible and/or significant dynamic patterns of events correctly.

        5. T:: @(@S>->@S), Techniques, tools, technology, teamwork leading to changes in systems

          Normally we have an existing system to start from:

        6. normal::@, indicates we have a system that already exists. If normal, s0::S=Current system.

        (End of Net)

        Design presumes that the PQRST are already known well enough to proceed. However, in the process of design any of PQRST may change. It is useful therefore to maintain a trace of the connection between conclusions and the PQRST that lead to them [Dasgupta91] [Gravell90]. The USA Department of Defense mandates this kind of traceability.
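
        As a concrete (if toy) illustration, here is a minimal Python sketch of the PQRST quintuple over a small finite universe of systems. All names and data below are invented for illustration and are not part of the theory.

          S = {"s0", "s1", "s2", "s3"}                  # universe of systems
          P = {"s2", "s3"}                              # systems doing all the client wants
          Q = {"cost", "delivery_time", "reliability"}  # qualities of interest
          R = {"s1", "s2", "s3"}                        # systems modeling the real world

          def refactor(systems):
              """A hypothetical technique: maps a set of systems to possible outcomes."""
              step = {"s0": "s1", "s1": "s2", "s2": "s3", "s3": "s3"}
              return {step[s] for s in systems}

          T = {refactor}                                # available techniques
          normal, s0 = True, "s0"                       # an existing system to start from

          print(refactor({s0}))                         # {'s1'}: one possible changed system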

        Notice that (in theory) P can be expressed as the intersection of a number of sets of systems meeting partial requirements --

      3. (1): P=&Π where
      4. Π::@@S=Set of partial purposes (use-cases or functions or stories).

        Normally the current system s0 satisfies some of the purposes of the system, but not all of them:

      5. (2): For some π :@Π ( s0 in &π ~ P ). Indeed for any system s:S, we can define
      6. π(s)::={ p:Π || s in p }.
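
        A small Python sketch of (1) and (2), with invented data: each partial purpose is represented by the set of systems that satisfy it, P is recovered as their intersection, and pi(s) collects the purposes a given system already meets.

          from functools import reduce

          # Each partial purpose (use-case / story) is the set of systems that satisfy it.
          login  = frozenset({"s0", "s1", "s2"})
          report = frozenset({"s1", "s2"})
          audit  = frozenset({"s2"})
          Pi = {login, report, audit}

          P = reduce(frozenset.intersection, Pi)        # (1): P = &Pi
          def pi(s):                                    # (2): pi(s) = { p in Pi | s in p }
              return {p for p in Pi if s in p}

          s0 = "s0"
          print(P)                                      # frozenset({'s2'})
          print(pi(s0))                                 # only the 'login' purpose
          print(s0 in P)                                # False: s0 meets some purposes, not all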

        The current system (s0) may also fail to have good enough qualities (Q).

        The Qualities (Q) express the non-functional requirements on the system. Typically they are expressed in terms of "ilities" and then need refinement into metrics and tests that measure the quality of the system. Also note that a system may have many significant qualities -- speed, cost, reliability, and security -- for example.

        Notice that since this is a model of systems rather than a model of designs, specifications, and implementations, it is not natural to model the relation (1st) implements (2nd) here. Instead, the sets of systems that are in P and/or R may be described by a separate universe of documentation, and the relation (1st) implements (2nd) connects systems with their documentation. Thus a hierarchy of refinements may exist between pieces of documentation, but not between systems. Loe Freijs has presented an interesting theory of this relation and the structure of designs [Freijs93].

        Notice that much of the published information on software development is about the Technical factors (T). These define the limitations on what the software development team can do to produce a better system. Here are some of the limitations that are subsumed under the T factors:
        Net

        1. (resources): tools, environments, languages, training, people, work places, ...
        2. (rules): standards, architectures, methods, scripts, processes, procedures, ...

        (End of Net)

        My focus here is on the essence of the engineering ethic: making something more effective.

        Properties of the T's


        1. (above)|- (3): For t:T, t in @S->@S -- map sets of systems into new possibilities, with some unpredictability.

          So if t:T, then t(s0) describes the possible outcomes of applying the technique or technology t to the initial system.

          Applying a series of techniques

          And if t1 and t2 are two T techniques then
        2. t2(t1(s0)) is the result of applying t1 and then t2. So
        3. t2 o t1 is the combined technology. This can also be written
        4. t1; t2 by the way. In more mathematical terms we have a Semi-group of techniques.

          The Do-Nothing Option

          One T is doing nothing (Id(@S)). The T's available to us are a subset of all possible changes that might be made, and if we can carry out two or more changes then any sequence of them is also a valid change.

          So we can assume that we have a Monoid (definition linked below) of T's

        5. |- (6): (monoid)(T, o, Id(@S))==>(@S>->@S, o, Id(@S)).
        6. (6)|-Id(@S) in T.
        7. (6)|-for t1, t2:T then t1; t2 in T.
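
          A minimal Python sketch of this monoid of techniques, with invented, deterministic techniques: the composition of two techniques is again a technique, and the do-nothing map acts as the identity.

            def identity(systems):                    # Id(@S): the do-nothing technique
                return set(systems)

            def add_cache(systems):                   # hypothetical techniques on sets of systems
                return {s + "+cache" for s in systems}

            def add_tests(systems):
                return {s + "+tests" for s in systems}

            def then(t1, t2):                         # t1; t2 == t2 o t1: apply t1, then t2
                return lambda systems: t2(t1(systems))

            t = then(add_cache, add_tests)            # composition is again a technique: @S -> @S
            print(t({"s0"}))                                                # {'s0+cache+tests'}
            print(then(identity, add_cache)({"s0"}) == add_cache({"s0"}))   # True: Id is a unit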

          Monotonicity.


        8. (above)|- (7): For all t:T, A,B:@S, t(A) | t(B)==>t(A | B).
        9. (above)|- (8): For all t:T, A,B:@S, if A==>B then t(A)==>t(B).

          By the Knaster-Tarski Theorem [Knaster29]

        10. (above)|- (9): For all t:T, some F:@S, t(F)=F.
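
          On a finite universe the fixed point promised by (9) can be found by iterating a monotone technique until its result stops changing. The sketch below (Python, with an invented technique and invented systems) does exactly that.

            def closure_step(systems):                # a monotone, invented technique: keeps every
                new = set(systems)                    # system and adds the systems it enables
                if "s0" in new: new.add("s1")
                if "s1" in new: new.add("s2")
                return new

            A = {"s0"}                                # start from the current system
            while closure_step(A) != A:               # monotone + finite universe => terminates
                A = closure_step(A)
            print(A)                                  # {'s0', 's1', 's2'}: a fixed point, t(A) = A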

          Dangerous Techniques

          Certain operations annihilate the system that they operate on, in the sense that there are maps m:@S->@S such that for a given A:@S, m(A)={}. These are not useful techniques, and so we suppose they are not among the techniques we might apply:
        11. |- (10): For all t:T, not {} in post(t).

          Abbreviation

          Define an abbreviation for the special case of applying a change to a single system rather than a set of them:
        12. t::S>->@S=map[s:S](t({s})).

          Effect of T on P

          A technique can change the purposes that a system satisfies: For t:T, τ(t)::=rel[ π:@Π, π':@Π] (for all s:π (t(s) in π')).

        . . . . . . . . . ( end of section Properties of the T's) <<Contents | End>>

        Theory of Values

          The values are additive and associated with the qualities [Von Neumann & Morgenstern].

        1. v::(S|T)>->(Q>->Real)=value/costs of various systems and changes.

          Assume that costs and qualities accumulate as successive changes are made.

        2. linear::@= v in (monoid)(T,o,Id(@S))->(Q>->Real,+,0).

        3. V::=set of satisfactory values::@(Q>->Real).
        4. f::=objective function::(Q>->Real)>->Real. [Operations Research]

          A triple in S><T><S can model a particular change to the system, and can have a value associated with it by:

        5. For s1:S, t:T, s2:t(s1), v(s1,t,s2)::Q>->Real=v(s2)+v(t) - v(s1).
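
          A Python sketch of (5), with invented numbers: values and costs are vectors indexed by quality, and the value of a change (s1, t, s2) is the value of the new system plus the value/cost of the technique, minus the value of the old system.

            Q = ("cost", "speed", "reliability")

            v = {                                         # hypothetical values, one per quality
                "s1":       {"cost": -100.0, "speed": 5.0, "reliability": 0.90},
                "s2":       {"cost": -120.0, "speed": 9.0, "reliability": 0.95},
                "refactor": {"cost":  -15.0, "speed": 0.0, "reliability": 0.00},
            }

            def change_value(s1, t, s2):                  # v(s1,t,s2) = v(s2) + v(t) - v(s1)
                return {q: v[s2][q] + v[t][q] - v[s1][q] for q in Q}

            print(change_value("s1", "refactor", "s2"))
            # roughly {'cost': -35.0, 'speed': 4.0, 'reliability': 0.05}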

        6. conflict_free::@,
        7. |- (11): conflict_free=( for all x,y:Q>->Real, if f(x)>f(y) then for all q:Q ( x[q] >= y[q] ) ).

        8. ??{Arrow's theorem implies real projects will have conflicts}?

        9. ??{model of decision making see Cardenas-Garcia & Zelkowitz 91.}?

          Now, for example, if cost is the critical value then we might have s1 better than s2 if the cost is less (<). So, for any relation on the Real numbers, we have an induced relation between systems. For example, '<' is defined on the Reals, so '<' mod (v;f) holds between systems s1 and s2 whenever f(v(s1)) < f(v(s2)):

        10. For ρ:@(Real,Real), ρ::@(S,S)=ρ mod (v;f).


        11. (above)|- (12): For s1,s2:S, ρ:@(Real,Real), s1 ρ s2 = (s1.v;f) ρ (s2.v;f).

        12. G::=Global optima::@S={s:S || for all s1:S( s >= s1)}.

          The Traveling Salesperson Problem shows that even simply stated optimization problems are NP-hard, so finding an optimal design is, in general, computationally intractable. The Busy Beaver problem for Turing Machines is an optimization problem as well (longest running time), so some optimization problems are actually unsolvable.


        13. (above)|- (13): G={s:S || for all s1:S(f(v(s))>=f(v(s1)))}
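
          Given a concrete objective function f, (13) reduces to maximizing f over the value vectors. The following Python sketch uses an invented f and invented values.

            systems = {"s1", "s2", "s3"}
            v = {                                              # invented value vectors
                "s1": {"cost": -100.0, "reliability": 0.90},
                "s2": {"cost": -120.0, "reliability": 0.99},
                "s3": {"cost":  -80.0, "reliability": 0.70},
            }

            def f(values):                                     # an assumed objective function
                return 1000.0 * values["reliability"] + values["cost"]

            best = max(f(v[s]) for s in systems)
            G = {s for s in systems if f(v[s]) >= best}        # (13): the global optima
            print(G)                                           # {'s2'}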

          If we can treat S as a topological space with

        14. near::S->@@S, near(x)::=The neighborhoods of x,

        15. L::=Local_optima,
        16. Local_optima::@S={s:S || for some N:near(s), for all n:N (s>=n)}.
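
          A toy Python sketch of local optima: each system is given one or more neighborhoods, and a system is locally optimal if it scores at least as well as everything in some neighborhood of it. Scores and neighborhoods are invented.

            score = {"s1": 800, "s2": 870, "s3": 620, "s4": 860}   # f(v(s)), invented

            near = {                                   # near(x): the neighborhoods of x
                "s1": [{"s1", "s2"}],
                "s2": [{"s1", "s2", "s4"}],
                "s3": [{"s3", "s4"}],
                "s4": [{"s3", "s4"}],
            }

            local_optima = {
                s for s in score
                if any(all(score[s] >= score[n] for n in N) for N in near[s])
            }
            print(local_optima)     # {'s2', 's4'}: s4 is locally, but not globally, optimal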

          Simon's satisficing heuristic: a solution that is good enough now is better than the best solution never [Simon69] [Dasgupta91].

        17. F::=feasible systems::@S.


        18. |- (14): F==>&{R, P, &[ q:Q]{s1 || t:T(s1 in t(s0) & v(s0,t,s1)(q) in V(q))} }.

          Iterative Improvement or Evolution

        19. For s1:S, p:PerCent, safe_steps(s1,p)::={ t:T || for (p%) s2 in t(s1)( s2 in P & R & s1./<) }
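
          Iterative improvement can be sketched, very loosely, as a hill-climbing loop: keep applying techniques whose outcome stays in P and R and improves the chosen measure. The Python below only illustrates that loop with invented data; it is not a rendering of the safe_steps definition above.

            P = {"s1", "s2", "s3"}                     # functional systems (invented)
            R = {"s0", "s1", "s2", "s3"}               # realistic systems
            score = {"s0": 10, "s1": 20, "s2": 35, "s3": 30}

            def t_a(s): return {"s0": "s1", "s1": "s2", "s2": "s2", "s3": "s3"}[s]
            def t_b(s): return {"s0": "s3", "s1": "s1", "s2": "s2", "s3": "s3"}[s]
            techniques = [t_a, t_b]

            s = "s0"
            improved = True
            while improved:                            # evolve by small, safe, improving steps
                improved = False
                for t in techniques:
                    s2 = t(s)
                    if s2 in P and s2 in R and score[s2] > score[s]:
                        s, improved = s2, True
                        break
            print(s)                                   # 's2': where the evolution stops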

          Equilibrium Solutions

          A solution is stable with respect to a given quality if all changes can make it worse.

        20. stable_wrt(q)::= { s0:S || for all t:T, for some s1: t(s0) ( v(s0,t,s1)(q) < 0 ) }.

        By analogy with Pareto's economic analysis there is probably a system from which all changes decrease some quality:

      7. equilibria::= &[q:Q] stable_wrt(q).

        However some equilibria are not optimal.
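
        A toy Python check of the equilibrium idea, simplified to deterministic techniques: a system is stable with respect to a quality if every available change strictly decreases that quality, and the equilibria are the systems stable with respect to every quality. All data are invented.

          Q = ("cost", "speed")
          v = {                                      # invented quality values per system
              "a": {"cost": -40, "speed": 9},
              "b": {"cost": -50, "speed": 6},
              "c": {"cost": -60, "speed": 7},
          }
          techniques = [                             # deterministic, invented changes
              {"a": "b", "b": "c", "c": "a"},
              {"a": "c", "b": "a", "c": "b"},
          ]

          def stable_wrt(q):                         # every change strictly decreases quality q
              return {s for s in v
                      if all(v[t[s]][q] - v[s][q] < 0 for t in techniques)}

          equilibria = set(v)
          for q in Q:                                # & [q:Q] stable_wrt(q)
              equilibria &= stable_wrt(q)
          print(equilibria)                          # {'a'}: every change makes 'a' worse on both qualities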

        Another attempt to optimize complex systems leads to the paradoxical trap of sub-optimization (from Russell Ackoff's work). In sub-optimization the problem is decomposed into parts and each part is optimized separately. Although this is often successful, there is no reason to expect the whole to be optimal as well. ...

      . . . . . . . . . ( end of section Theory of Values) <<Contents | End>>

      Structure

        The structure of a design can be modeled as a graph where each node is a component and the arcs connect parts to wholes. Each node has a type indicating the form of decomposition. In software the following types of nodes give the simplest description of the product. Suppose N is decomposed into a list n of parts; then one of the following can always be made to be true:
        1. Sequence/Composition N=! n or N= ; n.
        2. Union N= | n
        3. Intersection N = & n
        4. Type N = Net{ n }

        Given that N satisfies P, we can often deduce the PQRs of the components (n).
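
        A small Python sketch of such a structure graph: each node is a component with a decomposition type and a list of parts. The component names and types below are invented.

          from dataclasses import dataclass, field
          from typing import List

          @dataclass
          class Node:
              name: str
              kind: str = "leaf"                    # "seq", "union", "inter", "net", or "leaf"
              parts: List["Node"] = field(default_factory=list)

          # A component N decomposed as a sequence, one part of which is itself a net.
          design = Node("billing_system", "seq", [
              Node("read_orders"),
              Node("price_orders", "net", [Node("rate_table"), Node("discount_rules")]),
              Node("print_invoices"),
          ])

          def show(n, depth=0):                     # walk the part-of arcs of the graph
              print("  " * depth + n.name + " [" + n.kind + "]")
              for p in n.parts:
                  show(p, depth + 1)

          show(design)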

        A design is structurally stable with respect to a factor if small changes in that factor leave the structure unchanged. Values of factors where this is not so are said to be on a 'catastrophe' [Thom, ...].

      1. ...

      . . . . . . . . . ( end of section Structure) <<Contents | End>>

      Proofs

      Proof of 8

      Proof of 9

      See the definition of a Monoid linked below.

      }=::DESIGN

    . . . . . . . . . ( end of section A Theory of Design) <<Contents | End>>

    Links

  1. monoid::=Monoid homomorphism.
  2. Monoid::= See http://cse.csusb.edu/dick/maths/math_33_Monoids.html.

( End of document ) <<Contents | Top>>