[CSUSB] >> [CompSci] >> [Dick Botting] >> [CS620 Course Materials] >> fixed
[Index] || [Contents] Tue Mar 25 12:07:52 PST 2003



    Notes on the Fixed Point Theory of Denotations

    Introduction

    This is an introduction to Dana Scott's theory of complete partially ordered sets (cpos)
    or domains. It is also known as fixed-point theory.

    A full theoretical treatment is elsewhere (DOMAINS, POSETS). The motivation for this theory is in the need to ground denotational_semantics in a sound piece of mathematics. Denotational semantics uses the "FIX" operator to avoid the recursive definitions found in operational semantics. A cpo or domain is a collection of objects where the FIX operator is guaranteed to produce a proper result.

    A fixed point is a value which does not change under some transformation. One simple example is the value 0 under the operation of multiplying by 2. If 0 is multiplied by 2 the result is 0. We say that 0 is a fixed point of the operation, or function, of multiplying by 2. A more advanced example is the exponential function (exp) in calculus. The exponential function is a function that does not change when differentiated:

  1. if y = exp(x) then dy/dx = exp(x).

    or

  2. if D is the operation of differentiation then D(exp)=exp.

    So exp is a fixed point under differentiation. Here are some physical examples. When you take a baseball and spin it round on some axis, the points on the axis are all fixed points. If you take a square tissue and lay it on a table, and then crumple it up and put it back where it was, then there will be at least one point in the tissue that is vertically above its old place. Crumpling is an operation that has fixed points!

    Quick exercise: When you look up at night and watch the northern sky for several hours, you will see a star that doesn't appear to move. What is it called?

    For any operator F:X->X we define

  3. fixed( F )::={ f:X || F(f) = f }.

    If you are still not sure of what a fixed point is, see [ Notes on familiar fixed points ] below.
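    The definition above can be tried out directly. The following is a minimal sketch of my own (not from the notes): it computes fixed(F) for small functions over a finite slice of the integers, using the doubling and squaring examples discussed earlier.

```python
# A sketch: computing fixed(F) ::= { f : X || F(f) = f }
# for functions over a small finite domain.

def fixed(F, X):
    """Return the set of fixed points of F over the finite domain X."""
    return {x for x in X if F(x) == x}

X = range(-3, 4)              # a small slice of the integers
double = lambda x: 2 * x      # 0 is its only fixed point
square = lambda x: x * x      # 0 and 1 are both fixed points

print(fixed(double, X))   # {0}
print(fixed(square, X))   # {0, 1}
```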

    Theory of Fixed Points

    Most books present the Scott/Plotkin theory, which starts from scratch by defining axioms and notations for objects in a Domain or cpo (complete partial order).

    I prefer to assume the whole of Set Theory and Discrete Math and build on that.

    Basic Model

    We are interested in operations that, when they terminate, change the State of our computer in a predictable way. So we are interested in a set of Partial Functions on our state. I will symbolize this set as Fun for short:
  4. Fun::@ (State<>->State).

    Now we can identify each f:Fun with the set of pairs (x, f(x)) for all x for which f(x) is defined:
  5. f = { (x, f(x)) || x:State, f(x) is defined }.

    I use (==>) for the "subset or equal" relation between sets, and so for f, g : Fun, f==>g means that g is defined wherever f is and has the same value. So g is just like f but more defined. It carries more information than f.

    Further the axioms for sets allow us to derive the properties of f==>g that most books assume:

  6. less_defined_than::=

    1. (set_theory) |- (ldef1): if f ==> g and f(s) is defined then so is g(s).
    2. (set_theory) |- (ldef2): if f ==> g and g(s) is not defined then f(s) is not defined either.
    3. (set_theory) |- (ldef3): if f ==> g and f(s) is not defined then g(s) may or may not be defined.
    4. (set_theory) |- (ldef4): if f ==> g and f(s) is defined then f(s)= g(s).
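    These properties can be checked concretely. Here is a sketch of my own (the dict representation is my assumption, not the notes'): a partial function f : State <>-> State is modelled as a Python dict mapping each state where f is defined to its value, and then f ==> g is simply "the pairs of f form a subset of the pairs of g".

```python
# Partial functions as dicts; f ==> g as subset of (state, value) pairs.

def less_defined(f, g):
    """f ==> g : g is defined wherever f is, with the same value."""
    return set(f.items()) <= set(g.items())

f = {1: 2}            # defined only at state 1
g = {1: 2, 3: 4}      # agrees with f, but defined at more states

assert less_defined(f, g)
# ldef1/ldef4: wherever f is defined, g is defined with the same value.
assert all(s in g and g[s] == f[s] for s in f)
# ldef3: f undefined at 3 says nothing; here g happens to be defined there.
assert 3 not in f and 3 in g
```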

  7. Poset Theory. Now standard set theory (set_theory) guarantees us that (==>) is reflexive, transitive and antisymmetric, and so (by definition of POSET) (==>) is a partial order:
  8. (above) |- (set_poset): (Fun, ==>) in Poset.

    Further we can assume that Fun contains the completely undefined function -- the worst possible computation that never produces any results:

  9. {} in Fun
  10. (set_theory) |- {} = { (x,f(x)) || false }.

    We will use this as the starting point when working out approximations to the function defined by a recursive definition. In textbooks it is symbolized by an upside down 'T' symbol called the bottom of the Poset, because it is less than all the others:
  11. For all f:Fun, {} ==> f.

    People who use the \TeX mathematical notation and typesetting program write this as \bot.

    The standard theory of partial orders(POSETS) defines the least upper bound (lub) operator in terms of the ordering as the least of all elements that are above a set:

  12. (POSET) |-
  13. For S:@Fun, lub( S )::= the least upper_bound(S).

    We are interested in a general model of what happens when a definition is applied many times. So we plan to demonstrate that there are certain kinds of operations on functions that generate increasingly better approximations to a particular function. The operation, in a sense, converges to a unique function. We can also show that this unique function is always a fixed point of the operation. We will therefore be interested in the properties of operators on functions. For historical reasons these are called functionals. We will show that some of these functionals, when repeatedly applied to a function, generate a series that tends toward a definite limit, and that this limit is a particular fixed point of the functional. It turns out that the functionals that turn up in denotational_semantics have this property.

    Functionals

    To be more precise, suppose we are interested in an operation F:Fun->Fun. Let
  14. Funfun::=Fun->Fun.

    In particular we will consider the Kleene series generated by F:Funfun.

  15. Kleene(F)::= { {}, F({}), F(F({})), F(F(F({}))), ... }

    For some of these operations F, we will show that the limit is the least fixed point of F:

  16. the lub(Kleene(F)) = the least fixed(F).
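    The Kleene series is easy to watch in action. Below is a hedged sketch of my own (the functional F and the factorial example are mine, not from the notes): F comes from the recursive definition fact(n) = 1 if n = 0 else n * fact(n - 1), restricted to states 0..N. Starting from the empty (bottom) function {}, each application of F defines the answer at one more state, so the iterates form a chain whose union is the least fixed point.

```python
# Kleene series for the functional induced by the factorial recursion,
# with partial functions modelled as dicts over the states 0..N.

N = 6

def F(g):
    """One unfolding of the recursion, defined only where g already is."""
    out = {0: 1}
    for n in range(1, N + 1):
        if n - 1 in g:
            out[n] = n * g[n - 1]
    return out

kleene = [{}]                     # the series {}, F({}), F(F({})), ...
while True:
    nxt = F(kleene[-1])
    if nxt == kleene[-1]:         # reached the fixed point
        break
    kleene.append(nxt)

lub = {}
for g in kleene:                  # the lub of a chain of partial
    lub.update(g)                 # functions is their union

assert lub == F(lub)              # lub(Kleene(F)) is a fixed point of F
print(lub)  # {0: 1, 1: 1, 2: 2, 3: 6, 4: 24, 5: 120, 6: 720}
```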

    There are some special kinds of functional (operation on functions). One of these is the set of strict functionals, which map the undefined function to itself:

  17. strict::@Funfun,
  18. |-For all F:strict, F({}) = {}.

    Another set of functionals are those that preserve the ordering of the functions to which they are applied. These are said to be monotonic:
  19. monotonic::@Funfun,
  20. |-For all F:monotonic, f, g:Fun, if f==>g then F(f) ==> F(g).

    A very important class of operation/functional never removes a defined value from a function to which it is applied:

  21. For all f:Fun, f ==> F(f)

    So when we have an operation on partial functions, the fixed point we will be most interested in will be the least one, and so we define

  22. FIX(F)::= the least fixed(F).

    We will think of the above as being the limit of the series of functions generated by F.

    We will need functionals that are analogous to the continuous functions of the calculus. As their argument tends towards a limit, so does their result:

  23. F:continuous iff F is monotonic and for all non-empty chains S, F(lub(S)) = lub(F(S)).

    Standard theory doesn't guarantee the existence of a least upper_bound, and it does not guarantee that it is inside the original set!

    The problem is that, by ??exercise 21??, Fun has subsets with no upper bounds in Fun and so no least upper bound inside Fun. We can show that upper bounds exist, but they may not be functions -- they may be many-to-many rather than many-to-one.

    However we can show (??Exercises 18 and 19??) that the set of all relations on State><State is complete: every set of relations has an upper bound -- the union of the relations.

    We need to show, for the kind of operation F that we use in denotational_semantics, that the Kleene series, Kleene(F), is a set that has an upper bound in Fun. For the functionals in denotational semantics it turns out that this set is always what is called a chain.

    A chain is a set of functions S such that for any pair f, g: S, either f==>g or g==>f.

  24. chain::={ S || for all f, g: S ( f==>g or g==>f )}.
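    In the dict model of partial functions, the chain condition is directly testable. The following check is my own sketch (names are mine): any two members of a chain must be comparable under ==>.

```python
# Check that a family of dict-modelled partial functions is a chain.

def less_defined(f, g):
    """f ==> g : the pairs of f are a subset of the pairs of g."""
    return set(f.items()) <= set(g.items())

def is_chain(S):
    """True iff every pair in S is comparable under ==>."""
    return all(less_defined(f, g) or less_defined(g, f)
               for f in S for g in S)

# Successive Kleene-style approximations form a chain:
approximations = [{}, {0: 1}, {0: 1, 1: 1}, {0: 1, 1: 1, 2: 2}]
assert is_chain(approximations)

# Two functions that disagree at a state are incomparable:
assert not is_chain([{0: 1}, {0: 2}])
```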

    Note. Perhaps you could think of these as being like sequences in the theory of convergence in the real numbers.

    Some authors use a more general kind of set: rather than chains, they work with directed sets. These are sets in which every finite subset has an upper bound inside the set.

    Lemma25

    Further we can show that chains in State<>->State always have an upper bound, and so a unique least upper bound. Indeed it is the relational upper bound anyway. So what we have done is discover that chains maintain functionality, or many-oneness, in the limit.

    Monotonic functionals therefore turn chains into chains.

    ??Several nice exercises.??

    What we want to show is that the expressions used in the definition of while will always give us functions that are well defined. In other words we'd like to show that any expression made of cond, o, Id, etc. will have a least fixed point that is a partial function.

    We can show them to be monotonic.

    Bad news: a monotone functional does not necessarily carry least upper bounds into least upper bounds.

    All we know is that the least upper bound of the image of a set, under a monotone functional, is ==> the image of the least upper bound of the set.

    Lemma30

    Continuous Functional

    Just like in the calculus, a continuous functional turns limits into limits.

    Key step - The Kleene Sequence of a continuous functional has a unique least upper bound...

    Theorem 37

    The general result has some powerful consequences... like the existence of LISP like data structures!

    . . . . . . . . . ( end of section Theory of Fixed Points) <<Contents | Index>>

    Existence of Direct Semantics for While

    Now we have the theory needed to show that the direct semantics defines a function for each valid program. This function, when applied to a statement, returns a partial function -- and so some (all? none?) of its values may not be defined.

    The heart of the problem is the definition

  25. D \[while b do S \] = FIX(F)
  26. where F:(State<>->State) -> (State<>->State)
  27. and F(g)= cond(B \[b \], g o D \[S \], Id).

    FIX(F) is defined whenever F is continuous. Now we can write
  28. F= F1 o F2
  29. where
  30. F1 = cond(B \[b \], (_), Id) = fun[ g ]( cond(B \[b \], g, Id) ),
  31. F2 = (_) o D \[S \] = fun[ g ]( g o D \[S \] ).

    Extra Exercise

    Verify the above by algebra if it is not obvious to you.

    We know that if F1 and F2 are continuous then so is F.

    So we need two lemmas:

  32. for all p:State->Boolean, g:State<>->State, cond(p,(_),g) is continuous.
  33. for all g:State<>->State, ((_) o g) is continuous.

    In fact we can go further and demonstrate that functions constructed by applying cond and composition are always continuous.
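    The whole construction can be exercised on a tiny concrete case. The sketch below is my own (the countdown program, state space, and all names are assumptions, not the notes'): it builds F(g) = cond(B[[b]], g o D[[S]], Id) for "while s > 0 do s := s - 1" over states 0..7, and computes FIX(F) as the limit of the Kleene sequence from {}.

```python
# Direct semantics of a toy "while b do S" via the Kleene sequence,
# with partial functions on State modelled as dicts.

STATES = range(0, 8)

def D_S(s):      # D[[S]]: the body "s := s - 1"
    return s - 1

def B_b(s):      # B[[b]]: the test "s > 0"
    return s > 0

def F(g):
    """F(g) = cond(B[[b]], g o D[[S]], Id) as a dict-based partial function."""
    out = {}
    for s in STATES:
        if not B_b(s):
            out[s] = s                # Id branch: loop exits immediately
        elif D_S(s) in g:             # (g o D[[S]])(s) defined?
            out[s] = g[D_S(s)]
    return out

def FIX(F):
    """Least fixed point as the limit of the Kleene sequence from {}."""
    g = {}
    while F(g) != g:
        g = F(g)
    return g

while_sem = FIX(F)
assert while_sem == {s: 0 for s in STATES}   # every state counts down to 0
```

Each Kleene iterate handles loops that terminate in one more step, exactly mirroring the approximations in the text.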

    Lemma 43


  34. |- (lemma43): for all p:State->Boolean, g:State<>->State, cond(p,(_),g) is continuous.

    Exercise 44


  35. |- (lemma44): for all p:State->Boolean, g:State<>->State, cond(p, g, (_)) is continuous.
  36. You can prove this in two ways: (1) just like lemma 43 above (good practice), or (2) by using lemma 43 in a clever way (quicker, but less educational).

    Lemma 45


  37. |- (lemma45): for all g:State<>->State, ((_) o g) is continuous.

    Exercise 46


  38. |- (lemma46): for all g:State<>->State, (g o (_) ) is continuous.

    Proposition 47

  39. D \[(_)\] in Stm->(State<>->State).
  40. Proof: by structural induction on the definition of D.

    Summary

    The denotational semantics is in terms of the fixed points of continuous functionals. These have unique least fixed points -- precisely the limits that their Kleene sequences converge towards.

    . . . . . . . . . ( end of section Existence of Direct Semantics for While) <<Contents | Index>>

    Footnotes

    Partial Functions

    To make the presentation simpler, if f:S<>->S we write
  41. f s is undefined iff s is not mapped into anything by f.
  42. f s is defined iff s is mapped into something by f.
  43. f s=s' iff f maps s into s'.

    Normally we say that

  44. f = g iff for all s, f(s) is defined iff g(s) is defined and f(s)=g(s).

    I will write

  45. f ==> g iff for all s, if f(s) is defined then so is g(s) and f(s)=g(s). (This fits with using ==> to indicate subsets and so subrelations -- my MATHS notation encourages this kind of overloading/polymorphism. In mathematical texts you will see a kind of square subset-or-equal symbol.)

    However we do not have any justification (yet) for introducing a pseudo-element undefined into the set of States. This justification is developed by using the theory of partially ordered sets and relations. [ ORDER in math_21_Order ]

    The three ways of operating on partial functions are composition, conditional and fixed point functions:
    Net

    1. (composition of partial functions):
    2. if g(s) is defined and f(g(s)) is defined then (f o g)s = f(g(s)).
    3. if g(s) is defined and f(g(s)) is undefined then (f o g) s is undefined.
    4. if g(s) is undefined then (f o g) s is undefined.

    5. (Selection of partial functions):
    6. if B(s) is not defined then cond(B, f, g)s is undefined.
    7. if B(s) and f(s) is undefined then cond(B, f, g)s is undefined,
    8. if B(s) then cond(B, f, g)s = f(s),
    9. if not B(s) and g(s) is undefined then cond(B, f, g)s is undefined,
    10. if not B(s) then cond(B, f, g)s = g(s).

    11. (fixed points of partial functions):
    12. FIX F::= the least { g || F(g)=g }

    (End of Net)
    This last has to be shown to be well-defined. To show that there is always a unique least fixed point for the functionals we use in semantics requires Dana Scott's theory of Domains (Complete Partially Ordered Sets or DOMAINS).
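    The case analyses for composition and selection above can be mirrored directly in the dict model. This is a sketch of my own (the dict representation and the example values are assumptions; B is taken to be total here, so case 6 of the selection rules does not arise): undefinedness propagates exactly as the cases list.

```python
# Composition and cond on dict-modelled partial functions.

def compose(f, g):
    """(f o g)(s) is defined iff g(s) is defined and f(g(s)) is defined."""
    return {s: f[g[s]] for s in g if g[s] in f}

def cond(B, f, g, states):
    """cond(B,f,g)(s) takes the f branch when B(s), else the g branch;
    the result is undefined where the chosen branch is undefined."""
    out = {}
    for s in states:
        branch = f if B(s) else g
        if s in branch:
            out[s] = branch[s]
    return out

f = {1: 10, 2: 20}
g = {0: 1, 3: 5}              # g(3)=5 but f(5) is undefined

assert compose(f, g) == {0: 10}     # only state 0 survives both stages

h = cond(lambda s: s % 2 == 0, f, g, range(4))
assert h == {2: 20, 3: 5}           # 0 and 1 hit undefined branches
```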

    Theory of Partially Ordered Sets

    For reflexivity, transitivity, antisymmetry, and other properties of orderings, plus bounds, least upper bounds, ... see
  46. POSET::= See http://www.csci.csusb.edu/dick/maths/math_21_Order.html

    For more on least elements, uniqueness of least elements, upper

  47. MAXMIN::= See http://www.csci.csusb.edu/dick/maths/math_21_Order.html#MAXMIN

    For chains, chain completeness, monotonic functions, continuous functions, ... see

  48. DOMAINS::= See http://www.csci.csusb.edu/dick/maths/math_24_Domains.html

    Proof of lemma43


    Let
    1. (43.1): p:State->Boolean,
    2. (43.2): g:State<>->State,
    3. F:= cond(p, (_), g).
    4. (def) |- (43.3): F is continuous
    5. iff F is monotone and for all non-empty chains Y( F(lub(Y)) = lub(F(Y)) ).
    6. |- (43.4): F is monotone.
    7. |- (43.5): for all non-empty chains Y( F(lub(Y)) = lub(F(Y)) ).

      Proof of 43.4


    8. (def) |- F is monotone
    9. iff for all g1,g2(if g1 ==>g2 then F(g1)==>F(g2) )
      Let
      1. g1,g2:State<>->State,
      2. (43.4.1): g1==>g2.
      3. (def) |- F(g1)==>F(g2) iff for all s, s' (if (F(g1))s=s' then (F(g2))s=s')


        Let

        1. s, s':State,
        2. (43.4.2): F(g1)s=s'.
        3. |-p(s) or not p(s).
          Let
          1. (43.4.4): p(s),
          2. |-s'
          3. (43.4.2)
          4. =(F(g1))s
          5. (def cond)
          6. = g1(s)
          7. (43.4.1)
          8. =g2(s)
          9. (43.4.4)
          10. =(F(g2))s.

          (Close Let )
          .Or
          Let
          1. not p(s)
          2. Similarly

          (Close Let )
        4. (F(g2))s=s'.

        (Close Let )

      (Close Let )

    (Close Let )

    . . . . . . . . . ( end of section Proof of 43.4) <<Contents | Index>>

    Proof of 43.5

  49. To prove: for all non-empty chains Y( F(lub(Y)) = lub(F(Y)) ).
    Let
    1. Y:non_empty & chain.
    2. (def) |- F(lub(Y)) = lub(F(Y)) iff F(lub(Y)) <== lub(F(Y)) and F(lub(Y)) ==> lub(F(Y)).
    3. (Lemma30) |- F(lub(Y)) <== lub(F(Y)).
    4. (Lemma25) |- (43.5.1): F(lub(Y)) ==> lub(F(Y)).
    5. Straightforward set theory with two cases: p(s) vs not p(s).

    (Close Let )
    Proof of lemma45

    Let
    1. g:State<>->State,
    2. prove ((_) o g) is continuous. In other words, the function that maps function f into f o g is continuous.

    (Close Let )

    Notes on familiar fixed points

    Polaris or the "pole star" is close to the visual fixed point in the night sky.

    You are already familiar with the idea of a fixed point from normal arithmetic and algebra. A fixed point of a function f is any value of x such that f(x) = x. For example if
    Let

    1. f is the function that takes an x and squares it


    2. |-f(0) = 0*0 =0
    3. |-f(1) = 1*1 = 1.

    4. Further for all other x you can show that either f(x)>x or f(x)<x.
      Let
      1. if x<0 then f(x)>0 > x.
      2. if x>0 and x<1 then f(x) = x * x < x*1 <x.
      3. if x>1 then f(x) = x*x > x*1 = x.

      (Close Let )


    5. |-fixed(f) = {0,1}.

    (Close Let )
  50. If you experiment a bit with functions of real numbers like this you'll conclude:
    1. Some functions have no fixed points.
    2. Some have two or more.

    You will also notice that in many cases, when you repeat the function to generate a sequence and the sequence converges, then it converges towards a fixed point.
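    A quick numeric experiment of my own (not from the notes) shows this convergence: iterating cos from any starting point homes in on the unique solution of x = cos(x), roughly 0.739085.

```python
# Repeatedly applying cos converges to its fixed point x = cos(x).
import math

x = 1.0
for _ in range(100):
    x = math.cos(x)

print(x)                            # ~ 0.739085...
assert abs(x - math.cos(x)) < 1e-9  # x is (numerically) a fixed point
```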

    . . . . . . . . . ( end of section Footnotes) <<Contents | Index>>

    Glossary

  51. homomorphism::=`A function between two algebraic or logical systems that preserves the structures of the systems`.

    References

  52. BOOLEAN_ALGEBRA::= See http://www.csci.csusb.edu/dick/maths/math_41_Two_Operators.html #BOOLEAN_ALGEBRA

  53. denotational_semantics::= See http://www.csci.csusb.edu/cs620/denotational.html.
  54. set_theory::= See http://www.csci.csusb.edu/dick/maths/math_??_??.html

    . . . . . . . . . ( end of section Notes on the Fixed Point Theory of Denotations) <<Contents | Index>>

