>> [CNS] >> [Comp Sci Dept] >> [R J Botting] >> [papers] >> rjb95a.Relations.vs.Programs
Wed Dec 20 09:12:28 PST 2006

This part of my site contains partly baked ideas (PBI). It has drafts for publications, essays, plus illustrations and visual aids for presentations and seminars.

Disclaimer. CSUSB and the CS Dept have no responsibility for the content of this page.

Copyright. Richard J. Botting ( Wed Dec 20 09:12:28 PST 2006 ). Permission is granted to quote and use this document as long as the source is acknowledged.

# Introduction

This paper shows how the venerable calculus of relations leads directly to a simple structured programming language as a special case. For a converse approach see [ rjb95a.semantics.html ] which presents the semantics of the While language in terms of relations.

The last section explains a crucial limitation on this theory. [ Limitations ] The resolution of these limitations is tackled in a later paper: [ rjb9x.Timed.Relations.html ]

# Basis

This work is based on no more than the simple logic and set theory covered in an undergraduate Discrete Mathematics class, see [ Logic in home ], plus the assumption that the same logic can be used to talk about things of all types - objects, sets, sets of sets, relations, sets of relations, and so on.

# Binary Relationships

A special kind of relationship is one with two components - equivalent to a set of 2-tuples. They turn up in all formal and natural settings:
1. x above y
2. x > y
3. x = y
4. x owns y
5. x is_the_mother_of y

The set of all relations between sets A and B is written @(A,B). A relation is typically defined by a formula like the following:

6. For x:A, y:B, x R y::@= W(x,y), where W(x,y) is a well-formed formula with values in {true, false}. There is also a formula for R itself:
7. |-rel[x:A,y:B](x R y) = R.

We can also express relations as sets of pairs or as boolean functions depending on the context:

8. {(x, y):A><B || x R y },
9. map[x:A,y:B](x R y),
10. (1st)R(2nd).
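As a minimal sketch (mine, not part of the MATHS notation), a relation defined by a formula W(x,y) can be modeled over finite carrier sets as a set of 2-tuples, following forms (6) and (8) above:

```python
# Sketch: a binary relation in @(A,B) as a Python set of 2-tuples,
# built from a defining formula W(x, y) as in (6) and (8).
def relation(A, B, W):
    """Return {(x, y) : A >< B || W(x, y)}."""
    return {(x, y) for x in A for y in B if W(x, y)}

A = {1, 2, 3}
gt = relation(A, A, lambda x, y: x > y)   # the relation x > y on A
```

The boolean-function view (9) is the lambda itself; the set-of-pairs view (8) is what `relation` returns.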

## Program & System Specifications

Relations behave like the operations and conditions in a programming language because each operation in a program (or an object!) defines a relationship between the current state and the possible future states. If there is always a single next state then the relation is deterministic. By composing deterministic steps carefully we get an algorithm. In turn an algorithm can be coded for input to a suitable translator, giving an executable program. Alternatively we can start from a nondeterministic relation that describes a set of acceptable behaviors and let the implementor select a particular subset as an implementation [compare with Kowalski, Hoare 87].

A relation in R:@(T1,T2) can be defined by

11. For x:T1, y:T2, x R y::@= W(x,y).

Or equivalently by

12. For R::@(T1,T2)= W((1st), (2nd)).

The inverse /R is the converse relation to R. For example if (_^2) is the 'square' function then /(_^2) is the square root relation. Similarly the arc sine function is written /sin. Further we want /parent_of=child_of. I define:

13. For R:@(T1,T2), /R::@(T2,T1)= (2nd) R (1st).

Given two binary relations R and S then
R; S = The relation between (1st) and (2nd) such that for some z, (1st) R z S (2nd),

R | S = The relation between (1st) and (2nd) such that either (1st) R (2nd) or (1st) S (2nd),

R & S = The relation between (1st) and (2nd) such that both (1st) R (2nd) and (1st)S (2nd).

For R: @(T1,T2), S:@(T2,T3),

14. R;S::@(T1,T3)= rel[x:T1,y:T3](for some z:T2 (x R z and z S y)),
15. S o R::@(T1,T3)= R;S.

16. R::@(T1><T2)= {(x,y):T1><T2 || x R y }.

In context, a set can also be used as a relation:

17. For A:@T, A::@(T,T)=rel[x:T, y:T](x in A and x=y).
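The operators just defined - converse (13), composition (14), and a set used as a relation (17) - can be sketched the same way, over finite sets of pairs (my illustration, with union and intersection falling out as plain set operations):

```python
def converse(R):
    """/R: swap the components of each pair, definition (13)."""
    return {(y, x) for (x, y) in R}

def compose(R, S):
    """R;S: x (R;S) y iff for some z, x R z and z S y, definition (14)."""
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

def as_relation(A):
    """A set A used as a relation: the identity restricted to A, definition (17)."""
    return {(x, x) for x in A}

square = {(x, x * x) for x in range(4)}   # the function _^2 on 0..3
root = converse(square)                   # /(_^2), the square-root relation
# R | S and R & S are exactly set union and intersection of the pair-sets.
```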

Complex situations can be defined quickly using these notations:

## Kinship Example 1

male, female::@being.
18. parent::@(being,being)=(1st)is a parent of(2nd).
19. ...
20. child_of::=/parent.
21. father::=male;parent.
22. mother::=female;parent.
23. son::=male;child_of.
24. daughter::=female;child_of.
25. grandfather::=father;parent.
26. ...
27. granddaughter::=daughter;child_of.
28. grandson::=son;child_of.
29. ...
30. sibling::= (/parent;parent) ~ (=). -- I am not my own sibling!
31. sister::=female;sibling.
32. ...
33. uncle::=brother;parent.
34. niece::=daughter;sibling.
35. first_cousin::=/parent;sibling;parent.
36. second_cousin::=/parent;/parent;sibling;parent;parent.
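To make the definitions concrete, here is a hypothetical miniature family (names invented) pushed through a few of the definitions above, using the set-of-pairs sketch of composition and converse:

```python
def converse(R): return {(y, x) for (x, y) in R}
def compose(R, S): return {(x, y) for (x, z) in R for (w, y) in S if z == w}
def restrict(A, R): return {(x, y) for (x, y) in R if x in A}  # A;R, A a set

male = {"Tom", "Bob"}
female = {"Ann", "Kim", "Sue"}
# x parent-of y
parent = {("Tom", "Bob"), ("Ann", "Bob"), ("Tom", "Kim"), ("Ann", "Kim"),
          ("Bob", "Sue")}

child_of = converse(parent)                           # (20)
father = restrict(male, parent)                       # (21) male;parent
grandfather = compose(father, parent)                 # (25) father;parent
shared_parent = compose(child_of, parent)             # /parent;parent
sibling = {p for p in shared_parent if p[0] != p[1]}  # (30) minus identity
```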

## Relations that change

Relationships change in real systems. Indeed most Z specifications contain schemas that describe the changes that can occur in a collection of sets and relations. For example, in a university a student can add and drop sections; so if L represents the current relation 'student (1st) is enrolled in section (2nd)' and L' the future form of the relation:
37. For t:Students, n:Section, add(t,n)::=add student t to section n::= not(t L n) and L'=L | {(t,n)}.
38. For t:Students, n:Section, drop(t,n)::=drop student t from section n::= (t L n) and L'= (pre(L)~{t});L.

## Kinship Example 2

Relationships can also be defined in an EBNF-like form - complete with recursion:
39. lineal_descendant::= child_of | lineal_descendant; child_of,
40. ancestor::=parent | parent; ancestor
41. cousin::=/parent;sibling;parent | /parent;cousin;parent,
42. first_cousin_once_removed::=/parent;first_cousin | first_cousin;parent.
43. [From Logic.Natural.KINSHIP]

## Transitive Closures

The definitions of ancestor and lineal_descendant illustrate the transitive closure or ancestral relation of parent and child. The reflexive transitive closure is important and defined as follows:
44. For all Types T, R:@(T,T),
45. do(R)::@(T,T)=The reflexive transitive closure of R ::= The relation between x and y such that x=y, or x R y or x R;R y, or....

Example:

46. |-ancestor=parent; do(parent) and lineal_descendant=child_of; do(child_of).

The defining property of do is that if something is invariant (or fixed, or closed) under R then it will not be changed by any number of Rs [ Footnote1 ] :

47. For R:@(X,X), F:@X, if F in inv(R) then F.do(R)==>F.

Example:

48. |-families.do(parent)==>families.

Where

49. inv(R)::= the invariants of R::=the sets X such that if x in X and x R y then y in X
50. inv(R)::={F:@X || F.R==>F }.

Example:

51. families::=inv(parent).

We can also define the power of a relation [cf Antoy & Gannon 94] via the following equations:

1. R^0 = Id,
2. R^(n+1)=R^n;R,
3. R^(-n)=(/R)^n.

The above definitions are equivalent to the well known forms:
52. |-do(R) = Id | R | R^2 | R^3 | ... = |{ R^n || n:0.. },
53. |-do(R) = &{ Q || Id==>Q and R ==>Q and Q;Q==>Q }.
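For finite relations the fixed-point form (52) can be computed directly; a sketch (mine), taking the identity part over a given finite state set:

```python
def compose(R, S):
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

def do(R, states):
    """Reflexive transitive closure: Id | R | R^2 | ..., per (52),
    with Id taken over the given finite set of states."""
    closure = {(s, s) for s in states} | set(R)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:           # fixed point reached
            return closure
        closure = bigger

parent = {(1, 2), (2, 3)}               # a toy 'parent' on numbered states
ancestor = compose(parent, do(parent, {1, 2, 3}))   # (46): parent; do(parent)
```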

Similar to the idea of an invariant set of states is the idea of a stable function:

54. stable(R)::=The stable functions under R::={f || R; f =f }.

and, so if f in stable(R) then f in stable(do(R)).

Also for all R and f, f is stable under no(R). These ideas are a simple and intuitive way to handle mathematical induction, but have a solid logical base in Principia Mathematica [Whitehead & Russell].

## Program Like Structures: if, while, etc

Relational expressions include structured programs because I define:
55. For R:@(S,S), no(R)::@(S,S)= ((1st)=(2nd) and for no y((1st) R y)).
56. For A:@S, R:@(S,S), while(A)(R)::=(do(A; R); no(A)),
57. For I,J,K:@(S,S),R:@(S,S), for(I, J, K)(R)::=(I; while(J) ( (R); K ) ),
58. For A:@S, R,T:@(S,S), (if A then R else T fi)::=(A; R | no A; T).
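Definitions (55) and (56) execute directly on the finite sketch; here is a hedged illustration (mine) of `no` and `while`, reusing the earlier `compose` and `do`:

```python
def compose(R, S):
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

def do(R, states):
    closure = {(s, s) for s in states} | set(R)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

def no(R, states):
    """no(R): identity on the states with no R-successor, definition (55)."""
    has_successor = {x for (x, _) in R}
    return {(s, s) for s in states if s not in has_successor}

def while_(A, R, states):
    """while(A)(R) = do(A;R); no(A), definition (56); A is a set used as a
    condition, so A;R restricts R to starting states in A."""
    body = {(x, y) for (x, y) in R if x in A}
    cond = {(s, s) for s in states if s in A}   # A as a relation (17)
    return compose(do(body, states), no(cond, states))

states = set(range(6))
dec = {(n, n - 1) for n in range(1, 6)}         # n' = n - 1
loop = while_({n for n in states if n > 1}, dec, states)
```

Starting from any n the loop relates it to 1 (or leaves 0 fixed, since the condition fails there).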

I can define an extension of Dijkstra's Guarded Commands(below) and Ada-like control structures as well:

59. For A,B,C:@(S,S), loop A; exit when B; C; end loop::= (do ( A; no(B); C); (A; B) ).

I can also show that this calculus is as powerful as - and indeed more powerful than - any logic programming language. But I'll leave that for a later time.

## Basic Operations

To make a usable system I need a way to specify simple changes to variables in a program. The key definition is that of a dynamic predicate. A predicate is a proposition that has free variables. A dynamic predicate has free variables with primes (') attached to them. Here is an example:
60. x'=x+y

After the statement, the value of x is equal to the sum of the old values of x and y. Also y is unchanged. It could be coded as x:=x+y in Ada/Pascal/... The rule is that "a prime means the new and no prime means the old" [from Ross Ashby 56; compare Part II of Lam & Shankar 90, Hehner 84a & 84b, Guttag & Horning 91, Steward 93]. Formally x'=x+y defines a relation on a structured set with variables x and y. So

61. x'=1

guarantees that x is 1 afterwards and also that no other variables change.

62. x'=1-x

means that the value of x will be complemented - its new value will be the result of subtracting its old value from one while all other variables stay the same.

63. x=1

has no primed variable. It means that x has the value one, and does not change - all variables are left untouched. It is a relation that holds between identical states when and only when x has value 1. It filters out all states when x is not 1. It allows computation under a condition. We call it a condition therefore. It can be shown that it is a condition in the sense of Arbib and Manes [87].

Here is a more general statement of the MATHS interpretation of a predicate that contains variables with primes. Consider a simple state space S which declares m variables x1, x2,...x[m] of types X1, X2, ...X[m]:

64. S::=Net{x1::X1, x2::X2, ..., x[m]::X[m]}.

Let P(y1',y2',y3',...y[n]',x1,x2,x3,...x[m]) be a predicate with n distinct primed variables(y1, y2,...y[n]) which may also appear without primes as xs. For example in x'=x+y, x1=x, x2=y, and y1=x.

The relation defined by P(y1', y2', y3', ...y[n]', x1, x2, x3, ...x[m]) is found by:

65. (1) Replacing primed variables (y1,...) by values in the new state(s2), and
66. (2) Replacing non_primed variables(x1,...) by values in the old state(s1), and
67. (3) Adding the requirement that the static (unprimed) variables do not change. So:
68. P(y1',y2',y3',...y[n]',x1,x2,x3,...x[m])::= rel [s1:S, s2:S]( P(s2.y1, s2.y2, ..., s2.y[n], s1.x1, s1.x2, ..., s1.x[m]) and for all i:1..m where x[i] not in {y1, y2, ..., y[n]}, s2.x[i]=s1.x[i]).

A more rigorous definition will be worked out in an appendix to this paper [ Dynamic Predicates ]. To specify a process that solves two simultaneous linear equations we write

69. a*x'+b*y'=c and d*x'+e*y'=f.

To specify a process that solves a quadratic equation we write

70. a*x'^2+b*x'+c=0.

On the other hand

71. a'*x^2+b'*x+c'=0,

specifies that x is left alone and a, b, and c are adjusted to fit the equation.
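The interpretation (65)-(68) can be sketched over a finite state space: primed names read the new state, unprimed names the old, and unprimed variables may not change. This is my illustration; the variable names and ranges are invented:

```python
from itertools import product

def dynamic(primed, pred, names, values):
    """Relation for a dynamic predicate: pred(new, old) over dict states;
    `primed` is the set of variables allowed to change (rule (67) adds the
    requirement that all other variables stay fixed)."""
    states = [dict(zip(names, v)) for v in product(values, repeat=len(names))]
    return {(tuple(s1[n] for n in names), tuple(s2[n] for n in names))
            for s1 in states for s2 in states
            if pred(s2, s1)
            and all(s1[n] == s2[n] for n in names if n not in primed)}

# x' = x + y over x, y in 0..2: y is not primed, so it must stay fixed
R = dynamic({"x"}, lambda new, old: new["x"] == old["x"] + old["y"],
            ("x", "y"), range(3))
```

States are shown as (x, y) tuples; the pair ((1,1),(2,1)) is in R but ((1,1),(2,2)) is not, because y changed.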

Dynamic predicates can be used with any type of variable. The Programming Research Group at Oxford University, England has shown how dynamics can be simply specified as changes to sets [Z]. Sets specify the possible data bases. This can be done in MATHS. Here is an example from the Elevator system (Chapter 2, section 4, MATHS in Software Engineering):

The state of the system includes these sets:
1. R::=Requested::@Button, S::=Serviced::@Button.
2. For Button b, request(b)::= (b in S and S'=S~{b} and R'=R | {b}).

## ADTs as Sets of Operations

To define an ADT you list declarations, conditions and actions (section 6.7 below). For example
72. COUNTER(x)::= Net{x:Nat, x'=x+1, x'=x-1, x'=0, x=0, x<>0 }.

Given a set of documentation that has only declarations and predicates we can define the operations described by the documentation as a set of relations:

73. operations::documentation -> |[U:Types]@(@(U,U)),
74. For W:@predicates, U:@declaration, Z=U|W, operations(Z)::@(@(U,U))= { Net{U, w} || w:W}.

So, for example, operations{ x::Nat , x'=x+1, x'=x-1, x'=0, x=0, x<>0} = { rel[u:X, v:X]( v.x=u.x+1 ), rel[u:X, v:X]( v.x=u.x-1 ), rel[u:X, v:X]( v.x=0 ), rel[u:X, v:X]( u.x=0 ), rel[u:X, v:X]( u.x<>0 ) }.

## Structures and Programs

Relations are a nondeterministic generalization of structured programming. I will now show that if-then-else, if-fi, while-do, do-od are special formulae in the calculus of relations.

## Definition of A Non-sequential Process

A Non-Sequential Process is an expression of the relational calculus using dynamic predicates as the elementary relations.

## Definition of a Structured Program

A structured program is a finite expression of the relational calculus using only composition, union, simple predicates, and dynamic predicates of the form
75. variable "'" "=" expression

as the elementary relations, and the expressions have no exceptions (are total).

## Example Program/Relation

76. NEWTON::= (y and e in Real & Positive; x'=1+(y/4); do ( |x^2-y|>=e; x'=(x^2+y)/(2*x) ); |x^2-y|<e).
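NEWTON is deterministic, so it transcribes directly into code; a sketch (mine), with the guard and the terminating condition taken straight from (76):

```python
def newton(y, e):
    """Relational program NEWTON (76) read as a deterministic procedure:
    requires y and e in Real & Positive."""
    assert y > 0 and e > 0
    x = 1 + y / 4                   # x' = 1 + (y/4)
    while abs(x * x - y) >= e:      # do( |x^2-y| >= e; ... )
        x = (x * x + y) / (2 * x)   # x' = (x^2+y)/(2*x)
    return x                        # terminating condition: |x^2-y| < e
```

`newton(2, 1e-9)` approximates the square root of 2.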

## A formal presentation

Given a set of relations we can use the expressions of the calculus of relations to express complex processes and/or simple programs. Given a set B of relations, SP(B) is the set of simple finite programs written using just the operations in B, and Programs(B) is the algebraic closure of B - the set of relations (R(i)) that satisfy a set of equations of the form R(i)=e(R) where e is an expression involving elements of B and the R(i).

The structured programs based on a set of operations are those relational expressions that are regular expressions made out of the operations in the documentation and their negations:

77. For Documentation D, SP(D)::= regular_expressions( operations(D) | { no(R) || R:operations(D) } ). Hence
78. For Documentation D, Programs(D)::=Programs(operations(D)).

The processes derived from a set of declarations and predicates by using the operators (;), (|), (do), (no), (&), (~), etc are called the non-sequential processes:

79. For Documentation D, non_sequential_processes(D)::=relational_expression(operations(D)).

By well known arguments it is clear that

80. |-For all D, some Documentation E, Programs(D) = SP(E).

## Relations and SP

Other developments in the theory of structured programming(SP) are special cases of the relational calculus.

Examples include Jan de Bakker's use of the calculus of relations to supply the semantics for Pascal-like program schemata [De Bakker 75] and Hoare's Relational Semantics[Cousot's Theorem 19 (page 856 of Leeuwen 90)]. Hehner's recent book on programming with predicates is very similar.

I will check that my model is close to the Hoare/Dijkstra/Gries model of SP: Consider sequence, selection and iteration, and conditions in turn:

### Sequences

Hoare has independently stated axioms for the semicolon in SP that are those of semiring multiplication [Hoare 88]. Dijkstra's discussion of 'R;S' is equivalent to my definition of R;S [Dijkstra 76]. Mili et al write:

### Quote

"[...], a specification that is structured as a relative product is a virtual order to the designer to structure his program - solution as a sequence of blocks, one for each term of the expression[...];" [page 118 of Mili Boudriga & Mili 89]

. . . . . . . . . ( end of section Quote) <<Contents | End>>

### Iterations

I define do(R) by using invariants. The SP 'while' rules [Hoare] and 'do-od' rules [Dijkstra 76, Gries, Dijkstra (Ed) 90] are defined the same way. However SP assumes deterministic elements and some determinism in the structures, but the calculus of relations does not. It can be shown that, if P is a condition, then:
( do (P; R); no P) =do P->R od.

So I define while(P)(R)::=do P->R od::= ( do (P; R); no P).

The "terminating condition" ( no P ) is necessary.

### Selections

A selection (R | S) gives a choice of two alternatives. It is not an if-then-else [Knuth 74]. Instead (R | S) indicates a process that can follow one of two paths, but does not pre-determine which. But, if R and S are deterministic and P is any condition, the following does pre-determine the component to obey:
(P; R | no P;S) =if P then R else S fi= if P -> R [] ~P -> S fi.

### Conditions

Conditions form a Boolean Algebra[cf Manes & Arbib 86]. Unlike the Dijkstra/Gries/Hoare theory there are no special guards. If any item in a sequence is going to be inconsistent then an alternative is taken: ((a; no; c ) | b)= b. Such temporary non-determinism can be coded by either back-tracking or look-ahead - this is discussed further in chapter 6 of my unpublished monograph.

So if a "path" or "track" fails then the computation never starts. In the Dijkstra model, computations either succeed or abort. Abort is defined to mean that anything could happen. So I define 'abort' to be the universal relation.

1. abort::=rel[x,y](true).

In Mills's model, when a state occurs that is outside the domain of definition of a function, the process terminates [Mills 72, Mills 75, Mills et al 89, Zelkowitz 90].

I treat failure as something that the coder will make the program avoid by following an alternative path. In terms of Scott's theory of Domains, Dijkstra assumes that failure is the 'bottom' (least defined) element of the domain and I assume that failure is the 'top' (overdefined) element of the domain. Related to this difference is Dijkstra's "Law of the Excluded Miracle"; Nelson has argued [Nelson 90, and earlier DEC Internal Reports] that this law makes the calculus more complex and less general. Two other, rival, approaches to the meaning of 'failure' have been recently reviewed [Sistla 90].

Several authors have come to contradictory conclusions about the relation between regular expressions and programming [Knuth 74, Harel 79, Sakarovitch 86, ...] . The calculus of regular expression on binary relations is more than a structured programming language - it has full "Angelic nondeterminism." It is a way to express designs that can be implemented with different space/time trade-offs [Examples: Martelli & Montanari 82, Bitner & Reingold 75, Kowalski 79, Survey: Cohen 79]. In Floyd's phrase relations are "non-deterministic, not in the sense of being random, but in the sense of having free will"[ Floyd 67b]. The implementor is left the task of coding "free will" efficiently without losing the correspondence between the problem and solution[ Jackson 75]. Intersections also lead to interesting - not to say intractable - coding problems.

. . . . . . . . . ( end of section Relations and SP) <<Contents | End>>

## Conclusion

Thus every program can be specified as a relation but not vice versa. Lamport's dialogue discusses the difference well:

81. "...why not write specifications in an ordinary programming language instead of devising specification languages?

"This can be done. However, programming languages are constrained by the requirement that programs must be compiled into reasonably efficient code. Because specifications do not have to compiled, specification languages can permit simpler specifications than can be written in programming languages." [p15 Lamport 86]

The calculus of relations using dynamic predicates contains all structured programming formulations so far published. It is not a programming language because the relations are a bridge from the environment to programs that fit it. Thus programs are a proper subset of relational expressions.

## Related work

### Simplification

It is easy to show that any structured program without do can be reduced to a dynamic predicate in disjunctive normal form. It follows that simple parts of programs have a normal form. This reduction can be used in verifying, simplifying, and reverse engineering source code [compare Hausler et al 90, Mills et al 89, Zelkowitz 90].

### Example Simplification1

1. |-(x'=x+y; y'=x-y; x'=x-y) = ((x'=x+y and y'=x+y-y); x'=x-y)
2. = ((x'=x+y and y'=x); x'=x-y)
3. = ( y'=x and x'=x+y-x)
4. = ( y'=x and x'=y ).
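The reduction can be machine-checked by running the three assignments in sequence (each step reads the state left by the previous one) and comparing the result with the swap; a small sketch:

```python
def seq_version(x, y):
    """(x'=x+y; y'=x-y; x'=x-y): each assignment sees the updated state."""
    x = x + y
    y = x - y   # = (x+y) - y = old x
    x = x - y   # = (x+y) - x = old y
    return x, y

# over a grid of states the sequence equals ( y'=x and x'=y ), i.e. a swap
ok = all(seq_version(x, y) == (y, x)
         for x in range(-3, 4) for y in range(-3, 4))
```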

### Example Simplification 2

5. Consider{
1. W::=( x=y; max'=x; min'=x | x<y; min'=x; max'=y | x>y; min'=y; max'=x).
2. |-W = ( x=y and max'=x and min'=y | x<y and max'=y and min'=x | y<x and max'=x and min'=y)
3. = ( min'=x=y=max' | min'=x<y=max' | min'=y<x=max'). |- pre(W)=P, P::=Net{ORDER. x,y,max,min::poset},
4. |-post(W)=P and {min,max}={x,y} and min<max.
}

### Normal Form

Finally, relational expressions follow the calculus of relations which can be used to simplify sequences into disjunctive normal form, like this:
6. R = A1 and A2 and A3 and... or B1 and B2 and... or...
7. |-(P(x, x') ; Q(x,x')) iff for some z (P(x,z) and Q(z,x'))=with{z}(P(x,z) & Q(z,x')).
8. |-(x'=1; y'=3; x'=x+1; y'=x+y ) iff ( x'=2; y'=3; y'=x+y) = ( x'=2; y'=5 ).
9. |-(x'=e(x,y); Q(x,y,y')) iff (x'=e(x,y) and Q(e(x,y),y,y')).
10. |-(P(x,y) ; Q(x,y,y',x')) iff (P(x,y) and Q(x,y,y',x')).

Typical simplifications are

11. |-R;(S|T) = (R;S) | (R;T).
12. |-(P(x, y); Q(x, y, y')) = (P(x, y) and Q(x, y, y')).
13. |-(Q(x, y); x'=e(x, y) ) = (x'=e(x, y) and Q(x , y)).
14. |-(x'=e(x, y); Q(x, y, y')) = (x'=e(x, y) and Q(e(x, y), y, y')).
15. |-(P(x, y,y') | Q(x, y,y')) = (P(x, y,y') or Q(x, y,y')).

Here are the key theorems needed to prove the existence of the normal form.

16. |-(w1(x',y); w2(y',y,x)) iff for some u ( x'=u and w1(u,y) and w2(y',y,u) ).
17. |-(x'=e(x,y); w2(y',y,x))
18. iff for some u ( x'=u and u=e(x,y) and w2(y',y, u)).
19. iff for some u(u=e(x,y)) and x'=e(x,y) and w2(y',y, e).

20. |-if for all x,y, some u (u=e(x,y)) then( (x'=e(x,y); w2(y',y,x)) iff x'=e(x,y) and w2(y',y, e)).

21. |-(x'=e(x,y); w2(x,y))
22. iff (x'=e(x,y) and w2(e(x,y),y))
23. iff (w2(e(x,y),y) and x'=e(x,y)).
24. |-(w1|w2) iff (w1 or w2)

These theorems merely show that an intuitive symbolic execution of a relational expression will lead to a predicate describing the effect of following one or more paths through a program. Such expressions help discover properties, simplifications, and errors [Mills, Gries, Coen-Porisini et al 91, Khanna 91].

### Command Mathematics

The title of this section was coined by Prof. M L V Pitteway in the early 1970s to describe the study of algebraic manipulation of algorithms. Relations model commands well and the relational operators underlie structured (and indeed unstructured) programs. Further, they can express mixtures of conditions and actions in one relation. A well known enigmatic program is often written:
25. Enigma::=
	( while(n>1)
		( if n in even then n'=n/2
		  else n'=3*n+1
		  fi )
	).
Using relations lets us derive a suggestive new form:
26. |-Enigma=while(n>1)( 2*n'=n | 2*n'=3*n+1 ).
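Enigma is the well-known Collatz iteration; as a deterministic program it runs directly, while form (26) merely rewrites each branch as a constraint on 2*n'. A sketch:

```python
def enigma(n):
    """while(n>1): n' = n/2 if n even, else n' = 3*n+1."""
    while n > 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return n
```

Whether this terminates for every positive n is, of course, the open Collatz question; it does for every n that has ever been tried.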

### Clean Room

Harlan Mills's internal IBM report [Mills 72] shows that the theory of sets leads directly to a mathematical theory of structured programming which is both complete and provable. He corrects Boehm and Jacopini's proof that there exists an equivalent program which is a tree with only the D'-structures (Ledgard and Marcotty). He shows that it is possible to prove the correctness of these programs by a tree walk. Finally he shows how top-down design can be reduced to choosing:
27. The kind of node ( iteration, selection, sequence)
28. One part of the new structure. For example he shows that once the first step in a sequence is chosen the specification of rest of the sequence is determined. Similarly he shows that given a function f and a set C then there are unique functions a and b such that f=(C+>a|b). The last paragraph states: "At this point, we leave it to the reader to observe that every construction and theorem goes through for indeterminate programs and their relations, just as for programs and their functions."

By inspection of Mills's report, this is true. However IBM's derived "Cleanroom" methodology uses functions.

### Program Derivation

Relations can make the derivation of programs more logical: (1) state the properties of the changed variables as a set of equations and then (2) solve for the primed variables. Similarly, a non-deterministic relationship can often be refined to a form that is compatible with a declarative programming system such as Prolog [Compare Lamport 87].

. . . . . . . . . ( end of section Related work) <<Contents | End>>

. . . . . . . . . ( end of section Binary Relationships) <<Contents | End>>

# Limitations

Having pursued for 21 years the idea that a formula in the Calculus of Relations is a nondeterministic structured program, it is sad to report the publication of a simple observation that basically ruins the theory.

Parnas's work on the Darlington Reactor Shutdown Program in Canada uses a similar calculus to the one presented here [Parnas et al 95]. He points out that when a relation is nondeterministic there is no way to tell from the relation whether the program failed to produce a result or entered an infinite loop. An extreme example is that:

1. while(skip)(skip) = (do(skip);no(skip))=skip;fail=fail

Since this is an important difference, it is clear that the calculus of relations by itself ignores it and so is not an adequate model for programming.

The kind of case Parnas et al describe would be written

2. Loopy::= ( x = 1 | do(x <> 1)();x=1 ).

Now consider the intuitive meaning of this program. It is clear that when x=1 it terminates, but when x<>1 it loops forever. However

3. |-Loopy = ( x=1 )

but this is a condition and has no loop in it.
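The collapse can be checked extensionally over a tiny state space (a sketch, with x ranging over {0,1,2}): the loop branch contributes nothing new, so Loopy's relation equals the bare condition:

```python
def compose(R, S):
    return {(x, y) for (x, z) in R for (w, y) in S if z == w}

states = {0, 1, 2}
cond_eq1 = {(s, s) for s in states if s == 1}   # condition x = 1
# do(x<>1)(): the body is a condition, a subset of the identity,
# so its reflexive transitive closure is just the identity on states
do_branch = {(s, s) for s in states}
loopy = cond_eq1 | compose(do_branch, cond_eq1)
# loopy == cond_eq1: the possibility of looping forever on x<>1 has vanished
```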

Parnas et al's paper also discusses the problem that two programs can have different performance while their associated relations are equal. Parnas suggests that a program be modelled not as one relation but as two: one gives what can happen, and the other indicates when the program will terminate safely under all conditions. For example

4. Safe = (x=1); if this holds before Loopy it guarantees that the loop will not occur.

Hehner has already proposed an alternative way of using the predicate calculus and relations as a model of specifications and programs that avoids the problem of infinite loops. It has the advantage that it handles termination, performance, and the analysis of algorithms by a uniform technique. In Hehner's model (like some of Lamport's) a statement is a relation that always includes an assignment of the form

5. time' = time + delta, for some delta>0. Thus the Loopy program above becomes:
6. LoopyTime::= ( x = 1 and time'=time+1 | do(x <> 1 and time'=time+1)(); x=1 and time'=time+1 ). There is now a way to state that the program does not loop more than n times:
7. LoopyTime; time < n

However this does not get one out of the bind. If P=Q and T is another relation modelling a time constraint then

8. P;T = Q;T
9. T;P = T;Q
10. T&P = T&Q

So it is still not possible to distinguish an infinite loop from failure by using the calculus of relations alone.

There is clearly more work to be done on how to map programs into relations so as to capture useful properties. We can conclude that, except in the most theoretical terms, relations are not programs! As Lamport has observed, relations are specifications of behavioral properties.

. . . . . . . . . ( end of section Limitations) <<Contents | End>>

# Appendices

## Structured State Spaces

Computer science needs a model of structured objects, data, and the like. The simplest structure is a set of numbered n-tuples - the Cartesian Product:
1. @(A,B,C,...)=@(A><B><C><...)={(a,b,c,...)|| a:A, b:B, c:C, ...}

Modern formal notations from Ada to Z provide means for creating a new type of object by (1) listing components and (2) specifying constraints and operations on these components. I believe that any set of declarations can be interpreted as defining a set of labeled tuples. If D is a collection of declarations, definitions, axioms, theorems and comments then Net{D} is called a set of documentation. A dollar sign converts it into a set of tuples: \${D} is the set of objects that satisfy D. If Net{D} is called N then \$N means the same thing as \${D}. For example: (Times, <) in \$LOSET.

### Labeled Tuples

Here are some examples of tuples, sets of tuples and sets of sets of tuples. You can think of these as records, tables and data bases in a programming system respectively. You can think of each as a small spreadsheet that automatically keeps itself up to date whenever any cell changes. You can also think of them as objects. Bear in mind that programs are finite implementations however. Formal definition is in chapter 5 of my monograph.

Tuples look like this: (a=>xa, b=>xb, ...). They are in a type defined by one of the following ways:

### Short forms

2. S::=\${ a::A, b::B, c::C, ... }, or S::=Net{ a::A, b::B, c::C, ... }, or N::=Net{ a::A, b::B, c::C, ... }, S::=N.

### Long definition

3. S::=N.
4. N::=
Net{
a::A,
b::B,
c::C,
... }=::N

### Example Structured Set

4. phones::=\${name::Strings, number::Strings}.
5. |-(name=>"John Smith", number=>"714-123-5678") in phones.

Each label in a structure can be used to extract components from tuples with the dot operator of Ada, C, Pascal,..., and Z:

6. (name=>"John Smith", number=>"714-123-5678").name="John Smith"
7. (name=>"John Smith", number=>"714-123-5678").number="714-123-5678"

The dot fits with mathematics since '.' projects a vector into a scalar. MATHS formalizes projections as functions mapping structured objects to their components. It has both f(x) and x.f notations [Knuth 74]. So if x in X and X=\${a::A,b::B, c::C,... } then

8. a in X->A.
a(x)=x.a= the a of x.
9. b in X->B.
b(x)=x.b= the b of x.
10. c in X->C.
c(x)=x.c= the c of x.

Because projections are maps, they are relations and so have converses. So for example

11. y./a= { x || x.a=y }. MATHS also defines projection operations onto lists of elements:
12. x.(a,b) = (x.a, x.b) in A><B,
13. x.(a,b,c) = (x.a, x.b, x.c) in A><B><C.

One way to imagine a MATHS structure is to think of it as a record or object in a high level language or a labeled tuple in an RDBMS. The formal definition implies a collection of maps. Thus phones=Net{name,number::Strings} implies name in phones->String and number in phones->String. Formally, phones is the universal (free, initial, or universally repelling) object in the category of types with names and numbers. In other words: every type with two functions called name and number mapping objects into the Strings is modeled by phones, because for any such type there is a function that maps phones into it without losing the information about names and numbers. Therefore phones encodes all sets of objects with names and numbers (details in Chapter 5 of my monograph).

### Subclasses of a structured type

Given the idea of a tuple and a structured type it is easy to allow a notation for collections of tuples that satisfy a given constraint - for example the set of phones with prefix "011-" might be written as Net{name, number::String, number in "011-" String }. In my notation (MATHS): Net{a::A, b::B, ... z::Z, W(a,b,...z),...}= {(a=>u,b=>v,...):\${a::A, b::B, ... z::Z} || for some u:A, v:B,... (W(u, v,...)... ) }

.Dangerous_Bend

14. The following are not the same:
15. {x:A || W(x) } Set of objects in A satisfying W [Chapter 4].
16. {x in A, W(x) } Set with Boolean elements, either {true}, {false}, or {true,false}.
Net{x::A, W(x) } Documentation with variable x in set A and assertion W(x) [Chapter 5].
\${x::A, W(x) } Set of objects (x=>a) where a is in A and W(a) is true.

### Sets of Objects

Given a set or class X in @T then we can talk about its subsets (@X). Finite sets of similarly structured elements are familiar to computer people as files, tables, and lists, and are written finite@X. For example, suppose phone_data::=finite@PHONE where PHONE::=Net{name::Names, number::Numbers, place::{home,work} },
17. Names::@Strings,
18. Numbers::@Strings.

So if

19. P={
20. (name=>"John Doe", number=>"714-123-5678", place=>home),
21. (name=>"John Doe", number=>"3678", place=>work),
22. (name=>"Joan Roe", number=>"714-123-5878", place=>home),
23. (name=>"Joan Roe", number=>"3878", place=>work)
24. } then
25. P in phone_data.

Further name(P) = P.name = the names in P = {"John Doe","Joan Roe"}, and ("John Doe",work)./(name,place).P.number="3678".

In general a property of the components defines a set of objects:

26. P(name="John Doe")={(name=>"John Doe", number=>"714-123-5678", place=>home),
27. (name=>"John Doe", number=>"3678", place=>work)}.

But a single object can often be identified by specifying enough properties:

28. |-the P(name="Joan Roe" and number="714-123-5878")=(name=>"Joan Roe", number=>"714-123-5878", place=>home), and
29. |-the P(name="John Doe" and place=home) =(name=>"John Doe", number=>"714-123-5678", place=>home).

Codd has shown that any set of data can be expressed as simple structured tables like the above. Thus we can model all data as tuples. A collection of sets of tuples is a relational data base and forms a category. The objects in an object oriented design also form a similar category [Freedman 83]. Structures form a conceptual model of the area being studied that is readily implemented using modern technologies such as data bases, logic programming, and objects [Chapter 5 of my monograph].

### Functional Dependencies And Keys.

Notice that P above satisfies the requirement that we can look up a number: the items name and place together determine a unique number. We document this fact by writing:
30. P in finite@PHONE (name, place)->(number).
31. the P(name="John Doe" and place=work)=(name=>"John Doe", number=>"3678", place=>work)

We can define the phone books as those finite sets in @PHONE that relate a name and place with one number:

32. Phone_books::=finite@PHONE(name, place)->(number).

Identifiers or keys like (name, place) appear in all computer applications. Further Codd's work suggests that all data can and should be thought about and manipulated as a collection of such sets[DATA].
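The functional dependency behind Phone_books can be checked mechanically. A minimal Python sketch (satisfies_fd is an illustrative name, not part of MATHS): a set of records is in finite@PHONE(name, place)->(number) when each (name, place) pair determines at most one number.

```python
# Check that the key columns functionally determine the dependent columns.
def satisfies_fd(records, key, dependent):
    seen = {}
    for r in records:
        k = tuple(r[c] for c in key)
        v = tuple(r[c] for c in dependent)
        if seen.setdefault(k, v) != v:   # same key, different value?
            return False
    return True

P = [
    {"name": "John Doe", "number": "714-123-5678", "place": "home"},
    {"name": "John Doe", "number": "3678",         "place": "work"},
]
```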

### Example - A University

I presented a model of a university as a poster session to a joint SIAM/MAA Southern California Chapter meeting: [ UNIVERSITY in rjb9Xb.discrete ]

Standard technology and techniques can be used to implement sets and mappings - using (1) files, (2) a DBMS [DATA], or (3) data structures plus OOP [OOP]. An alternative is to treat each entity and relationship as a functor or predicate in a logic programming system [Keller 87, PROLOG]. However, classes cannot be derived from the static analysis above. It should now be extended by a study of the dynamics of the entities and relationships so that object-oriented classes with both attributes and methods can be specified.

. . . . . . . . . ( end of section Structured State Spaces) <<Contents | End>>

## Dynamic Predicates

A Dynamic Predicate is a predicate that has one or more free variables that are primed:
1. LIN2::=(a*x'+b*y' = c and d*x'+e*y'=f).

Many properties of predicates don't depend on the universe of discourse. Predicates do not explicitly determine the type of their free variables. They are polymorphic [FP], generic [ADA], or "typically ambiguous" [Russell and Whitehead 63]. This is also true of dynamic predicates - they are often manipulated independently of the types of their component variables. However, the formal definition of dynamic predicates maps them into relations on a given set called the state space. The space is determined by the context [ Footnote2 ]

There is a special mechanism (See "with" below) for declaring "local" variables.

If D is the name of a simple piece of documentation then D is a record type, entity type, or class. \$(D) is a typical object. @D names the collection of all sets of objects of type D. Adding a predicate P to D specifies a subset of the records (D and P) ∈ @D. These predicates can specify validity conditions for a data base, an invariant property of an object, or a condition in a process or program. A dynamic predicate specifies a change in a record or state as a binary relation in @(D,D). So dynamic predicates lead to a superset of structured programs. They can also define, specify, and reason about complex non-deterministic processes:

### Definition of Dynamic Predicates

Consider a well formed formula W which contains, as free variables, names of components of S with and without primes, where S has simple documentation U.

The primed variables are called dynamic variables and those with no primes are called static variables. Variables that are in U but not in W are also static variables. An occurrence of a dynamic variable without a prime represents the prior value of the variable. An occurrence of a dynamic variable with a prime is a later value. A primed and unprimed variable have the same type. From W construct a predicate P(x, y, z) with three non-overlapping lists of free variables x[1..l], y[1..m], z[1..n] to replace the variables in W. The x[i] variables replace the primed occurrences of the dynamic variables in W - the new values. Each y[i] replaces an unprimed occurrence of the dynamic variable corresponding to x[i] - the old values. Thus each occurrence of a dynamic variable becomes either an occurrence of an x or an occurrence of a y. The formal meaning of W is given by

\${U, W }::= {(u,v): U><U || u.z=v.z & P(v.x, u.x, v.z) }

### Example 1

Let U = Net{a,b,e::Real} and w = (|a'^2-b| < e); then x=(a), y=(), z=(b,e),
2. P(x, y, z) = |x1^2 - z1| < z2, and
3. Sqrt::=\${U,w}.
4. |-Sqrt = \${U,w} = {(u,v): U><U || u.(b, e)=v.(b, e) and |v.a^2-v.b| < v.e}
5. = {(u,v): U><U || u.b=v.b and u.e=v.e and |v.a^2-v.b| < v.e}
6. |-For all a:Real ( (a=>a, b=>2, e=>0.1) Sqrt (a=>1.414, b=>2, e=>0.1) )
7. but
8. |-For 0 a:Real ( (a=>a, b=>2, e=>0.1) Sqrt (a=>1.414, b=>3, e=>0.1) ).
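The membership claims of Example 1 can be checked with a small Python sketch, taking states as dictionaries (sqrt_rel is an illustrative name, not part of MATHS):

```python
# u Sqrt v iff u.b == v.b, u.e == v.e, and |v.a^2 - v.b| < v.e.
def sqrt_rel(u, v):
    return (u["b"] == v["b"] and u["e"] == v["e"]
            and abs(v["a"] ** 2 - v["b"]) < v["e"])

u = {"a": 0.0,   "b": 2, "e": 0.1}
v = {"a": 1.414, "b": 2, "e": 0.1}   # 1.414^2 is within 0.1 of 2
```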

### Theorem 1

When U is simple then only variables with a prime can change.
9. |-\${U,W} ={(u,v): U><U || u.z=v.z } & {(u,v): U><U || P(v.x, u.x, v.z)}
10. ==> {(u,v): U><U || u.z=v.z }.

This means that each dynamic predicate must be implemented as an atomic action, in the sense that it keeps the static variables locked until it is complete.

### Example 2

If U = Net{x,y,z::T} and w = P(a,b), a predicate with a and b as free variables, then:
11. \${U, P(x,y)}={((x=>a, y=>b, z=>c), (x=>a , y=>b , z=>c ) ) || P(a,b) },
12. \${U, P(x',y)}={((x=>a, y=>b, z=>c), (x=>a', y=>b, z=>c ) ) || P(a',b)},
13. \${U, P(x',y')}={((x=>a, y=>b, z=>c), (x=>a', y=>b', z=>c ) ) || P(a',b')},
14. \${U, P(x', x )}={((x=>a, y=>b, z=>c), (x=>a', y=>b, z=>c ) ) || P(a',a)},
15. \${U, P(z', x )}={((x=>a, y=>b, z=>c), (x=>a, y=>b, z=>c') ) || P(c',a)}.
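The pattern in line 12 can be checked with a Python sketch, taking states as dictionaries; rel and the sample predicate are illustrative assumptions, not part of MATHS:

```python
# \${U, P(x', y)}: y and z are unchanged, and P holds of the
# new value of x and the old value of y.
def rel(P, u, v):
    return v["y"] == u["y"] and v["z"] == u["z"] and P(v["x"], u["y"])

# With P(a, b) = (a == b + 1) this is the relation "x' = y + 1":
succ = lambda a, b: a == b + 1
u = {"x": 0, "y": 4, "z": 9}
v = {"x": 5, "y": 4, "z": 9}
```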

### State Spaces with Constraints

If U is not simple then the constraint complicates the above definition, since some of the apparently static variables can depend on the dynamic ones. Suppose that U has a basis (a set of variables that can be changed freely); then z becomes the list of basic variables that are not dynamic in {(u,v): U><U || u.z=v.z & P(v.x, u.x, v.z) }. The condition u.z=v.z is thus less stringent: the non-basic components are effectively dynamic, taking whatever values make v lie in U.

### Example 3

If U = Net{x,y,z::T. x=f(y,z). } and w = P(a,b) then (y,z) is a basis for U and:
16. the state is in \$[x:T, y:T, z:T]( x=f(y,z) )
17. \${U, P(x,y)}={((x=>a, y=>b, z=>c), (x=>a' , y=>b , z=>c ) ) || P(a,b) and a'=f(b,c)=a },
18. \${U, P(x,y')}={((x=>a, y=>b, z=>c), (x=>a', y=>b', z=>c ) ) || a=f(b,c) and P(a,b') and a'=f(b',c) }.

### Block Structure

The form with{D}P introduces temporary variables into P where D is some documentation and P any simple predicate.
19. For U, X: documentation, R: @(\${U and X}, \${U and X}), with X (R) ::= {(u,v): U><U || for some x:X, w,z: Net{U and X} ( u=w.U and w R z and z.U=v and z.X=x ) }.

We assume there are unique projection operators (_.U) and (_.X) from \${U and X} to U and X respectively. There is an unexpected property of with - there is no need to initialize its variables. The definition guarantees that a consistent "initial" value will be selected. For example,
|- with{x::Natural }( y=3; y'=y^2+(y+1)^2; x^2=y; y'=x ) = (y=3 and y'=5).
This simplifies the expression of many specifications - see section 5.4 below for other unexpected benefits.

### Example 4

20. SUM(f,a,b)::=with{i::Nat} ( i'=a; do(i<=b; s'=s+f(i); i'=i+1); i>b ).
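As a sketch, the effect of SUM can be mimicked by a sequential Python loop. Note that SUM does not initialize s, so the net effect is s' = s + f(a) + ... + f(b); the extra s parameter below stands in for the prior state:

```python
def SUM(f, a, b, s):
    # i'=a; do(i<=b; s'=s+f(i); i'=i+1); leaving i>b on exit.
    i = a
    while i <= b:
        s = s + f(i)
        i = i + 1
    return s
```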

### Abuses of Notation

Predicates allow us to avoid thinking in terms of the state of the system [Dijkstra 89, Denning ed. 89b]. We often treat dynamic and static predicates as if they were relations. This assumes that a universe of discourse (state space) is fixed elsewhere in the documentation. Thus given state space U and predicate w,
21. w::@(U,U) = \${U, w}.

For U, predicates w1, w2:
\${U, w1; w2} ::@(U, U) = \${U,w1}; \${U,w2},
\${U, do(w1)} ::@(U, U) = do(\${U,w1}),
\${U, no(w1)} ::@(U, U) = no(\${U,w1}),
\${U, w1 | w2} ::@(U, U) = \${U,w1} | \${U,w2},
\${U, w1 & w2} ::@(U, U) = \${U,w1} & \${U,w2}.
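Treating relations as sets of (before, after) pairs, the ';', '|', and '&' operators above can be sketched in Python; do and no, which involve iteration and negation, are omitted here:

```python
# R; S -- relational composition: u (R;S) w iff for some v, u R v and v S w.
def compose(R, S):
    return {(u, w) for (u, v1) in R for (v2, w) in S if v1 == v2}

# '|' and '&' on relations are just set union and intersection.
R = {(1, 2), (2, 3)}
S = {(2, 10), (3, 30)}
```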

### Theorem

22. |-For U, predicates w1, w2,
23. \${U, w1 | w2} = \${U} and (w1 or w2),
24. \${U, w1 & w2} = \${U} and w1 and w2.

By leaving U undefined we expand the predicate calculus [Woodcock 89b, compare Gries 91] to include dynamics.

. . . . . . . . . ( end of section Formal Treatment) <<Contents | End>>

## Serial Data

Programs can access data from a source, one datum at a time, and need to put items into a stream of data one at a time. Similar commands send streams of messages to other programs or data into files. My strings and lists can model serial streams of data: string and list expressions in dynamic predicates mimic reading and writing.

Suppose we are interested in objects in a set A. #A is the set of sequences of objects in A. MATHS defines the following operators (See Chapter 4(section 8), chapter 5 of my monograph, and samples at [ http://csci.csusb.edu/dick/samples/ ] ):

5. s ! t::#A = the concatenation of strings s and t.
e ? s::#A = the result of putting object e in front of string s.
s ? e::#A = the result of putting object e behind string s.
s:!x ::= ( s' = s!x ) -- put a copy of x's value at the end of s,
x!:s ::= ( x!s = s' ) -- put a copy of x's value in front of s [compare Lam & Shankar 90].
e?:s ::= ( e ? s' = s ) -- e was the first item of s and is removed.
x'?:s ::= ( x' ? s' = s ) -- the first item is popped off s into x.
s:?x' ::= ( s' ? x' = s ) -- get the last x from s.
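The put and get operators can be mimicked in Python with a deque. This is a sketch of their effect only, not of the MATHS semantics (dynamic predicates are relations between states, not assignments):

```python
from collections import deque

s = deque(["a", "b"])
s.append("c")      # s:!"c" -- put "c" at the end of s
x = s.popleft()    # x'?:s  -- pop the first item of s into x
```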

A small technicality arises [ Footnote3 ]

6. For inter-process communication there must be an explicit "end of sequence" signal after the end of the data. MATHS uses the word 'end' as a generic "There ain't gonna be no more" message. By convention 'end' is never used as data in a string.

Here is an example Data Directed solution to the problem of replacing double asterisks by a caret symbol[Hoare 78, compare Chandy & Misra 88].

7. star::="*", caret::="^". The sets of possible inputs and outputs are I and O respectively:
8. I::= #( star star | char) end,
9. O::=#( caret | char ) end, where each star star in I corresponds to a caret in O.

The variables i and o are used for the standard input and output of a program so:

10. program::=(
11. while( i.1st<>end )(
12. if "**"!:i then o:!"^"
13. else x'?:i; o:!x
14. fi
15. );
16. o:!end; end?:i; ).

The above looks deceptively like pseudocode, but may demand unusual coding techniques.
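One conventional sequential rendering of the program in Python, modeling 'end' by exhaustion of the input list; this sketches the specified behavior rather than the data-directed coding style the text has in mind:

```python
def program(i):
    # Copy input to output, replacing each "**" by "^".
    o = []
    while i:                        # while i.1st <> end
        if i[:2] == ["*", "*"]:     # two stars at the front of i
            o.append("^")           # o:!"^"
            i = i[2:]
        else:
            o.append(i[0])          # x'?:i; o:!x
            i = i[1:]
    return o
```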

. . . . . . . . . ( end of section Appendices) <<Contents | End>>

# FOOTNOTES

## Footnote1

Originally in Whitehead and Russell's Principia, the same form appears in Floyd's loop invariant technique and is reminiscent of Kleene closure in grammar theory. This definition relies on using higher order quantifiers [cf Gurevich 87].

## Footnote2

In previous publications, I have permitted the co-domain to merely contain the domain. This has some unexpected interactions when the state space is not simple.

## Footnote3

When one process generates the input of another process running in parallel, it is not clear whether an empty input is merely as-yet-ungenerated data or whether the generator has terminated.

. . . . . . . . . ( end of section FOOTNOTES) <<Contents | End>>