This document describes the ways in which mathematics is more than traditional logic: [ logic_0_Intro.html ]
If you want to get to the specific techniques that make mathematics be more than logic, see [ Algebra ] and the sections that follow it.
Source: Howard W Eves, "In Mathematical Circles", Boston: Prindle, Weber and Schmidt, 1969, p52. Quoted in Am Math Monthly, V102n5 (May 95) p437.
"The realisation began as an academic exercise without reference to personalities; Smiley manoeuvred the characters like pieces in a puzzle, twisting them this way and that to fit the complex framework of established facts -- and then, in a moment, the pattern had suddenly re-formed with such assurance that it was a game no more"
Source: Le Carre 62, "Call for the Dead", page 149 of "The Incongruous Spy"
At the end of the 19th century there was a program to put mathematics on a formal logical basis, culminating in Whitehead and Russell's monumental three-volume work "Principia Mathematica" [Whitehead and Russel 64]. I think that this attempt gives experimental evidence that mathematics is more complex than most philosophers can handle. The work of Goedel, Church, and Turing together shows that any form of mechanical or formal reasoning will be either uninteresting, inconsistent, or incomplete. And to this day even the most formal mathematical papers are, to a large extent, written in unformalized language.
In what follows we look at some particular extra-logical features of modern mathematics.
In text books the syntax looks like this:
e1 R1 e2
   R2 e3
   R3 e4
   ...
   R[n-1] e[n]
At the end of such a chain of relationships we can conclude that the combination (relational product) of all the relations holds between the first and last expression:
e1 (R1; R2; R3; ...; R[n-1]) e[n],
or in shorthand:
e1 ;R e[n]
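The inference above can be sketched concretely. A minimal sketch in Python, modelling finite relations as sets of pairs (the function names are mine, not MATHS notation):

```python
# A minimal sketch (finite relations modelled as Python sets of pairs;
# the names are mine, not MATHS notation) of the relational-product
# inference: from e1 R1 e2, e2 R2 e3, ... conclude e1 (R1; R2; ...) e[n].
from functools import reduce

def compose(r, s):
    """Relational product r;s = {(a,c) | some b: (a,b) in r and (b,c) in s}."""
    return {(a, c) for (a, b1) in r for (b2, c) in s if b1 == b2}

def chain_product(relations):
    """Combine a whole chain: R1; R2; ...; R[n-1]."""
    return reduce(compose, relations)

# The chain 1 < 2, 2 <= 2, 2 < 4 justifies 1 (<;<=;<) 4:
R1 = {(1, 2)}
R2 = {(2, 2)}
R3 = {(2, 4)}
print((1, 4) in chain_product([R1, R2, R3]))  # True
```

When all the relations are the same transitive relation, the product collapses back into that relation, which is why chains of "=" or "iff" steps are so convenient.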
This inference is valid because the relational product of the individual steps can be shown to hold between the first and last expressions. Often all the relations and the conclusion are the same transitive relation. The identity relation ("=") and logical equivalence ("iff") are both very common and useful. The MATHS syntax is in [ algebraic_step in logic_2_Proofs ]
There is no general algorithm for deciding the equality of two formulae in a general logic with equality [Post...]. There are, however, procedures for converting some sets of equalities into effective and efficient algorithms for testing equality - e.g.
Source: Knuth & Bendix, "Simple Word Problems in Universal Algebras", pp263-297 in Computational Problems in Abstract Algebra (ed. J. Leech), 1970.
Source: Nelson G & Oppen 80, G Nelson & D C Oppen, Fast Decision procedures based on congruence closure, J ACM V27(Apr 80)pp356-364.
Source: Smith A 90, Alf Smith, The Knuth-Bendix Completion Algorithm and its Specification in Z, pp195-220 in Nicholls 90.
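The idea behind these procedures can be illustrated with a toy sketch (my own, not the Knuth-Bendix procedure itself): orient the equations into rewrite rules, reduce both sides to a normal form, and compare. Here the algebra is a free monoid, terms are nested tuples, and 'e' is the identity:

```python
# A toy sketch (my own, not the Knuth-Bendix procedure itself) of deciding
# equality by rewriting.  Terms are nested tuples ('*', left, right);
# 'e' is the monoid identity.  Oriented rules: e*x -> x, x*e -> x,
# (x*y)*z -> x*(y*z) -- a terminating, confluent system.

def rewrite(t):
    """One bottom-up pass of the rules."""
    if not isinstance(t, tuple):
        return t
    op, a, b = t
    a, b = rewrite(a), rewrite(b)
    if a == 'e':
        return b                                   # e*x -> x
    if b == 'e':
        return a                                   # x*e -> x
    if isinstance(a, tuple):                       # (x*y)*z -> x*(y*z)
        return (op, a[1], rewrite((op, a[2], b)))
    return (op, a, b)

def normal_form(t):
    """Rewrite until no rule applies (terminates for these rules)."""
    while True:
        t2 = rewrite(t)
        if t2 == t:
            return t
        t = t2

def equal(s, t):
    """Decide s = t in the free monoid by comparing normal forms."""
    return normal_form(s) == normal_form(t)

print(equal(('*', ('*', 'a', 'b'), 'c'), ('*', 'a', ('*', 'b', 'c'))))  # True
print(equal(('*', 'e', 'a'), 'a'))                                      # True
print(equal('a', 'b'))                                                  # False
```

The hard part of Knuth-Bendix, not shown here, is completing an arbitrary set of equations into such a terminating and confluent rule set.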
A classical derivation typically starts
Let x be ....
and finishes
and so x=...
Source: Grabiner 95
Often a formula like f(x)=a, where f(x) is an expression containing x as a free variable and a is a given value, is transformed into one like x=e, where e is an expression, and the mathematician concludes "e is the solution to f(x)=a". But this is not always enough: if there is no x such that f(x)=a, then assuming that such an x exists lets you deduce any conclusion, including x=e. So having derived x=e we should check that f(e)=a.
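A hedged Python sketch of why the check step matters (the example equation is mine): squaring both sides "solves" sqrt(x) = -1, but substituting the candidate back in shows that no solution exists.

```python
# Sketch: "solving" sqrt(x) = -1 by squaring both sides yields the
# candidate x = 1, but the check f(e) = a fails -- no solution exists.
import math

def f(x):
    return math.sqrt(x)

a = -1.0
candidate = (-1.0) ** 2      # derived by squaring both sides: x = 1.0
print(f(candidate) == a)     # False: f(1) = 1, not -1
```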
In general, suppose that P(x) is assumed and x=e derived. Then you have shown that if an x exists that satisfies P then it must be e:
(1): if for some x( P(x)) then x=e,
In formal terms we have the following theorem - if whenever x satisfies P it is e, then either e satisfies P or no x can satisfy P:
Or - either there is no way to satisfy P or we have found it.
Hence it is valid to infer
(2): e=the x(P(x)) or for no x(P(x))
from
(3): for all x(if P(x) then x=e).
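One way to render the inference from (3) to (2) in standard notation (assuming Russell's iota for the definite description "the x such that P(x)"):

```latex
% (3) premise: any x satisfying P must be e
\forall x\,(P(x) \rightarrow x = e)
% (2) conclusion: either e is "the x" satisfying P, or nothing satisfies P
\;\vdash\; e = \iota x\,P(x) \;\lor\; \neg\exists x\,P(x)
```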
Typically P(e) is assumed to follow; however, this step is not always valid. Only by assuming or showing for some x(P(x)) can we safely deduce P(e) from for all x(if P(x) then x=e). Thus 'solving an equation' does not guarantee that the solution exists. Of course, if a value is deduced and it does satisfy the original properties, then a solution exists and the derivation is complete.
Sometimes (ideally) the derivation of the solution is reversible - each step is an iff-step. Then solving the equation works because the following formula is true and indicates a valid argument:
The argument is a classical algebraic or calculational proof that looks like this:
Let{ x:T.
f(x) = a
iff e1=e2
iff e3=e4
...
iff x=e[n]
} |- f(e[n]) = a.
In MATHS we would include labels or explanations to guarantee the above format.
Let{ (e0): x:T.
(e1): f(x) = a
(e2): iff e1=e2
(e3): iff e3=e4
...
(em): iff x=e[n]
} |- f(e[n])=a.
for i:Nat0, some x:X(i+x=0) -- leads to the signed integers (X:=Int)
for i,j:Int, i<>0, some x:X(i*x=j) -- leads to the rational numbers (X:=Rat)
for a:Rat, 0<=a, some x:X(x*x=a) -- leads to the irrationals, and hence the real numbers (X:=Real)
some x:X(x*x=-1) -- leads to the imaginary and complex numbers (X:=Complex)
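The first two extensions can be sketched in Python (the finite range checks below are illustrations of unsolvability, not proofs of it):

```python
# A finite sketch (range checks are illustrations, not proofs) of how
# each unsolvable equation motivates a larger number system.
from fractions import Fraction

i = 3
# i + x = 0 has no solution among the naturals 0..99 ...
assert not any(i + x == 0 for x in range(100))
x = -i                        # ... but Int supplies x = -3
assert i + x == 0

# 2*x = 1 has no solution among the integers -100..99 ...
assert not any(2 * x == 1 for x in range(-100, 100))
x = Fraction(1, 2)            # ... but Rat supplies x = 1/2
assert 2 * x == 1
print("both equations solved in the extended systems")
```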
The ideal problem+solution would look like this:
Proof that x=3 is a solution.
Let
Proof that x=4 is a solution. Left as an exercise for the reader.
Also see [ ../samples/fish.html ] for a simple example.
Example of solving an equation given data/parameters.
(ex2): solve{ x:Real, a*x^2+b*x+c=0} given{a,b,c:Real}
(ex3): = map[a,b,c:Real]{ (-b-sqrt(b^2-4*a*c))/(2*a), (-b+sqrt(b^2-4*a*c))/(2*a)} .
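The solution map in (ex3) can be sketched in Python (assuming a <> 0 and returning the empty set when the discriminant is negative, so that the "no solution" case is explicit):

```python
# A sketch of the solution map (ex3): the set of real roots of
# a*x^2 + b*x + c = 0, assuming a != 0.  Returns the empty set when
# the discriminant is negative, i.e. when no real solution exists.
import math

def solve_quadratic(a, b, c):
    d = b * b - 4 * a * c
    if d < 0:
        return set()
    r = math.sqrt(d)
    return {(-b - r) / (2 * a), (-b + r) / (2 * a)}

# Check step: every returned value really satisfies the original equation.
for x in solve_quadratic(1, -5, 6):
    assert abs(1 * x * x - 5 * x + 6) < 1e-9

print(sorted(solve_quadratic(1, -5, 6)))  # [2.0, 3.0]
print(solve_quadratic(1, 0, 1))           # set(): no real solution
```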
. . . . . . . . . ( end of section Proposed Syntax for Solving Problems) <<Contents | End>>
Mathematical texts and papers over the years demonstrate that there is no harm in less-than-rigorous syntax. Bourbaki calls these "abuses of notation". For example, an unbound variable in a proposition is automatically assumed to be universally quantified over some local default type.
There are two common and contradictory idioms used to talk about algebras in text books:
In general the name of a set of tuples is used as a name for the type of the first element - this seems to be a harmless piece of overloading, which we formalize as follows:-
If the first declaration in a set of documentation is for a variable of type Sets:
(x=>X, y=>b, z=>c ...) : xyz -- declares variables(X,b,c,...)
(X, b, c, ...) :xyz -- declares same variables
X : xyz(y=>b, z=>c, ...) -- declares an X (Given b,c,...)
X : xyz(b, c, ...),
X : xyz, -- declares (x=>X, y=>y, z=>z,...) : $XYZ
Further we allow sets in both these forms:
or
or
or
A piece of documentation can define a name to be a tuple of declared terms; then all the terms can be substituted implicitly. Suppose we have
u::X, v::Y, w::Z
a::=...
P...
N::=(x,y,z), N=(u,v,w)
is equivalent to
u::X, v::Y, w::Z
a::=...
P...
x=u, y=v, z=w,
N::=(x,y,z)
Original               Relation   Transformed
128 * 4 = 2^7 * 2^2      <->      7 + 2
Solve:                            7 + 2 = 9
512 = 2^9                <->      9
The general form of logarithm to base b is:
Numbers Greater than Zero <-> Numbers
1 <-> 0
b <-> 1
Multiplication <-> Addition
Division <-> Subtraction
Exponentiation <-> Multiplication
Roots <-> Division
Less_than <-> Less_than
Unequal <-> Unequal
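The correspondence above can be sketched in Python with b = 2 (function names are mine): the map to_log carries multiplication onto addition, so 128*4 can be computed by adding logarithms and transforming back.

```python
# Sketch of the correspondence table with b = 2: to_log carries the
# multiplicative structure onto the additive one.
import math

def to_log(x, b=2):
    return math.log(x, b)     # Numbers greater than zero -> Numbers

def from_log(y, b=2):
    return b ** y

total = to_log(128) + to_log(4)            # 7 + 2
print(round(from_log(total)))              # 512
# The structure-preservation law itself: log(x*y) = log(x) + log(y).
assert math.isclose(to_log(128 * 4), to_log(128) + to_log(4))
```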
More advanced maps are the Laplace and Fourier transforms. A computer professional also understands intuitively that decimal numbers and binary numbers are transforms of each other that preserve the arithmetic of the underlying numbers.
In other words a complicated domain can sometimes be transformed into a simpler one without destroying the essence or structure of the original domain. For this process to be worthwhile there must be an easy way to transfer expressions and values across the map. Typically a set of tables is prepared (not an easy task) that can be searched quickly and easily. However, for this to be valid the map must preserve the structure of the domain.
One of the ways of distinguishing a logician from a mathematician is that (by and large) a logician is most interested in a single set of axioms whereas the mathematician is concerned with the relationships between two or more axiom systems. One of the commonest techniques and ideas of twentieth-century mathematics has been the study of sets of maps that transmit or preserve the structure of the objects that they connect.
The very word 'map' indicates an object that reflects, in a grossly reduced and simplified form, some of the properties of the real world. The properties are those that are useful to the owner of the map - lengths are to scale, directions are similar, and colors on paper reflect something about what is in the equivalent places in the real world - blue often means a place of wetness, for example. Thus a map guides its user in the real world without being the real world.
In general a structure-preserving mapping is called a 'morphism'. Morphisms come in a wide variety of flavors; some standard flavors will be found in [ math_11_STANDARD.html ]
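A small Python sketch of what "structure preserving" means operationally (the function names and sample points are mine): before trusting a transform, check f(x op y) = f(x) op' f(y) on sample values.

```python
# A small sketch (names are mine) of "structure preserving": check that
# f carries the source operation op onto the target operation op2,
# i.e. f(op(x, y)) == op2(f(x), f(y)), over some sample points.
import math
from itertools import product

def is_morphism(f, op, op2, samples, tol=1e-9):
    """True if f(op(x,y)) is close to op2(f(x),f(y)) for all sample pairs."""
    return all(math.isclose(f(op(x, y)), op2(f(x), f(y)), abs_tol=tol)
               for x, y in product(samples, repeat=2))

samples = [1.0, 2.0, 10.0, 0.5]
# log is a morphism from (positives, *) to (reals, +) ...
print(is_morphism(math.log, lambda x, y: x * y, lambda u, v: u + v, samples))  # True
# ... but it does not carry + onto +.
print(is_morphism(math.log, lambda x, y: x + y, lambda u, v: u + v, samples))  # False
```

A finite sample check like this can only refute a morphism, never prove one; proving preservation in general needs the algebraic argument.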
The Grand Unified Theory of all systems and morphisms is called Category Theory. Some have called this "general abstract nonsense" but it turns out that category theory provides a common language and approach to any logistic system. Category theory is the ultimate map - it guides its user in a world of abstract structures.
. . . . . . . . . ( end of section Mathematics) <<Contents | End>>
Proofs follow a natural deduction style that starts with assumptions ("Let"), continues to a consequence ("Close Let"), and then discards the assumptions to deduce a conclusion. See [ Block Structure in logic_25_Proofs ] for more on the structure and rules.
The notation also allows you to create a new network of variables and constraints. A "Net" has a number of variables (including none) and a number of properties (including none) that connect variables. You can give them a name and then reuse them. The schema, formal system, or an elementary piece of documentation starts with "Net" and finishes "End of Net". For more, see [ notn_13_Docn_Syntax.html ] for these ways of defining and reusing pieces of logic and algebra in your documents. A quick example: a circle = Net{radius:Positive Real, center:Point}.
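A rough Python analogue of the circle Net (my own modelling, not MATHS semantics): a Net's variables become fields and its constraints become checks run when an instance is built.

```python
# A rough analogue (my own modelling) of circle = Net{radius:Positive Real,
# center:Point}: variables become fields, constraints become checks.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

@dataclass
class Circle:
    radius: float
    center: Point

    def __post_init__(self):
        # the "Positive Real" constraint on radius
        if self.radius <= 0:
            raise ValueError("radius must be positive")

c = Circle(radius=2.0, center=Point(0.0, 0.0))
print(c.radius)  # 2.0
```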
For a complete listing of pages in this part of my site by topic see [ home.html ]
For a more rigorous description of the standard notations see