.Open Help with Proofs and Demonstrations
. Introduction
My MATHS notation
is designed to make the validation and proof of software a more
practical process. The idea is that the proofs should be
machine-checkable. However, the real work of
constructing proofs is informal and happens before you start
to write it out properly. This page is about the informal planning
stage and offers some resources that I have found useful over the years, plus
my own thoughts. Carrying out your plan involves more rigorous
techniques and
.See ./logic_25_Proofs.html
proposes a way to use the ASCII code to encode rigorous algebraic and natural
deduction proof methods.
For Internet pointers to automated proof systems, see
.See Automation
below.
There has been a lot of research that makes use of
formal logic of one sort or another. To generate a quick list
of pointers to citations see
.Find LOGIC
For a discussion of some differences between the way people think
and formal logic, see
.See Three Kinds of Inference
below.
For more on how formal methods and creativity complement each other
see
.See Further Reading
below.
. But Why Prove anything Anyway?
I figure there are several main reasons:
.List
To convince unbelievers. Many proofs are needed to convince the readers
of a paper or thesis that the writer knows what they are talking about.
In other cases it is the writer that doesn't quite believe a result
and constructs a proof to convince themselves.
To provide an explanation for why a result is true. Proofs are constructed
using assumptions and rules. These form an explanation of the result.
You can then think about and challenge the assumptions. For example, to
make a planned system safer, construct proofs of its safety, and then
challenge the assumptions you used.
To see where the result fits into the other facts. Proofs provide a
structure to a collection of statements. The power of Euclid is the
structure that he provided to geometry reasoning from postulates to
theorems and constructions. The structure in turn provides a mnemonic
framework to help you learn and recall the results.
Working out a proof helps you memorize the result. This is one
reason for all those exercises in Math, Physics, and Computer
Science classes.
Training and exercises. When proofs are assigned as class or home work
they prepare the student for future presentations and publications.
.Close.List
. My proofs are too long -- am I stupid or something?
No. The proofs that are published are the result of sometimes years of
refinement, pruning, editing, and review. ...
See
.See http://abstrusegoose.com/230
(Abstruse Goose) and
.See http://spikedmath.com/429.html
(SpikedMath)
.Open Hints
. Analyze before you prove
Follow $polya's structure:
.List
understand the problem;
.Set
What are you given?
Are you trying to prove something or to find something?
What is the goal?
How to relate the givens to the goal?
.Close.Set
Define a notation.
Make a plan;
Carry out the plan;
Review the result.
Write up result...
.Close.List
.See ../samples/polya.html
. Proving both_and
Typically you need to prove the parts first and then combine them
using
.See ./logic_25_Proofs.html#Valid Deductive Steps
. Combine what you know already
Often you can combine two things that you know already into
a more useful result. These steps are based on what are called
.Key Syllogisms
by ancient logicians. Here is the classic example:
.Box
All men are mortal.
Socrates is a man.
:. Socrates is mortal
.Close.Box
In formal logic:
.Box
For all x, if man(x) then mortal(x).
man(socrates)
(-2)|-mortal(socrates)
.Close.Box
or in set theory
.Box
Man ==> Mortal, socrates isa Man,
(-2)|-socrates isa Mortal.
.Close.Box
By medieval times we had a gigantic list of these
.See ../samples/syllogisms.html
and algebra added more:
.Box
x < y,
y < z,
:. x < z.
.Close.Box
But what to combine...?
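The classic syllogism above can also be machine-checked over a small finite universe. A minimal Python sketch (the universe and the predicate tables are invented purely for illustration; this is not part of the MATHS notation):

```python
# Model "All men are mortal; Socrates is a man" over a tiny invented
# universe and check that the conclusion follows.
universe = ["socrates", "plato", "fido"]
man = {"socrates": True, "plato": True, "fido": False}
mortal = {"socrates": True, "plato": True, "fido": True}

# Premise 1: For all x, if man(x) then mortal(x).
premise1 = all((not man[x]) or mortal[x] for x in universe)
# Premise 2: man(socrates).
premise2 = man["socrates"]

# The Barbara syllogism: the premises force the conclusion.
assert premise1 and premise2
assert mortal["socrates"]
```

Transitive steps like `x < y, y < z :. x < z` can be checked the same way by enumerating a finite sample of cases.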
. Proving universals
If you want to show something that looks like this
For all whosis, if something then whatever
introduce a new variable `whosis`, assume that `something`
is true, and try to show `whatever` as a result.
.Let whosis, something
whatever
.Close.Let
See
.See ./logic_25_Proofs.html#Block Structure
for a way to control the assumptions and results.
. Proving equality and if_and_only_if
Often results of the form `this = that` or `this if and only if that` use
.See ./logic_25_Proofs.html#Algebra and Calculations
to figure from `this` to `that`.
. Proving either_or
Sometimes when working with statements with special cases (P or Q) it is
easiest to use
.See ./logic_25_Proofs.html#Proof by Cases
to try out each possibility in turn.
. Proving something exists
Sometimes you can find a suitable example. More often you need to
construct an example out of existing objects using the construction
rules that are accepted in your discipline. For example, in Euclid,
you use compasses and a straight edge. These days, we can use
algebra -- or even a calculator. Indeed, in some disciplines,
it is enough to show that an infinite series of approximations to
a desired object gets as close as we would like to what we want.
So proving something
exists boils down to
.See Finding something
Sometimes we are forced to go a different route -- assume that an
object does not exist and derive a contradiction. Notice
that a whole branch of mathematics, `intuitionism`, rejects this
approach and demands that if you want to claim that something
exists, then you must give a method for constructing an example.
. Finding something
One way of finding something is to show how to `construct` it
from other given things. The terminology goes back to Euclid.
In MATHS we might express a Euclidean construction
like this
For all x:T1, where P1, find a y such that P2.
.Let
x :T1=$given,
(given)|- P1.
y :T2=$goal,
(goal):P2.
z:=....,
z2:=....,
...
y:=...
... |-(desired property): P2.
. Proof of desired property.
...
.Close.Let
.Open Example 1 Euclid Book I Proposition 1
The following is extracted, out of context, from a hypothetical
translation of Euclid into MATHS. As a result some of the definitions
and axioms are missing.
Problem -- To draw an equilateral triangle on a given finite straight line
.Let
AB:FiniteLine=$given.
ABC:Triangle=$goal, equilateral(ABC).
A:=start(AB).
B:=end(AB).
BCD:=circle(A, AB).
ACE:=circle(B, AB).
(ACE, BCD)|- some intersection(ACE, BCD),
C:intersection(ACE, BCD).
ABC:=triangle(A, B, C).
()|-(p1): equilateral(ABC).
. Proof of p1
(BCD,circle)|- (p11): length(AC)=length(AB).
(ACE,circle)|- (p12): length(BC)=length(AB).
(p11,p12)|- (p13): length(AC)=length(BC),
(ABC, p11, p12, p13, equilateral)|- equilateral(ABC).
.Close.Let
For the `real` Euclid see
.See http://aleph0.clarku.edu/~djoyce/java/elements/bookI/propI1.html
.Close Example 1
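The construction above can be checked numerically. Here is a floating-point sanity check with assumed coordinates (place AB on the x-axis); it illustrates the construction but is not a Euclidean proof:

```python
import math

# Assumed coordinates: the given segment AB lies on the x-axis.
A = (0.0, 0.0)
B = (1.0, 0.0)
# Both circles circle(A, AB) and circle(B, AB) have radius |AB| = 1;
# their upper intersection point C is computed directly.
C = (0.5, math.sqrt(3) / 2)

def length(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

# Check that triangle ABC is equilateral (to floating-point accuracy).
assert abs(length(A, C) - length(A, B)) < 1e-12   # p11
assert abs(length(B, C) - length(A, B)) < 1e-12   # p12
assert abs(length(A, C) - length(B, C)) < 1e-12   # p13
```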
Algebra (using symbols for unknowns, and expressions to symbolize
the construction of other things, ...) is a powerful tool for finding
something that fits a property.
Algebra picks up Pappus's idea of starting with what you want -- expressed
as an unknown -- and then expressing the constraints on the unknown...
in the hope of transforming these into a known object.
.Open Example 2
For example consider:
The product of four consecutive numbers is one less than a square.
I did a lot of exploration which, when pruned of errors and dead ends,
became:
.Net
Let `n` be a number, then we are interested in this formula:
n(n+1)(n+2)(n+3) + 1
The obvious approach is to multiply this out and then recognise
that the formula is the square of some `obvious` expression. But
when I tried this, the `obvious` escaped me.
If it was multiplied out, it would be a polynomial in `n` with
leading and last coefficients equal to one:
n^4 + ... + 1.
So, if this is a square, it must have form:
(n^2 + a*n + 1)^2,
for some `a`.
When n=1:
(1^2 + a*1 + 1)^2 = 1*2*3*4 + 1,
(a+2)^2 = 25,
(a+2) = 5,
a = 3.
So we have found an `a` and so can now go to the rigorous proof
of the simpler result
n(n+1)(n+2)(n+3) + 1 = (n^2+3*n+1)^2.
...
.Close.Net
Notice that the algebra in the middle of the above is not a rigorous proof.
It just discovers a formula that can then be proved
using algebraic techniques -- ones where we can check every step
by the rules.
The proof goes like this... but you can fill in the blank:
.Let n:Int.
n(n+1)(n+2)(n+3) + 1 = ... = ... = ...
(n^2+3*n+1)^2 = ... = ... = ...
(above)|- n(n+1)(n+2)(n+3) + 1 = (n^2+3*n+1)^2,
(-1, m:=n^2+3*n+1)|- for some m, n(n+1)(n+2)(n+3) + 1 = m^2.
.Close.Let
.Close Example 2
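Before writing out the rigorous algebra, the conjectured identity can be machine-tested over a range of cases. A short Python check (evidence, not proof):

```python
# Test the discovered identity n(n+1)(n+2)(n+3) + 1 = (n^2+3n+1)^2
# over a range of integers -- plausible evidence for the result,
# which the algebraic calculation then proves for all n.
for n in range(-1000, 1001):
    lhs = n * (n + 1) * (n + 2) * (n + 3) + 1
    rhs = (n * n + 3 * n + 1) ** 2
    assert lhs == rhs, n
```

Passing every case is only evidence; the rule-by-rule algebra above is what turns the discovery into a proof.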
.Open Example 3 Construction as a series of definitions
The following is taken out of the context of the algebra of real numbers.
It assumes links to the laws of algebra (which don't
exist in this context).
Notice that I discovered a `given` property ($dnz) in the process of
drafting the construction.
.Let
a,b,c:Real=$given.
(given)|- (anz): a<>0.
x:Real=$goal, a x^2 + b x + c=0.
d:=0.5/a.
e:=b d.
\Delta:=e^2- 2 d c.
(given) |- (dnz): \Delta>=0.
(dnz)|-
s:=sqrt(\Delta).
x:=(-e + s).
(anz, d, e, \Delta, dnz, s, x) |- (qe): a x^2 + b x + c=0.
. Proof of qe
a x^2 + b x + c
(x) |-
=a (-e + s)^2+ b (-e + s) +c
(square)|-
=a (e^2 - 2 e s + s^2)+ b (-e + s)+c
(s)|-
=a (e^2 - 2 e s + \Delta)+ b (-e + s)+c
(\Delta)|-
=a (2 e^2 - 2 e s - 2 d c)+b (-e + s)+c
(distributive)|-
=a 2 e^2 - 2 a e s - 2 a d c + b (-e + s)+c
(distributive)|-
=2 a e^2 - 2 a e s - 2 a d c -b e + b s+c
(e, d)|-
=2 a b^2 d^2 - b s - c -b b d + b s + c
(commutative, zero)|-
=2 a b^2 d^2 - b b d
(power)|-
=2 a b^2 d d - b^2 d
(factor)|-
=(2 a d -1) b^2 d
(d)|-
=0 b^2 d
(simplify)|-
=0.
.Close.Let
.Close
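The construction above can be transcribed into a numeric check. A Python sketch (the sample coefficients are chosen so that the `dnz` condition \Delta>=0 holds):

```python
import math

def construct_root(a, b, c):
    """Follow the construction: given a, b, c with a<>0 and
    \\Delta >= 0, construct an x with a x^2 + b x + c = 0."""
    assert a != 0                    # (anz)
    d = 0.5 / a
    e = b * d
    delta = e * e - 2 * d * c
    assert delta >= 0                # (dnz)
    s = math.sqrt(delta)
    return -e + s                    # x

# Check (qe) on sample coefficients with real roots.
for a, b, c in [(1.0, -3.0, 2.0), (2.0, 0.0, -8.0), (1.0, 2.0, 1.0)]:
    x = construct_root(a, b, c)
    assert abs(a * x * x + b * x + c) < 1e-9   # (qe)
```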
. Proving something doesn't exist
About the only way to tackle this is to assume that the
thing in question exists and show that if it did then impossible
things happen. Sometimes the end point is a simple contradiction.
This is called `reductio ad absurdum` in Latin.
.See ./logic_25_Proofs.html#Reducto ad Absurdum
In other cases you show that if an `A` exists then some other object
`B` can be
constructed from A,
and this object `B` we already know doesn't exist:
.Let Some A exists.
Construct B from A.
(-1)|- Some B exists.
(previous)|- B does not exist.
.Close.Let
(-1)|- A can not exist.
This kind of proof is popular in computer theory texts -- it is called
.Key reduction.
. Stuck?
A weird but valid trick when you are stuck is to start by denying the truth
that you want to establish and show that the result is absurd:
.See ./logic_25_Proofs.html#Reducto ad Absurdum
is the Latin term for this...
If you are really stuck for ideas, then try relaxing and playing with
various
possibilities: `what if`, `Suppose`, `perhaps`,...
.See ./logic_25_Proofs.html#Trying Ideas Out
This is a traditional technique for mathematicians with a problem to solve.
Often when you've worked on a problem long enough the key idea will
come to you "out of the blue" while getting onto a bus or shopping.
Sometimes it helps to loosen up and do something silly. An example
is $DST
.See The Dilbert Salary Theorem
which was recently proved after years of diligent research.
Another strategy is to browse the ready made
.See ./logic_25_Proofs.html#Proof Templates
out there waiting to be used. Every field has its own proof methods.
It is, therefore, well worth studying these in preparation for publishing
or presenting a proof.
. Practice
It pays to practice the techniques on lots of unimportant problems
just to train the brain. Hence the many exercises in good math classes.
I have found some lovely light-hearted examples on
the Internet in the Futility Closet, like this example
.See http://www.futilitycloset.com/2012/06/27/readership/
of a simple syllogism and a classic informal but MATHS-style proof.
. Read
Finally: There are many excellent books that can improve your proving and
problem solving skills:
.See Further Reading
below.
.Close Hints
.Open Further Reading
. Polya -- How to Solve it
(polya):
.Source George Polya, `How to Solve It: A New Aspect of Mathematical Method`, Princeton University Press, Paperback 0-691-02356-5 (USA).
This is worth reading just for the entry under the heading "Mathematics
Professor, Traditional". The book lays out a program for solving problems and
proving results. It has been known to increase grades a whole point.
It balances rigor and creativity in an effective procedure. Polya
suggests using four phases to solve problems:
.List
Understand the problem
Work out a plan for a solution
Carry out your plan
Review the result
.Close.List
In the first step you get a very clear idea of what you have been
given and what your goal is and invent some notation.
In the planning phase, use plausible, analogical,
and even lateral ($debono) thinking. Carrying out
the plan is made of rigorous and formal steps. But he thinks we
should aim for
steps that are both intuitive and valid. In the last phase
you look for ways to reuse the results and plans as well as improvements
and variations to the solution.
For more details on Polya's framework for problem solving see
.See ../samples/polya.html
which is my reduction/expansion/editing of his scheme.
In the MATHS language the `Net` construct can be used for setting up
a collection of givens and a goal and then working out a plan:
.As_is .Net
.As_is name::type=given, meaning.
.As_is ...
.As_is (given)|- property.
.As_is ...
.As_is (goal): property.
.As_is ...
.As_is .Close.Net
. Conceptual Block Busting
.Source James L. Adams, `Conceptual Blockbusting: A Guide to Better Ideas`,??
How and why we get stuck, and how to get unstuck. Full of practical ways of
thinking around problems.
. Lateral Thinking
(debono):
.Source Edward De Bono
Just about everything Dr. De Bono has written is easy to understand. It
de-mystifies how your brain works and how to use it. He shows
how ideas can be created, modified, and put to use.
See
.See http://www.debono.ws
as a starting point.
. Three Kinds of Inference
Sadly the following book
.See [SebeokEco83]
is not easy to understand and sometimes tends to
mystify you... but it does define the way we tend to think quite well while
writing about
.Key Sherlock Holmes,
Charles Peirce, and C. Auguste Dupin.
According to
.See [SebeokEco83]
we can distinguish three kinds of inference that people,
detectives, doctors, scientists, philosophers, etc. tend to
make. Only the first of these is $valid. But they are all commonly used.
We start by labeling three statements:
(Rule): All the beans from this bag are white.
(Case): These beans are from this bag.
(Result): These beans are white.
Here are three types of inference... written in the MATHS format
.As_is (premises)|- conclusion.
.Box
(Deduction): (Rule, Case) |- $Result.
(Induction): (Case, Result) |- $Rule.
(Abduction): (Rule, Result) |- $Case.
.Close.Box
Which do you think is "common sense"? Which would you trust most?
Which one do you find Sherlock Holmes using in just about every case?
For a full discussion of Sherlock Holmes's logic please see
.See http://philsci-archive.pitt.edu/5306/
Soshichi Uchii's paper "Sherlock Holmes on Reasoning" at
.See http://philsci-archive.pitt.edu/
the University of Pittsburgh's PhilSci-Archive, or
.See http://www1.kcn.ne.jp/~h-uchii/Holmes/
(1991) at Kyoto University.
The $Abduction reasoning is proposed as a
.Key Sign of Progress
(Page 181).
or
.Key Heuristic Syllogism
(pages 186-190)
by $polya:
.Box
If we are approaching land, we often see birds.
Now we see birds.
Therefore, probably, we are approaching land.
.Close.Box
These arguments can be made rigorous by using the Bayesian model of probability.
Here a value from 0..1 indicates the degree of belief we have in a proposition, and
there are formulas for recalculating our `beliefs` as new data is collected. For
more see
.See ./math_81_Probabillity.html#Bayes
my notes on how this works out using Bayes's Formula.
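As a sketch of how this works, here is Polya's birds-and-land example recast with Bayes's formula. Every probability below is invented purely for illustration:

```python
# Bayes's formula applied to the heuristic syllogism. All the
# numbers here are assumed for the sake of the example, not data.
p_land = 0.3                 # prior belief that we are approaching land
p_birds_given_land = 0.8     # "approaching land, we often see birds"
p_birds_given_sea = 0.1      # birds are rarer over the open sea

# Total probability of seeing birds, then Bayes's rule:
p_birds = (p_birds_given_land * p_land
           + p_birds_given_sea * (1 - p_land))
p_land_given_birds = p_birds_given_land * p_land / p_birds

# Seeing birds raises our belief above the prior -- "therefore,
# probably, we are approaching land".
assert p_land_given_birds > p_land
```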
. Proofs and Refutations
(Lakatos):
.Source Imre Lakatos (edited by John Worrall and Elie Zahar), `Proofs and Refutations`, Cambridge University Press, 1976.
This book is full of ideas. The author replays an advanced
seminar in mathematics where the students duplicate the key points
in the history of a particularly elegant theorem in the theory
of polyhedra.
He makes it clear that proofs are often refuted. Mathematics
grows by a process of proof, refutation, and reproof. Indeed,
`the exception improves the rule` more often than not.
The author wanted to decrease the dogmatism of mathematics,
and in particular the dogma that mathematical knowledge grows
by deducing truth from known truths. He shows that actual mathematics
develops in an illogical fashion because it grows by discovering
the underlying assumptions as well as by working from these to
prove conclusions.
. Proofs come after the thinking and experimentation
p.88 of
.See [Hogben37]
.Box
There is always a danger of carrying this so far that we get lost in a maze
of symbols. The mathematician then does what the sensible editor does. He
brightens up the discussion by bringing in a cartoon. He goes back to his
hieroglyphs, to geometry, whenever he can. To go back is not quite the
right expression, because new kinds of hieroglyphs such as "graphs" have
been devised (like business advertisements) to give a vivid picture of
operations which could not be conveyed by the earlier kinds.[...]
This tendency of number language to come back to the firm earth of figures
and models, when doubts arise, has played a great part in building up the
rules of what mathematicians call proof. In one sense proof has much the
same relation to discovery as art criticism has to authorship. Mathematical
truths are usually discovered by experimenting with individual cases first
and "proved" afterwards. The proof is then an obituary notice on a
praiseworthy statement. It exposes how it is connected with all the other
mathematical rules we have learned to use. As we are always learning other
rules, the methods first used to justify a rule in mathematics are nearly
always held to be lacking in "rigour" by mathematicians who come later.
This is how one of our most abstract mathematicians, Professor G. H. Hardy,
speaks of one of the most abstract branches of mathematics: "The theory of
numbers, more than any other branch of mathematics, began by being an
experimental science. Its most famous theorems have all been conjectured,
sometimes a hundred years or more before they are proved; and they have
been suggested by the evidence of a mass of computations"
.Close.Box
. Problems with Word Problems
starting with p.304 of
.See [Hogben37]
.Box
What is called solving an equation is putting it in a form in which its
meaning is obvious. The rules of algebra tell us how to do this. The really
difficult step consists in translating our problem from the language of
everyday life into the language of algebra. At this point the mathematician
himself may go astray, because the mathematician may have less opportunity
for understanding the problem as stated in the language of everyday life
than the plain man who is not a mathematician. Once a problem has been
translated into a mathematical sentence (or equation), we can safely trust
the mathematician to do the rest. What remains is a problem in the peculiar
grammar of mathematics. The danger lies in trusting the mathematician to do
the translation.
.Close.Box
. Proofs Without Words
Mathematicians have been collecting these and the
USA Mathematical Talent Search site
.See http://www.usamts.org/
has a gallery
.See http://www.usamts.org/Gallery/G_Gallery.php
of animated proofs.
Somebody
.Hole Visual Euclid
published a book of proofs from Euclid expressed using colored pictures.
Some of these have been animated as Java Applets.
. The Logic Book
.Open BergmanMoorNelson90
Merrie Bergman & James Moor & Jack Nelson
The Logic Book (2nd edn)
McGrawHill (1990) ISBN 0-07-909524-0 LC# BC135.B435
=TEXT FORMAL LOGIC NATURAL DEDUCTION TREES SEMANTIC TABLEAUX
.Close
.Close Further Reading
.Open Motivation and Theory
. Arguments
Arguments move from assumptions to conclusions. If $valid (logical)
then the truth of the assumptions forces the truth of the conclusions.
However, most so-called arguments leave out many assumptions. Searching
these out can help discover one that is not true and so lead
to a discovery or a creative leap. This is helpful when you get
stuck. Formalizing the arguments that got you stuck will often
reveal an assumption that you don't want to make.
Even more intriguing: if a valid argument leads to a known truth
then it does not prove the truth of the assumptions. It can
strengthen one's belief in the assumptions -- but this is only a
matter of probability, not proof.
An argument
never proves anything positive about assumptions. It can show
something positive about a conclusion however.
When a set of assumptions leads to a false conclusion then one
of the assumptions (at least) is false. Thus arguments help
us question our assumptions. Hence they lead us to revise them
until we have a set that does not contradict `reality`. The
revision process, by the way, is not a logical process, but
a `dialectical` one -- the steps are error prone. Indeed we
may need a way to record the counter-arguments or rebuttals that
develop in a real argument.
.Open Quick Example
As a quick example: if we assume (informally) the first
three statements below then the final statement follows
from the assumptions:
.Let
All birds can fly,
All penguins are birds,
No penguin can fly.
()|-No penguins.
.Close.Let
So when we meet a real penguin and it does not fly,
we need to revise our assumptions.
Perhaps penguins fly under water? Perhaps some birds are flightless?
Perhaps we have mis-classified penguins as birds? Choosing
the right one to change is not a logical process but you can
probably do it intuitively... Or if we are scientific
we spend more time studying birds:
sparrows, ostriches, crows, kiwi, canaries, cassowary, and so on.
We may end up with
birds divided into normal_birds and flightless_birds.
.Let
All birds are normal_birds or flightless_birds,
All normal_birds can fly,
No flightless_birds can fly,
()|- No normal_birds and flightless_birds,
All penguins are birds,
No penguin can fly.
()|- All penguins are flightless_birds.
.Close.Let
In every day reasoning we are used to arguments that can be defeated
by other arguments. Many of these are like the above and can be resolved
by refining our assumptions. There has been research into the structures of
these defeasible, dialectic, or plausible arguments: see
.See [ChesnevarMaguitmanLoui00]
for a history. Handling contradictory assumptions is also a key part
of requirements analysis in developing software and there have been attempts
to develop logics that help in this task. See
.See [HunterNuseibeh97]
.See [Nuseibehetal94]
.See [Finkelsteinetal94]
for example. You
can get a quick bibliography of recent work on `consistency` by using
my search engine:
.Find [Cc]onsisten
In MATHS we can document rebuttals simply:
.But
Penguins can't fly.
.Close.But
Rebuttals should be put after an argument that is not a full proof
or a statement. Several rebuttals may be put after a statement and they all
attack that proof/statement.
.Let
It is raining.
If it rains I will get wet.
|- I will get wet.
.Close.Let
.But
I carry an umbrella.
|- I will not get wet.
.Close.But
To rebut a rebuttal, place it inside the first rebuttal.
.Let
It is raining.
If it rains I will get wet.
|- I will get wet.
.Close.Let
.But
I carry an umbrella.
|- I will not get wet.
.But
The wind is blowing.
|- I will get wet.
.Close.But
.Close.But
In a sense the penguin problem arises because of the rarity of penguins
and other flightless birds. If more birds did not fly we wouldn't have
assumed that "all" fly.
There have been attempts to develop
reasoning based on statements like "All birds normally fly".
In mathematics, some thing analogous happens. You will see
statements like:
.Set
"In all but a finite number of cases,...."
"Everywhere, except on a set of measure zero, ..."
"For large enough values, ...."
"In the normal case,...."
.Close.Set
MATHS does define some quantifiers to handle these.
.See http://www.csci.csusb.edu/dick/maths/logic_11_Equality_etc.html#Advanced%20Quantifiers
Probability and statistics are the branches of mathematics
that are designed to handle these problems. Statements like
`90% of all birds can fly.`
allow us to reason about uncertain situations. But they
are essentially more complicated than the propositions
on this page. However, we can make a theory of probability
using the simple logic discussed here. See
.See http://www/dick/maths/math_81_Probabillity.html
.Close Quick Example
. Absurdity and Contradictions
A logistic system can only be trusted if it does not lead to contradictory
or absurd propositions. If the proof rules of MATHS are followed this
should not occur. Natural deduction systems work by dividing up the
current situation into special cases and showing that the majority of
these (even all of them) are contradictory. If all are contradictory then
the assumptions can be rejected. If some are open then the assumptions are
taken to imply one or more of these conclusions.
By Murphy's Law, it will happen that MATHS users will construct
documentation that is inconsistent. The symptom of this is when a
proposition and its negation are derived within a piece of documentation.
There has been a lot of work done in the 1990s on inconsistent requirements
for software. When talking to people about what they want software
to do it is common to find large numbers of small and large contradictions.
It is wise to do something about these when they are discovered.
In any case a contradiction that occurs in MATHS is anything but
a disaster!
It is an opportunity to form a conclusion. It proves something. The
correct thing to do is to close off the conclusion and the assumptions and
declarations leading to the contradiction into a "set of documentation"
that proves a result. Thus by simple editing the whole is protected from
inconsistency. The result is a new theorem!
A historical example of a project that failed through not doing this is
the Pythagoreans, who failed as mathematicians when they killed the man who
proved that the assumption that `sqrt(2)` was a ratio led to a
contradiction. They could have proceeded by making this into a
demonstration that there must exist numbers that are not ratios. Instead
they preferred to support a mystical theory which they knew to be unsound.
There are no Pythagoreans left today.
Imre $Lakatos studied the detailed behavior of mathematicians when faced
with a proved theorem and a number of exceptions to it in the 19th
century. From this he worked out an interesting heuristic (a guide
for the inexpert). A valid proof is (by definition) one that, given true `givens`,
guarantees the truth of the conclusion. Now most proofs
are somewhat incomplete but it is possible (by hard work) to
construct such iron chains leading from assumptions to conclusions.
In any case, $Lakatos pointed out that this means that if the conclusion
contradicts some other fact you should examine what you have taken
as `given`, what you have assumed in the proof, to see which of these
are also contradicted. The assumptions that conflict with the
new data can now either be improved, or they may have to be
incorporated into the statement of the theorem. In summary,
a proof plus a refutation improves the theory. See
.See Further Reading
Absurdity happens! You will find that your cherished and
documented assumptions lead to conclusions that do not fit with
observation. Again this is an opportunity, not a problem! Science
takes a leap forward whenever reality destroys a theory. First the data
has to be checked and then the theory may have to be modified.
For example the first evidence for a hole in the ozone layer was rejected
by NASA because the values couldn't happen - it had to be "experimental
error". Astronomy is full of examples of absurd ideas being rejected
that were accepted later.
There is an early map of the stars that someone drew which shows
that one star had been erased and put near by - because a later observation
showed it in a new position. The star was one of the unknown outer
planets. Kepler wrote and crossed out "These points might as well fit an
ellipse", and delayed his discovery of "Kepler's Laws" by several years.
Because "action at a distance" was repugnant to Descartes'
philosophy,
his followers later rejected Newton's law of gravitation. For more
on the history of astronomy see:
.Source Koestler 59 , Arthur Koestler, "The Sleepwalkers"
Acknowledgment:
.Set
I'd like to thank m.dolbear@lineone.net for advice on redrafting the above
paragraph in September 2001.
.Close.Set
An engineer is in trouble when reality shows up a misunderstanding -
because the system often becomes unsafe
or ceases to work. The wise engineer takes
time to scientifically examine and model the situation prior to searching
out the problems and solving them. No work is started until a mathematical
analysis proves that the system is safe (within a generous "safety
factor"). The wise engineer also finishes up by making experimental
prototype versions first. As an example, in the UK there were once two
rival groups designing an airship. One was designed on an elegant
mathematical model and known to be safe. The other was "hacked" together -
despite evidence that it would fall to pieces. The properly engineered
system flew, the other one was the R101 and crashed in flames - breaking up
as predicted
.Source Nevil Shute, "Slide Rule".
However never totally discard a theory - document the context in which it
did/does work and file it under - "It seemed like a good idea at the time...".
.Close Motivation and Theory
.Open Automation
Most interesting logics have no automatic decision procedure
that generates proofs of any given statement. However, there
exist many small logics for which there are automatic tools.
For example, formulas in the propositional calculus can be
proved by several algorithms:
.Table
.Row Truth Tables (Emil Post)
.Row Proof_by_cases+Indirect (Jensen)
.Row Tree structured analysis
.See ./logic_27_Tableaux.html
.Row Boolean Algebra (Boole)
.Close.Table
Notice however that except in the simplest cases the problem
is `intractable`. If we have `n` propositional
variables, the time to prove/disprove
a statement increases at least exponentially in `n`.
The problem is harder when we have predicates and equality to worry
about. Indeed it has been shown to be undecidable!
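A truth-table decision procedure (in the spirit of the first row of the table above) takes only a few lines of Python; the loop over all 2^n assignments is exactly the source of the intractability:

```python
from itertools import product

def is_tautology(formula, variables):
    """Decide a propositional formula by checking all 2^n truth
    assignments. `formula` maps a dict of truth values to a bool."""
    return all(formula(dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# Modus ponens, (p and (p => q)) => q, is a tautology:
mp = lambda v: (not (v["p"] and ((not v["p"]) or v["q"]))) or v["q"]
assert is_tautology(mp, ["p", "q"])

# p or q is satisfiable but not a tautology:
assert not is_tautology(lambda v: v["p"] or v["q"], ["p", "q"])
```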
For practical evidence of the difficulties of proving some theorems,
consider that some (Fermat's Last Theorem for example) took
400 years to prove. Indeed sometimes a proof can only be discovered
by somebody who is not searching for it. The proof lies in a
field of study separate from the one where the theorem appeared!
Tools and notations can clear the
logical underbrush and allow the necessary creative steps
to be made.
There are tools on the WWW. For example there is an
automatic theorem prover for many
non-classical `propositional` logics:
.See http://lwbwww.unibe.ch:8080/LWBinfo.html
I have developed a very simple Prolog program to evaluate,
test
and disprove boolean expressions:
.See http://www/dick/cs320/prolog/bool.plg
and in some courses students are asked to write a
program to generate semantic tableaux:
.See http://www.phon.ucl.ac.uk/home/hans/courses/plinc101/project2.html
Full blown `theorem provers` tend to need human intervention and
guidance. The record for humans trying to prove a theorem is
approximately 400 years.
One popular tool for computer people is the Prototype Verification System:
PVS::=http://pvs.csl.sri.com/,
at SRI.
.Hole Links to your tools can be put here.
.Close Automation
.Open The Dilbert Salary Theorem
(physics, proverbs)|-(DST): Engineers and scientists can never earn as much as administrators and sales people.
The following proof was supplied to Tom and Ray
.See http://cartalk.cars.com/
by Kate Fowler,
Denver, Colorado, as an informal proof
.See http://cartalk.cars.com/Mail/Haus/2000/04.29.html
in April of 2000.
. Proof of DST
.Net
(physics)|- (dst1): Power = Work / Time.
(proverbs)|- (dst2): Knowledge = Power.
(proverbs)|- (dst3): Time = Money.
(dst1, dst2, Power => Knowledge)|-
Knowledge = Work/ Time,
(dst3, Time => Money)|-
iff Knowledge = Work/ Money,
( * Money, / Knowledge )|-
iff Money = Work/ Knowledge.
(-1) |- knowledge approaches zero, money approaches infinity, as long as some work is done.
(-1)|- (Conclusion): the less you know, the more you make.
.Close.Net
. Exercise on the DST
So, why is the above argument invalid? And why is the conclusion
still OK?
.Close The Dilbert Salary Theorem
.Open Glossary
above::reason="I'm too lazy to work out which of the above statements I need here".
given::reason="I've been told that...", used to describe the given part of a problem or a logical system.
goal::theorem="The result I'm trying to prove right now".
let::reason="For the sake of argument let...", many times a proof starts
by assuming something called the hypothesis,
.See ./logic_25_Proofs.html#Block Structure
hyp::reason="I assumed this in the last Let".
RAA::reason="Reductio Ad Absurdum".
reason::="Something that often appears as the evidence input to a step in an argument."
.Close Glossary
.Open Comments and Reviews
Open for business! Please tell me what you think of this
page. Send Email to `dick` at `Csci CSUSB edu`.
.Close Comments and Reviews
. Glossary
argument::=`A sequence of steps that go from a set of assumptions($given) to a conclusion($goal)`.
valid::=`Something that can not produce a false conclusion from true givens`.
There are valid arguments, valid proofs, and valid steps.
.Close Help with Proofs and Demonstrations