Ch 8.5 pp 352-353 -- Recursively Enumerable Languages
What is a recursively enumerable language?
The term dates back to the older theory of recursive functions, when a
recursively enumerable language was a set of strings
that could be generated by a recursive procedure.
The name is now given to languages that are accepted by some Turing machine,
even if rejection sometimes happens by the machine never coming to a
conclusion -- never terminating.
A recursive language was a language that could be recognized by a
recursive function. These days it means a language accepted by a TM that
either halts and rejects the input or accepts the input (and halts).
One of the key results of the old recursive function theory
(we'll research this later...)
was that recursive definitions that did not have to
terminate on all inputs
were more powerful than those that always halted.
Ch 8 pp 346 -- How do we simulate a two-way infinite tape on a semi-infinite tape?
I call this trick/technique folding the tape. There are
two tracks, Upper and Lower, and the finite control stores a value in {U,L}
indicating which track the head is scanning. The lower track is backwards!
If we had a two-way tape like this, with the head scanning b:

tape  ...  x  y  z  a  b  c  ...
head                    ^

then an equivalent one-sided TM is like this:

upper  a  b  c
lower  *  z  y  x
head      ^
store  U

If instead the two-way head were scanning z, just to the left of a:

tape  ...  x  y  z  a  b  c  ...
head              ^

then the equivalent one-sided TM is:

upper  a  b  c
lower  *  z  y  x
head      ^
store  L

Notice the one-sided head sits in the same cell both times; only the
stored track letter changes.

Also note the trick of pseudoblanks.
Two stacks make a TM.
Ch 8.5.2 pp 349 -- Multistack Machines
What's the difference between B the blank symbol and $ the endmarker?
A TM doesn't need an "end of input" symbol because its input has no
end... (but is mostly blank).
The $ endmarker is only
found after the end of the "real" data input to the acceptor.
A blank might appear in the middle of TM data.
How exactly does a multistack machine work? I understand it is like a PDA,
but where exactly does it differ?
It can have more than one stack to store intermediate results. Plus
it has operations that can pop data off one stack and push it onto
another stack.
Could you explain Theorem 8.13 in more detail, and give an example using the theorem?
In class I ran some steps in which two stacks simulated a TM shown earlier
in the book.
Ch 8.5.3 pp 351 -- Counter Machines
Can you demonstrate counter machines using pictures, to give a better understanding?
In class I ran one of the exercise solutions, tracing the two counters.
Think of counter machines as programs with unsigned long int variables.
Such a program only uses statements like these:
i++;
if(i!=0)i--; else exit(FAILURE);
Plus the ability to test two conditions for each variable:
i==0
i!=0
But remember these variables can get as big as we need.
Ch 8.5.3 pp 351 -- Counters
What happens if a program does try to subtract from a counter that is at 0?
I think that the acceptor halts and rejects the input string.
You don't need to learn about CFLs for this course. They
will not be mentioned on the final.
The trick of using the prime numbers to encode n-tuples as
a single number is called Gödel numbering.
There is a misprint in part b that I found on the web site for the book.
The correct language is:
 { 0^n 1^m | m >= n >= 1 }
Hints.
 The solution for a is not on the web -- you can get credit for trying it.
 Don't attempt the !! exercises.
 Look at the solution to * c before you attempt b.
Ch 8.5.1 c pp 354 -- Counter Machines
Would you please explain how to use two counters to describe the language represented by problem 8.5.1 c?
I used my solution to this (based on the web solution) as my
simulation of a counter machine, above.
Ch 8.6.1 pp 355 -- Simulating a Turing Machine by Computer
Ch 8.7 pp 362 -- Words of Fixed Maximum Length
Reading the book's explanation on why it allows words to grow by 1 bit at
a time, I'm a little confused by why the "more liberal" approach
simplifies their proof. Does forcing words to be a fixed maximum length
actually reduce the asymptotic complexity of the simulation of a computer
by a Turing machine, or is the complexity the same as their
"more liberal" approach?
If the words don't grow then you have a finite state machine.
Then all you have to worry about is the secondary storage: disks and tapes.
But the time the simulator requires depends on how much data is stored: TMs
don't have random access. So the book works out a way that allows words
to grow -- but not too fast. In effect they are allowing for
the fact that real computers take longer to multiply and divide numbers
than they do to add, subtract, or copy them.
Can you go over figure 8.22 and explain how a TM simulates a computer?
I did a lot of hand waving...
Ch 8.6.2 pp 356-361 -- Are there things a computer can do that a Turing machine can't?
We had a fun debate: My summary: a TM is not a real machine
so has none of the defects of real computers: breakdowns, abuse, pornography,
upgrades, ... even tho' we can simulate a lot of these by adding a state. For
example [blue screen of death].
Ch 5, Ch 8 pp 361-362 -- Running times of computers and TMs
I got a bit confused with this section. Is it possible for you to go over
the running time differences?
Again: hand waving.