A Turing machine has a finite number of states in its CPU. However,
the number of states need not be small (like 6). As in a Pentium
chip, we can store values in the CPU as long as there are only a
finite number of them. For example, all real computers have registers,
but there is only a fixed number of them, AND each register holds
a fixed (and finite) number of bits, so it can take only finitely
many values.
Example: a simple computer whose control unit holds a PC with a
16-bit address, plus an arithmetic unit holding one double-length
64-bit Arithmetic Register. This stores 16 + 64 = 80 bits
of information plus a small internal control state:
{fetch, decode, compute, store, next} perhaps. So we
have a finite state machine with more than 2^80 states!
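Spelling out the count as a quick check (using the 16-bit PC, the 64-bit register, and the five control phases from the example above):

```latex
% total CPU configurations = control phases x PC values x register values
|Q| = 5 \times 2^{16} \times 2^{64} = 5 \cdot 2^{80}
```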
It is a pity that we don't have a high level language for TMs; if
we did, we could write statements like
x = tape.head();
perhaps.
Suppose that we wanted to code (in C++) a TM with states 'q0', 'q1', ... plus
storage 'x'; we might write code like this:
q0: x = tape.head(); tape.move_right(); goto q1;
q1: if (tape.head() == B) return FAILS;
    if (tape.head() == x) { tape.move_right(); goto q1; }
...
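One way to make the goto-style fragment concrete: a minimal C++ sketch that simulates a TM storing one symbol in its finite control. The function name `accepts` and the tape representation are my own choices, and the machine is in the spirit of the book's Example 8.6 (accepting 01* + 10*), not a transcript of it.

```cpp
#include <cstddef>
#include <string>

// A TM whose finite control stores one symbol x alongside the control
// state.  accepts() and the tape layout are illustrative, not from the
// book; the machine is in the spirit of Example 8.6 (01* + 10*).
bool accepts(const std::string& input) {
    const char B = ' ';                  // blank symbol
    std::string tape = input + B;        // input followed by one blank
    std::size_t head = 0;

    // "q0": read the first symbol into the storage component x
    if (tape[head] == B) return false;   // empty input: reject
    char x = tape[head];                 // symbol stored "in the state"
    ++head;

    // "q1 with storage x": scan right; reject if x is ever seen again
    while (tape[head] != B) {
        if (tape[head] == x) return false;
        ++head;
    }
    return true;                         // hit the blank: accept
}
```

For example, accepts("0111") is true but accepts("010") is false, since the stored symbol 0 reappears.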
I found the idea of storage in states very confusing. Could you go over
this kind of problem? Also, is it beneficial to present the transition
function in table format, as was done in sections 8.1 and 8.2, for this
example, and what would the table be in this case?
A typical transition table, when data is being stored, has entries like these:

State   Read  Next    Write  Move
(q0,B)  1     (q1,1)  1      R
(q0,B)  0     (q1,0)  0      R

We might interpret each of these as a step that stores the
symbol read from the tape in the CPU. Note: the book uses "[]"
where I use "()".
Tabulating the 6 states x 3 symbols on page 331
is not difficult and may help.
But suppose you have a CPU that has 6 binary flags stored in it plus
3 control states. Now you have 3 * 64 = 192 states to write down...
Tabulating all possible states is not very helpful for machines
with structured or compound states... unless
you allow variables and conditions to appear in the state descriptions AND
expressions in the body of the table.
The other thing missing from the table is the documentation that
explains what each transition means.
Ch 8 pp 331  Multitrack Turing Machine
Can you explain the multiple-track idea and its relation to the Turing Machine?
And maybe show an example in class?
Key idea: \Gamma is the Cartesian product of a finite number of finite
sets.
(Cartesian product works like a struct in C/C++)
For example: computer tape storage is multitrack tape, with something like
8 or 9 bits in each cell.
You can recognise a multitrack machine by looking at the transitions,
because each will have tuples as the read and write symbols.
(In the diagram labels below, "read/write>" means: on reading the first
tuple, write the second tuple and move right.)
(*,1)/(B,0)>
(*,0)/(B,1)>
(B,0)/(B,0)>
(B,1)/(B,1)>
Suppose that all of the above label a loop from q to q in a diagram;
then we have these entries in the table:
State  Read   Next  Write  Move
q      (*,1)  q     (B,0)  R
q      (*,0)  q     (B,1)  R
q      (B,1)  q     (B,1)  R
q      (B,0)  q     (B,0)  R

The first two transitions take a marked (*) bit, flip it, and erase the
mark; the last two copy unmarked cells unchanged.
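One sweep of that q-to-q loop can be sketched in C++, using a struct for the two-track cell (the Cartesian-product idea from above). The names `Cell` and `flip_marked` are illustrative, not from the book.

```cpp
#include <vector>

// One two-track tape cell as a struct: track 0 is the mark (* or B),
// track 1 is the data bit.  Cell and flip_marked are illustrative names.
struct Cell { char mark; int bit; };

// One sweep of the q -> q loop, moving right over the whole tape:
//   (*,i)/(B,1-i)>  flip a marked bit and erase the mark
//   (B,i)/(B,i)>    leave an unmarked cell unchanged
void flip_marked(std::vector<Cell>& tape) {
    for (Cell& c : tape) {
        if (c.mark == '*') {
            c.mark = 'B';
            c.bit = 1 - c.bit;
        }
    }
}
```

Each iteration of the loop reads one tuple and writes one tuple, exactly as in the table.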
Exercise: complete the \delta descriptions that fit with the above:
 ...
 \delta(q, (*,i)) = (___________,_________, R),
 \delta(q, (_____,i)) = (B,i, R),
 ...
Note the book uses "[]" in place of "()" for tuples. Other
books use "<>"!
The description of \Sigma does not mention the encoding of
"c" as [B,c].
In example 8.7, I had a problem understanding the transition functions,
and the role of each part of the function.
Can you explain what each part of the transition function is supposed to
mean or do?
Ch 8 pp 335  Subroutines
Would you please explain figures 8.14 and 8.15 a little more,
and how the subroutine Copy works?
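I can't reproduce the book's figures here, but the general marking trick that a Copy subroutine uses can be sketched as a simulation in C++. The function `copy_block`, the '1' separator, and the 'X' mark symbol are my own assumptions for illustration, not the book's exact construction in figures 8.14/8.15.

```cpp
#include <cstddef>
#include <string>

// The usual TM marking trick behind a Copy subroutine: to duplicate the
// block of 0s before the separator '1', repeatedly mark the leftmost
// unmarked 0 as X, carry it past the separator to the first blank, and
// return; finally restore the Xs.  copy_block, the '1' separator, and
// the 'X' mark are assumptions, not the book's exact construction.
std::string copy_block(std::string tape) {
    std::size_t sep = tape.find('1');            // separator position
    for (;;) {
        std::size_t i = tape.find('0');          // leftmost unmarked 0
        if (i == std::string::npos || i > sep)   // block fully marked?
            break;
        tape[i] = 'X';                           // mark it
        std::size_t j = tape.find(' ', sep);     // first blank past '1'
        tape[j] = '0';                           // write the copy
    }
    for (std::size_t k = 0; k < sep; ++k)        // unmark the original
        if (tape[k] == 'X') tape[k] = '0';
    return tape;
}
```

For example, copy_block("0001   ") yields "0001000": the three 0s before the 1 are copied after it, one per marking round.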