[Skip Navigation] [CSUSB] / [CNS] / [Comp Sci Dept] / [R J Botting] >> [CS546/646] >> 05 [Text Version]
[About] [Contact] [Grades] [Projects] [Submit] [Schedule] [Syllabus] [Search ]
Session: [01] [02] [03] [04] (05) [06] [07] [08] [09] [10] [11] [12] [13] [14] [15] [16] [17] [18] [19] [20]
Wed Jan 30 10:27:38 PST 2008


    Session 05 -- introducing the Turing Machine


      Date Meet'g Study(2 pts) Bring(5 pts) Topic(5 pts) Notes
      Previous [ 04.html ] Chapter 2+Chapter 6 section 6.1 Ex Automata
      Today 5 Chapter 8 sections 8.1, 8.2 Ex Turing Machines
      Next [ 06.html ] Chapter 8 sections 8.3, 8.4 Ex Programming TM

      Chapter 8 -- Introduction to Turing Machines

        Ch 8 Why do we need to prove everyday questions undecidable or intractable?

        Top ten reasons?

        • 10. It's fun proving difficult results... well it feels good when you stop with a completed proof.
        • 9. Because I say so.
        • 8. Basic Brain Training. No pain, no gain.
        • 7. You don't want other people to know more than you do.
        • 6. "Why did you climb the mountain?" "Because it was there!"
        • 5. Interviews!
        • 4. Once textbook proofs have been mastered, you can prove the difficulty of problems quickly and informally -- and be right.
        • 3. You might become rich and famous.
        • 2. To impress your peers.
        • 1. You don't want to be stuck doing an impossible job.

        Section 8.1 Problems Computers cannot Solve

          8.1.1 Hello World

          8.1.2 Testing for a Hello World Program

          8.1.3 Reducing one problem to another

          A special kind of Reductio ad Absurdum argument. The general form is:
          1. Hypothesis that X exists
          2. ...
          3. Impossibility

          (Close Let )
          Conclude: There is no such X.

          Ch 8.1 pp 316 -- Direction of Reduction

          According to the "Direction of Reduction" box at the top of page 316, if we are trying to find out if P2 is undecidable, we need to reduce a known undecidable problem like P1 to P2. But example 8.1 on page 314 seems to do the opposite, reducing the foo function program to the "hello world" program. Could you please explain this?

          You are not alone in feeling that these "reductions" go in the wrong direction -- the box is there because of this confusion. In Example 8.1 the formal structure of the proof is:
          (P2): given any program Q and input y decide if Q ever calls function "foo()".
          (P1): given any program Q and input y decide if Q outputs "hello world".

          We know that P1 cannot be solved, and we use this fact as follows:

          1. Suppose a program X exists that solves problem P2.
          2. We can write a program T that takes a program Q and changes it into Q', so that instead of outputting "hello, world", Q' calls function "foo()"...
          3. So if we feed Q' into X, X will tell us whether Q' ever calls "foo()" -- that is, whether Q outputs "hello, world".
          4. So combining T and X we get a program that solves P1 above.
          5. Which is impossible.

          (Close Let )
          We conclude: No program can solve P2.

          8.1.4 Exercises

          Hint: draw a picture!

          Ch 8 pp 316 -- Exercise 8.1.1

          I'm having a hard time understanding the procedure used in the exercise for 8.1. Could you go over how to tackle reduction problems like this one?

          I attempted exercise 8.1.1.b in class and found my notes were not formal enough for me to reconstruct my thinking. My solution goes like this:

          1. Suppose there exists a program P2 that reads in a program and its input and decides if it produces any output.
          2. We already know that we cannot have a program that decides whether another program outputs "hello, world" or not.
          3. But we can write a program to edit another program so that it never outputs anything but "hello, world". In other words, we remove all output statements that don't produce "hello, world".

          4. This program can be used to preprocess programs ready for P2.

          5. The combination would then be a solution to the P1 "hello, world" problem.
          6. But no such solution exists,

          (Close Let )
        1. Thus there is no program to test output vs no output.

        Section 8.2 The Turing Machine

          8.2.1 The Quest to Decide All Math Questions

          8.2.2 Notation for a Turing Machine

          Ch 8.2.2 pp 319 -- Notation of Turing Machines

          In this section they said the Turing Machine was similar to the finite automata. What is the major difference between the two and why is the Turing Machine better?

          Exercise for the class: Discuss, using the book. One sentence answer.

          8.2.2 Review Formal Definition

        1. Turing_Machine M::=(Q,Σ,Γ, δ, q0, B, F) where
          1. Q: __________
          2. Σ: __________
          3. Γ: __________
          4. δ: __________
          5. q0: __________
          6. B: __________
          7. F: __________

          (End of Net)

          Fill in the blanks above without looking in the book.

          Then check your answers.

          Ch 8.2 pp 318-321 -- BLANK

          Is the Blank tape symbol an actual symbol written on the tape or is it just literally blank space to the right or left of the tape that can be written on?

          The blank B is a genuine tape symbol: it is in Γ but not in Σ, and at the start every cell except the finitely many holding the input contains B. So "blank space" on the tape really means cells holding the symbol B, and they can be over-written like any other cells.

          Turing's Definition of a TM

          In Turing's paper, and in early papers and books, the transition function is described as being defined in a table of quintuples. Here is the example on page 322, Figure 8.9, encoded this way: [ 05.tm ] (state 4 is now my H "halting" state).

          Emulating Turing Machines in C

          I've written an emulator that handles TMs like this. Here [ ../src/misc/tm.c ] is a C program that simulates a small Turing machine. And here: [ ../src/misc/test.tm ] [ ../src/misc/tm.tm ] are a couple of sample files containing this kind of TM.

          8.2.3 Instantaneous Descriptions of Turing Machines

          Ch 8 pp 320-321 -- Introduction to Turing Machines

          I am having trouble understanding IDs. Can you please go over how IDs are represented in TMs? Examples would help.

        2. ID::=left_hand_string state head_symbol right_hand_string.
          For example "abcqdef" means the machine is in state q and looking at symbol "d". To the left is "abc" and to the right of the head is "ef".
          "abcdefq" means that the machine is looking at the first of the right hand blanks.
          "qcdef" means that the machine is looking at c and this is the left-most non-blank symbol.

          Ch 8 pp 318-319 -- Jump/Branch

          Can Turing Machines jump or branch like assembly programs?

          One way to encode a finite state controller is to use gotos, like this:

           		q0: if(tape.head=='1') { tape.write('X'); tape.goRight(); goto q1; }
           		    if(tape.head=='Y') { tape.write('Y'); tape.goRight(); goto q3; }

          I think you can say that a TM is nothing but goto statements:-) TMs pre-date the structured revolution by about 30 years.

          8.2.4 Transition Diagrams for Turing Machines

          Ch 8 pp 325 -- Transition Diagrams for Turing Machine

          How come there is no accepting state in figure 8.12?

          This machine is designed to implement a function rather than accept a language. So it doesn't have an accepting state.

          8.2.5 Language of a Turing Machine

          Ch 8.2 pp 322-5 -- The Turing Machine

          In transition function tables, why is '-' being used to denote both when a TM accepts and dies? Are the accepting states always the rows with ONLY dashes in them?

          A Finite State Acceptor reports on a string as it reads it -- at each read it either accepts or rejects what it has seen so far. A TM reports once, on the whole string it has been given, and then it halts. A TM does not accept until it halts.

          Later we will worry about rejecting inputs by never halting.

          8.2.6 Turing Machines and Halting

          Ch 8.2 pp 327 -- When does a Turing machine not halt ?

          It depends on the machine. Try this one:
          Sometimes there is a tight little loop, and sometimes a long loop, with the machine wandering all over the tape, writing and over-writing it.

          Ch 8 pp 327 -- "Recursive"

          Why are Turing machines that halt called "recursive"? When I think recursion, I think of things that happen over and over again (like a recursive loop), and I don't see any implicit guarantee that it won't happen again in that word. Or is it in reference to the necessity of a base case of a recursive algorithm?

          The word "recursive" is historical: Mathematicians were working on the theory of recursive functions just before, and independently of, Alan Turing's work. It turns out that what they called "Recursive Functions" are precisely those that can be computed by a TM that always halts. We will spend more time on this in class 09, using the Web to fill in the gaps in the text.

          Meanwhile: a machine (that halts) is not itself called recursive, but its existence proves that the problem it solves is recursive.

          8.2.7 Exercises


            When asked for the IDs you need to trace the progress of the TM: q00 |- Xr0 |- X0r |- Halts.
            Be careful to always, always, always look to the RIGHT of the state symbol, not the left. The TM only sees the symbol to the RIGHT.

        Ch 8 pp 331 -- Multitrack Turing Machine

        Can you explain multiple track idea and its relation to the Turing Machine? And maybe show one example in class? Thanks.

        Yes, but next time! [ 06.html ]

        Ch 8.1 and 8.2 -- Deadline for class 5

        Fewer points after this message is delivered.

    . . . . . . . . . ( end of section Session 05 -- introducing the Turing Machine) <<Contents | End>>

    Standard Definitions

  1. FSA::="Finite State Acceptor/Automata", a theoretical machine with a finite set of states.
  2. ND::="Non-deterministic", a machine that can try out many possibilities and always guesses the right choice. Note: For a given problem a ND solution is simpler and easier to understand than a deterministic one. We study ND machines to discover how to use them as high-level designs for more complicated deterministic machines. Plus it's fun.
  3. PDA::="Push down Automata", a machine with a finite control and a single stack for remembering intermediate data. Kind of like half a TM.
  4. RJB::=The author of this document, [ ../index.html ]
  5. |-RJB="Richard J Botting, Comp Sci Dept, CSUSB".
  6. Schedule::= See http://csci.csusb.edu/dick/cs546/schedule.html.
  7. Search::= See http://www.csci.csusb.edu/dick/cs546/lookup.php.
  8. Syllabus::= See http://csci.csusb.edu/dick/cs546/syllabus.html, and also see [ syllabus.html ] (my generic syllabus).
  9. TBA::="To Be Announced".
  10. TM::="Turing Machine".
  11. NTM::="Nondeterministic Turing Machine".
  12. nondeterministic::="A machine that can make choices as it runs and accepts an input iff any sequence of choices leads to acceptance."
  13. DTM::="Deterministic Turing Machine".
  14. runtime::=run_time,
  15. run_time::="The number of steps a TM takes before it halts when started on a given input".
  16. Recursive::="Languages that can be recognised (instances accepted or rejected) in a finite runtime".
  17. RE::=Recursive_enumerable,
  18. Recursive_enumerable::="Languages that are accepted in a finite time by a TM, but may be rejected by the TM failing to halt."
  19. complexity::="The largest run_time for any word with a given length", usually expressed as an asymptotic (Big-O) formula.
  20. P::@problem=class of problems that a DTM can compute in polynomial_time.
  21. NP::@problem=class of problems that a NTM can compute in polynomial_time.
  22. polynomial_time::=An algorithm/TM is said to halt in polynomial time if the number of steps taken on n items of data is O(n^c) for some constant c.

    From Logic

  23. LOGIC::= See http://www.csci.csusb.edu/dick/maths/logic_25_Proofs.html

  24. (LOGIC)|- (ei): Existential instantiation -- if P is true of something then we can substitute a new variable for it.
  25. (LOGIC)|- (eg): existential generalization -- if P is true of this thing then it is true of something!
  26. (LOGIC)|- (given): a proposition that is assumed as part of a problem statement.
  27. (LOGIC)|- (let): a proposition that is assumed as part of a Let...End Let argument. Typically if A and B and C then D starts with assuming A,B, and C are given. They are called the hypotheses.
  28. (LOGIC)|- (goal): a proposition that we wish to prove. Typically the part after the then in an if...then... result. This is called the conclusion.
  29. (LOGIC)|- (def): By definition of a term defined above.
  30. (LOGIC)|- (algebra): Either by using well known algebraic results, or by using a series of algebraic steps.
  31. (LOGIC)|- (RAA): Reductio ad absurdum. The end of a Let/Po/Case/Net that establishes the negation of the last set of assumptions/hypotheses.