>> [CNS] >> [Comp Sci Dept] >> [R J Botting] >> [CS546/646] >> 04 [Text Version]
Thu Jun 8 13:02:49 PDT 2006


    Session 4 -- Review of Automata


      Date     | # | Study (2 pts)                     | Bring (5 pts) | Topic (5 pts)   | Notes
      Previous | 3 | Preface + Web1                    | URL           | History         | Exam on math (50 pts)
      Today    | 4 | Chapter 2 + Chapter 6 section 6.1 | Ex            | Automata        |
      Next     | 5 | Chapter 8 sections 8.1, 8.2       | Ex            | Turing Machines |

      Chapter 2 Finite Automata

        2.1 Informal Picture

        Ch 2 pp 45 -- What exactly is a Product Automaton?

        Is it different from a DFA or is it just another way to say DFA?

        "Product Automata" is not another way to say DFA!

        A product automaton is the result of combining two or more automata, much as the product of 3 and 4 is 12. Indeed, the number of states in the product automaton equals the product of the numbers of states in the separate automata. For example, the product of the state sets {1,2,3} and {a,b,c,d} is {(1,a), (2,a), (3,a), (1,b), ..., (3,d)}, with 12 states. These are best diagrammed in a grid.

        Whenever you see a problem that reads something like this

         	Accept strings with an even number of 1's and an odd number of zeroes.
        you should think about designing two automata: one for each part of the problem and then calculating the product.

        Exercise: can you find the example later in the book and the exercise that needs a product automaton?

        You can't take this kind of product of two automata unless they have the same input alphabet.

        The product of two automata (Q1,Sigma,delta1,q1,F1) and (Q2,Sigma,delta2,q2,F2) is constructed like this:

        • Take the Cartesian product of the state spaces: Q12 = Q1><Q2,
        • The typical state is (x,y) where x is in Q1 and y is in Q2.
        • The product automaton has the same set of input symbols as each of the component automata.
        • The transitions of the product combine the separate transitions: if the first goes from a to b on input c, and the second goes from d to e on the same input c, then the product goes from (a,d) +> (b,e).
        • ((x,y),a) +> (delta1(x,a), delta2(y,a)).
        • The initial state is the pair of initial states: (q1,q2).
        • The final states are those pairs (x,y) such that both x and y are accepting in their own machines: F1><F2. This gives the intersection of the two languages.
        • OR perhaps the product can accept when any component accepts -- giving the union instead (diagram on board...)
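The construction above can be sketched in C++, in the spirit of auto1.cpp's abuse of the STL. This is a hypothetical example: the DFA struct, the encoding of state pairs as ints, and the even-1's/odd-0's component machines are my own choices, not from the book.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>

// A small DFA: states are ints, input symbols are chars.
struct DFA {
    std::map<std::pair<int, char>, int> delta;  // transition table
    int start;
    std::set<int> accept;

    bool accepts(const std::string& w) const {
        int q = start;
        for (char c : w) q = delta.at({q, c});
        return accept.count(q) > 0;
    }
};

// The product construction from the bullet points: states are pairs
// (x,y), encoded here as x*widthB + y. A string is accepted only if
// BOTH components accept it (F1 >< F2, the intersection version).
DFA product(const DFA& A, const DFA& B, int widthB) {
    DFA P;
    P.start = A.start * widthB + B.start;
    for (const auto& ta : A.delta)
        for (const auto& tb : B.delta)
            if (ta.first.second == tb.first.second) {  // same input symbol
                int from = ta.first.first * widthB + tb.first.first;
                P.delta[{from, ta.first.second}] = ta.second * widthB + tb.second;
            }
    for (int x : A.accept)                             // F1 >< F2
        for (int y : B.accept)
            P.accept.insert(x * widthB + y);
    return P;
}

// The sample problem quoted earlier: even number of 1's AND odd number of 0's.
DFA evenOnesOddZeros() {
    DFA evenOnes{{{{0,'0'},0}, {{0,'1'},1}, {{1,'0'},1}, {{1,'1'},0}}, 0, {0}};
    DFA oddZeros{{{{0,'0'},1}, {{0,'1'},0}, {{1,'0'},0}, {{1,'1'},1}}, 0, {1}};
    return product(evenOnes, oddZeros, 2);
}
```

Each two-state component does one job; the four-state product does both, e.g. evenOnesOddZeros().accepts("110") is true (two 1's, one 0) while "11" is rejected.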

        2.2 DFAs -- Deterministic Finite Automata

        Request for a lecture

        I don't understand deterministic finite automata, especially processing strings, and everything else. Dr. Botting, it will really help if you lecture on ch. 2.

        I'll prepare a short lecture+activities.

        • We are modeling machines that have a memory of the past.
        • The memory is modeled by a set of states.
        • The state changes when an input appears.
        • The change of state depends on the input.
        • We can tabulate how each input affects the state.
        • We talk of a string of inputs... one after another.
        • Automata classify the strings as accepted or rejected.
        • (General machines can have outputs other than yes/no).
        • The accept/reject is based solely on the state of the memory.
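The bullet points above can be turned into a minimal table-driven recognizer in C++ (a sketch of mine, not the book's code; the even-number-of-1's language is just a convenient example):

```cpp
#include <string>

// The whole memory of the past is one state number; each input symbol
// updates it through a table; accept or reject depends solely on the
// final state. This DFA accepts binary strings with an even number of 1's.
bool acceptsEvenOnes(const std::string& input) {
    static const int delta[2][2] = {
        {0, 1},   // in state 0 (even 1's so far): on '0' -> 0, on '1' -> 1
        {1, 0},   // in state 1 (odd 1's so far):  on '0' -> 1, on '1' -> 0
    };
    int state = 0;                       // q0: no 1's seen yet, so "even"
    for (char c : input)
        state = delta[state][c - '0'];   // the change depends only on state + input
    return state == 0;                   // F = {0}: accept by final state alone
}
```

Note the empty string is accepted here: zero is an even number of 1's, which matches the point about epsilon below.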

        In this subject it pays to take careful note (on paper, in a file, ...) of all the formal definitions. An example is on page 46.

        Exercise: Without looking in the book, write down the formal definition of a deterministic finite automaton:

      1. A = (...??...) where
        1. ...

        (End of Net)

        Now check your answer with the book.

        We made a study of all the binary automata: Q={p,q}, Sigma={0,1}, q0=p. They differ in the choice of F and delta. If F={} the language is {}. If F={p,q} then the language is Sigma* (all strings). The interesting automata are those with F={p} or F={q}. We noticed that swapping F between {p} and {q} complements the language, so just consider F={p} --- so that the empty string epsilon is accepted. Then we studied the effect of choosing delta. If delta(p,0)=p and delta(p,1)=p then the language is Sigma* (again). The rest tended to be either parity machines (accept strings of even length, strings with an even number of 0s, or strings with an even number of 1s) or latches (reject from the first 1 on, reject from the first 0 on, reject any string with a 1, reject any string with a 0, or reject all strings with more than one symbol).

        Here [ auto1.cpp ] is the example from section 2.2 coded up as directly as possible in C++ by abuse(?) of the STL.

        Ch 2.2.2 pp 46-47 -- DFA

        Example 2.1 states: {w | w is of the form x01y for some strings x and y consisting of 0's and 1's ONLY}. Then it states that strings in the language include 01. Is that true? If true, doesn't that mean x and y are empty strings? If they are empty strings, shouldn't that be noted in the language?

        Just because "01" is accepted doesn't mean that "00000001000000" is rejected.

        Further, the empty string is just another string (when a theorist is talking).

        By the way: when you program -- make sure your code handles the null cases gracefully!
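As a sketch (my own code, not the book's), a three-state DFA for {x01y} makes the point concrete: the same machine accepts "01" (x and y empty) and "00000001000000" (x and y non-empty), and it handles the null case gracefully.

```cpp
#include <string>

// Three states for the language {x01y}: state 2 is absorbing, so once
// "01" has appeared the string stays accepted no matter what follows.
bool containsZeroOne(const std::string& w) {
    int q = 0;  // 0: no useful 0 yet; 1: just saw a 0; 2: have seen "01"
    for (char c : w) {
        if (q == 0)      q = (c == '0') ? 1 : 0;
        else if (q == 1) q = (c == '1') ? 2 : 1;
        // q == 2: stay put -- "01" was found
    }
    return q == 2;
}
```

The empty string simply never leaves state 0 and is rejected, as the formal definition requires.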

        Ch 2 pp 53 -- Exercise 2.2.1 - Transition Table

        I'm confused about the answer to Exercise 2.2.1a and how the transition table is derived. Part a is a starred exercise (*) and the answer is on the following link: [ sol2.html ] Why are some of the rows repeated where the only difference between them is an 'accepted' or 'rejected' symbol? Rows 1 to 2, and 4 to 5, are some of the rows that I am confused about.

        The DFAs in this book have no real output: some states are just labeled "accept" and the rest "reject". So to show where a marble comes out of the machine you have to code this by attaching it to a state -- not to a transition. Now, the position of the levers is not enough to predict where the marble emerges, because that also depends on where the marble went in... so you include in the state both the position of the levers and what happened! This leads to both r (rejecting) and a (accepting) copies of the same lever positions.

        • It gave me pause for thought when I was trying the exercise before going to the site.
        • I also miscalculated 6 or 7 transitions.... :-(

        2.3 NFA -- Nondeterministic Finite Automata

        Ch 2 pp 58-59 -- The Extended Transition Function

        Page 58 has an equation using a large "U" with i=1..k. Is this "U" the union symbol used on the next page?

        Yes. The Big U is for Union. It is actually a big "CUP". In the same way that Sigma does a whole series of plusses, the big U does a whole series of set unions.

        Ch 2 pp 49-58 -- extended transition function

        Would you please explain a little bit more the difference between the DFA and NFA's "extended transition function"?

        The DFA's extended delta-hat function represents a single path through the states. The NFA version tracks all the possible paths at once.
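A minimal C++ sketch of the difference (the NFA below, which accepts binary strings ending in "01", is my own example, not the book's): the NFA's delta-hat carries a whole set of states forward, where the DFA's carries just one.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>

using StateSet = std::set<int>;

// Example NFA: accepts binary strings that END in "01". State q0 loops
// on everything and nondeterministically "guesses" where the final 01 starts.
const std::map<std::pair<int, char>, StateSet> nfaDelta = {
    {{0, '0'}, {0, 1}},  // stay in q0, or guess this 0 starts the final "01"
    {{0, '1'}, {0}},
    {{1, '1'}, {2}},     // the guessed 0 must be followed by a 1
};

// The NFA's extended transition function: a SET of states, i.e. every
// path the machine might be on after reading w.
StateSet deltaHat(const StateSet& start, const std::string& w) {
    StateSet current = start;
    for (char c : w) {
        StateSet next;
        for (int q : current) {               // union over every live path
            auto it = nfaDelta.find({q, c});
            if (it != nfaDelta.end())
                next.insert(it->second.begin(), it->second.end());
        }
        current = next;                       // dead branches just vanish
    }
    return current;
}

bool acceptsEndsIn01(const std::string& w) {
    return deltaHat({0}, w).count(2) > 0;     // accept if ANY path reaches q2
}
```

On "1101" the set grows and shrinks as guesses live and die, and one surviving path ends in q2, so the string is accepted.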

        Ch 2.3 pp 66 -- Pigeonhole Principle

        Can we prove the pigeonhole principle by mathematical induction?

        I don't know and it is not important for this class.

        2.4 Application - grep

        Ch 2 pp 69-70 -- Finite Automata

        Which is more effective when it comes to doing a text search for keywords: an NFA or a DFA?

        We always end up coding a DFA because our hardware is deterministic. But we use an NFA as a high-level design. In the past I taught a programming methodology that designed all programs as nondeterministic (just leave out the conditions!) and resolved the "recognition difficulties" by back-tracking and/or reading ahead.
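That NFA-as-design, DFA-as-implementation workflow can be sketched as follows (a hypothetical example of mine: the keyword "web" is arbitrary, and the text is assumed to be lower-case a-z). Design an NFA that guesses where the keyword starts, then run it deterministically by tracking the set of live NFA states -- the lazy form of the subset construction.

```cpp
#include <map>
#include <set>
#include <string>
#include <utility>

using StateSet = std::set<int>;
using NFA = std::map<std::pair<int, char>, StateSet>;

// One step of the subset construction: from a set of NFA states, read c.
StateSet step(const NFA& nfa, const StateSet& S, char c) {
    StateSet next;
    for (int q : S) {
        auto it = nfa.find({q, c});
        if (it != nfa.end()) next.insert(it->second.begin(), it->second.end());
    }
    return next;
}

// Lazy subset construction for a keyword-matching NFA: each "DFA state"
// IS a set of NFA states, computed on the fly while scanning the text.
// Assumes lower-case a-z text; the keyword "web" is my arbitrary choice.
bool containsKeyword(const std::string& text) {
    const std::string kw = "web";
    NFA nfa;
    for (char c = 'a'; c <= 'z'; ++c) nfa[{0, c}].insert(0);  // q0 loops on anything
    for (int i = 0; i < (int)kw.size(); ++i)
        nfa[{i, kw[i]}].insert(i + 1);                        // spell out the keyword
    StateSet S = {0};
    for (char c : text) {
        S = step(nfa, S, c);
        if (S.count((int)kw.size()) > 0) return true;         // some path saw "web"
    }
    return false;
}
```

The NFA was trivial to write down; the deterministic book-keeping (the set S) is exactly what a hand-built DFA would have to encode in its states.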

        2.5 epsilon-transitions

        Ch 2 pp -- Epsilon Transitions

        I have heard epsilon transitions called "lambda" transitions. Is there a difference or was it just the preference of the instructor?

        It is the writer's choice whether to use \lambda or \epsilon to represent the empty string. Computer scientists and logicians use \lambda for a totally different purpose, and so they use \epsilon for empty. Grammar and language theorists tend to use \lambda.

        Ch 2 pp 72-79 -- Epsilon Transitions

        Could you please go over Epsilon Transitions in general? Epsilon Closures are confusing and I don't really understand their purpose. (Actually I haven't taken CSCI 500 so about the last half of chapter 2 is new material to me.)

        I think we can skip them for now. Essentially they allow the finite machine to do an internal computation without reading any data. Useful in designing solutions to some problems... but not essential, since we can always get rid of them. We will omit the proof via \epsilon-closures.
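For later reference, the one computation epsilon-closures need is reachability by epsilon-moves alone; a worklist sketch in C++ (the example transitions are my own, not from the book):

```cpp
#include <map>
#include <set>
#include <vector>

using StateSet = std::set<int>;

// ECLOSE(q): every state reachable from q by epsilon-transitions alone,
// i.e. with no input consumed. This is the key step when eliminating
// epsilon-transitions from an NFA.
StateSet eclose(const std::map<int, StateSet>& epsMoves, int q) {
    StateSet closure = {q};
    std::vector<int> work = {q};
    while (!work.empty()) {
        int s = work.back();
        work.pop_back();
        auto it = epsMoves.find(s);
        if (it == epsMoves.end()) continue;
        for (int t : it->second)
            if (closure.insert(t).second)  // newly reached: explore it too
                work.push_back(t);
    }
    return closure;
}

// Example epsilon-moves (mine): 0 --eps--> 1 --eps--> 2, and 3 --eps--> 1.
const std::map<int, StateSet> epsExample = {{0, {1}}, {1, {2}}, {3, {1}}};
```

Here eclose(epsExample, 0) is {0,1,2}: chains of epsilon-moves are followed to the end, which is why the closure (not just the one-step moves) is what matters.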

      Chapter 6 Section 6.1 Push Down Automata

        Ch 6 pp 221-222 -- PDA

        When giving the formal definition of a push-down automaton, the book doesn't mention anything about what happens if lambda = epsilon but the stack X is empty. Would that be considered an error, or would the push-down automaton continue happily along?

        I'm not sure what lambda is here... but normally, if a PDA attempts to pop an empty stack, it stops and fails. In other words, in a nondeterministic PDA, the branches that lead to popping an empty stack die.

        Sometimes PDAs are defined without a set of accepting states. They accept a string by emptying their stack (and stopping!).

        In other words.... this may not be a bug, but a feature:-)
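A deterministic sketch of that behavior (my own example, taking the hint from the #include <stack> joke below): a PDA-style recognizer for { 0^n 1^n | n >= 1 }, where a pop attempted on an empty stack makes the run die, and acceptance is by empty stack.

```cpp
#include <stack>
#include <string>

// Accept { 0^n 1^n | n >= 1 } by EMPTY stack. A real PDA rejects the
// empty string via its initial stack symbol; here a flag does that job.
bool acceptsZerosThenOnes(const std::string& w) {
    std::stack<char> st;
    bool seenOne = false;
    for (char c : w) {
        if (c == '0') {
            if (seenOne) return false;    // a 0 after a 1: wrong shape
            st.push('X');                 // remember one unmatched 0
        } else if (c == '1') {
            seenOne = true;
            if (st.empty()) return false; // popping an empty stack: the run dies
            st.pop();                     // match this 1 against a stored 0
        } else {
            return false;                 // not over the alphabet {0,1}
        }
    }
    return seenOne && st.empty();         // accept by empty stack
}
```

So "0011" is accepted, while "011" dies mid-run trying to pop an empty stack -- the failure really is a feature.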

        Ch 6 pp 219-222 What is the function of a pushdown automaton?

        (1) They are the missing link between finite automata and Turing Machines.

        (2) They model several important algorithms in compilers and interpreters.

        (3) They have provided employment to theorists in the 1950's and practitioners from the 1960's onward.

        (4) #include <stack>

        Ch 2+6 -- deadline for automata questions

        Fewer points after this is received.

        Ch 2 pp 55 -- Nondeterministic Finite Automata

        The author says that NFAs are usually more succinct and easier to design than DFAs. Why is that?

        An NFA can try out many options at once and keeps track of which options are open at any time. This can be done with a DFA, but the DFA has to do explicitly the book-keeping that is implicit in an NFA.

    . . . . . . . . . ( end of section Session 4 -- Review of Automata) <<Contents | End>>

    Next -- Turing Machines

    [ 05.html ]

    Standard Definitions

  1. FSA::="Finite State Acceptor/Automata", a theoretical machine with a finite set of states.
  2. ND::="Non-deterministic", a machine that can try out many possibilities and always guesses the right choice. Note: For a given problem an ND solution is simpler and easier to understand than a deterministic one. We study ND machines to discover how to use them as high-level designs for more complicated deterministic machines. Plus it's fun.
  3. PDA::="Push down Automata", a machine with a finite control and a single stack for remembering intermediate data. Kind of like half a TM.
  4. RJB::=The author of this document, [ ../index.html ]
  5. |-RJB="Richard J Botting, Comp Sci Dept, CSUSB".
  6. Schedule::= See
  7. Search::= See
  8. Syllabus::= See, and also see [ syllabus.html ] (my generic syllabus).
  9. TBA::="To Be Announced".
  10. TM::="Turing Machine".
  11. NTM::="Nondeterministic Turing Machine".
  12. nondeterministic::="A machine that can make choices as it runs and accepts an input iff any sequence of choices leads to acceptance."
  13. DTM::="Deterministic Turing Machine".
  14. runtime::=run_time,
  15. run_time::="The number of steps a TM takes before it halts when started on a given input".
  16. complexity::="The largest run_time for any word with a given length", usually expressed as an asymptotic (Big-O) formula.
  17. P::@problem=class of problems that a DTM can compute in polynomial_time.
  18. NP::@problem=class of problems that a NTM can compute in polynomial_time.
  19. polynomial_time::=An algorithm/TM is said to halt in polynomial time if the number of steps given n items of data is O(n^c) for some constant c.

    From Logic

  20. LOGIC::= See

  21. (LOGIC)|- (ei): Existential instantiation -- if P is true of something then we can substitute a new variable for it.
  22. (LOGIC)|- (eg): existential generalization -- if P is true of this thing then it is true of something!
  23. (LOGIC)|- (given): a proposition that is assumed as part of a Let...End Let argument. Typically if A and B and C then D starts with assuming A,B, and C are given. They are called the hypotheses.
  24. (LOGIC)|- (goal): a proposition that we wish to prove. Typically the part after the then in an if...then... result. This is called the conclusion.
  25. (LOGIC)|- (def): By definition of a term defined above.
  26. (LOGIC)|- (algebra): Either by using well known algebraic results, or by using a series of algebraic steps.
  27. (LOGIC)|- (RAA): Reductio ad absurdum. The end of a Let/Po/Case/Net that establishes the negation of the last set of assumptions/hypotheses.