Cal State San Bernardino -- Comp Sci Dept -- R J Botting -- CS546/646 -- Session 17
Thu Jun 8 13:03:02 PDT 2006




      Date        Meet'g  Study(2 pts)                          Bring(5 pts)  Topic(5 pts)          Notes
      Wed May 24  16      Chapter 10 sections 10.4, 10.5        Ex            Graph Problems & TSP
      Mon May 29  -       -                                     -             -                     HOLIDAY
      Wed May 31  17      Chapter 11 sections 11.1, 11.2, 11.3  Ex            Co-NP & PSPACE
      Mon Jun 5   18      Chapter 11 section 11.4               Ex            Randomization RP ZPP

      Co-NP and PSPACE

      1. PS::=Languages/Problems that can be solved in polynomial space.
      2. PSPACE::=what the book calls PS.
      3. Co_NP::={L:Language. complement(L) is in NP}.

        11 Additional Classes of Problems 469

          11.1 Complements of Languages in NP 470

            It is easier to guess positive examples and check them (problems in NP) than it is to reject ALL possible examples (problems in co-NP).

            Exercise: draw Figure 11.1 from memory and then review your answer with the book.

            Ch 11.1 pp 470 -- Complements of Languages in NP
            What does it mean for a language to be closed under complementation?

            Not much! We are actually talking about classes of languages, not just a single language. Each language has its complement:

          1. complement(L)::= the set of strings that are not in L.

            A class of languages is closed under complement if complementing any language in the class gives another language in the same class. For C:class_of_languages closed_under_complement(C)::= for all L:C(complement(L) in C).

            Ch 11 pp 470 -- Co-NP
            I understand that Co-NP is the complement of NP, but I was confused when the book said that if P = NP then Co-NP and NP are the same. Can you explain that?

            Common mistake: Co-NP is not the complement of NP! It is the class of languages whose complements are in NP.

            Now the class P of languages is closed under complement, so we can write:

          2. P = co-P.

            So if P=NP then co-NP=co-P=P=NP!
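
            The closure of P under complement is easy to see in code: run the polynomial-time decider and flip its answer -- the run time is unchanged. A minimal sketch, using a made-up language (even parity) as a stand-in for any language in P:

```python
# A minimal sketch (not from the book) of why P is closed under complement:
# given any polynomial-time decider for a language L, negating its answer
# decides complement(L) in the same time bound.  The language used here
# (binary strings with an even number of 1s) is just an illustrative stand-in.

def decides_L(w: str) -> bool:
    """Hypothetical polynomial-time decider for L = {w : w has evenly many 1s}."""
    return w.count("1") % 2 == 0

def decides_complement_L(w: str) -> bool:
    """Decider for complement(L): same machine, answer flipped."""
    return not decides_L(w)

print(decides_L("1010"))             # True: two 1s
print(decides_complement_L("1010"))  # False
```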

            Ch 11.1 pp 470 -- co-NP
            So if P problems have polynomial run-times and NP-Complete problems have exponential run-times and above, what kind of run-times do non-P co-NP (co-NP-Complete?) problems have?

            11.1.1 The Class of Languages Co-NP 470
            Example 11.1 SAT, USAT, and TAUT
            11.1.2 NP-Complete Problems and Co-NP 471
            Theorem 11.2 NP=co_NP iff for some NP-complete L, complement(L) is in NP
            The labels (Only-if) and (If) are switched in the proof (I think).

            Structure of Proof

          3. (above)|- (if): If NP=co_NP then for some NP-complete L, complement(L) is in NP.
            1. NP=co_NP.
            2. (above)|-for some L, L is NP_complete,
            3. (ei, def)|-L is in NP,
            4. (hyp)|-L is in co-NP.
            5. (def)|-complement(L) is in NP.

            (Close Let )

          4. (above)|- (only_if): If for some NP-complete L, complement(L) is in NP then NP=co-NP.
            1. L is NP-complete and complement(L) is in NP.
            2. ...
            3. NP = co-NP iff NP in co-NP and co-NP in NP.

              You prove sets equal (A=B) in two steps (A in B) and (B in A).

              Prove NP in co-NP.

              1. L:NP,
              2. ...
              3. L in co-NP

              (Close Let )

              Prove co-NP in NP.

              1. L:co-NP,
              2. ...
              3. L in NP

              (Close Let )

            (Close Let )
            Diagram showing P, NP, co-NP, .....
            [ Complexity2.gif ]

            Note: Exercise 11.1.2 below gives us an example of a language that is in both NP and co-NP but not P -- given that a function like f exists. Locate the place in the above figure where you should put this problem.

            11.1.3 Exercises for Section 11.1 472
              ! Exercise 11.1.1 NP? complement! NP-complete?...
              I suggest you first try the *a exercise and check your answer against the book's answer.

              An exercise is best done like this:

              1. Write down, very precisely the complement of the problem.
              2. Do one of the following
                1. Find an NP solution to the given problem.

                (End of Case)
                1. Find an NP solution to the complement of the given problem.

                (Close Case )
              3. Do one of the following:
                1. Prove the given problem NP-complete (reduction).
                2. Conclude that it is likely that the complement is not in NP.

                (End of Case)
                1. Prove the complement to be NP-complete.
                2. Conclude the given problem is unlikely to be in NP.

                (End of Case)
                1. State your best guess as to which if any is NP-complete without proof and lose a point or two.

                (Close Case )

              It is wise to pick an exercise where you have a feeling about what is reduced to what.
              1. TRUE-SAT
              2. FALSE-SAT
              3. DOUBLE-SAT
              4. NEAR-TAUT
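
              As a warm-up for classifying problems like these, here is a brute-force checker for DOUBLE-SAT (does a boolean formula have at least two satisfying assignments?). It takes exponential time, so it settles nothing about the complexity classes -- it just makes the problem concrete. The formula-as-function representation is my own choice, not the book's:

```python
from itertools import product

def double_sat(formula, num_vars):
    """Brute-force DOUBLE-SAT check: does `formula` (a function from a tuple
    of booleans to a boolean) have at least two satisfying assignments?
    Tries all 2^num_vars assignments, so this is exponential time."""
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        if formula(assignment):
            count += 1
            if count >= 2:
                return True
    return False

# (x or y) has three satisfying assignments, so it is in DOUBLE-SAT;
# (x and not x) has none.
print(double_sat(lambda v: v[0] or v[1], 2))       # True
print(double_sat(lambda v: v[0] and not v[0], 1))  # False
```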

              ! Exercise 11.1.2 trapdoor functions
              A trapdoor function can be evaluated quickly, but inverting it is a slow process. These are clearly useful in constructing encryption schemes... if they exist. Here they provide a counterexample to the idea that a problem in both NP and co-NP has to be in P.
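
              The asymmetry -- fast to evaluate, slow to invert -- can be illustrated with modular exponentiation, the standard candidate (the exercise's function f is abstract, so this is only an analogy). The modulus and base below are tiny, made-up values so the brute-force search finishes instantly; real schemes use numbers hundreds of digits long:

```python
# Illustration only: computing g^x mod p is fast, but recovering x (the
# discrete logarithm) is, as far as anyone knows, slow -- here it is done
# by sheer brute force.

P, G = 101, 2   # 2 is a primitive root mod the prime 101

def evaluate(x: int) -> int:
    """Fast direction: square-and-multiply, via Python's built-in pow."""
    return pow(G, x, P)

def invert(y: int) -> int:
    """Slow direction: brute-force discrete logarithm, up to P-1 trials."""
    for x in range(P):
        if pow(G, x, P) == y:
            return x
    raise ValueError("no discrete log found")

y = evaluate(77)
print(invert(y) == 77)   # True, after trying 78 exponents
```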

          11.2 Problems Solvable in Polynomial Space 473

            Ch 11.2 pp 473-478 -- Importance of Polynomial Space
            Is it better to have an algorithm that takes n-time and 2^n-space or n-space and 2^n-time to complete?

            It depends on the particular project/application. Think of a project or application where you want it fast but can pay for more storage. Then think up a project where memory is at a premium but there is time to spare.

            11.2.1 Polynomial-Space Turing Machines 473
            Errata in footnote to page 473
            The last occurrence of "time" on the page should be "space".
            Ch 11.2 pp 476 -- PS in polynomial space
            What is the difference between Deterministic and Nondeterministic polynomial space? This was unclear to me.

            Ch 11.2.3 pp 476-478 -- Polynomial Space
            This section discusses deterministic and nondeterministic polynomial space. What is the difference between these polynomial space machines and the regular deterministic and nondeterministic Turing machines that we learned about in Chapter 9?

            The only thing special about a space-bounded TM, compared with a normal one, is that we can calculate how much tape it needs for a given size of input. It can be either deterministic or non-deterministic -- just like an unbounded TM.

            11.2.2 Relationship of PS and NPS to Previously Defined Classes 474
            Ch 11.2 pp 474 -- Theorem 11.3
            Could you please go through Theorem 11.3.
            Ch 11.2.2 pp 474-475 -- Theorem 11.3
            Can you explain the calculation that produces c^(1 +p(n)) in theorem 11.3.

            Theorem 11.3 PSPACE TM is EXP TIME TM


            1. (given)|-M is a polynomial-space-bounded TM with bound p(n).
            2. (goal): For some c, for all accepted words w of length n, the number of steps is bounded by c^(1+p(n)).

              The proof is in the style of a pumping lemma: if it takes too long to accept a word of length n then some ID must repeat, and so there must be a shorter accepting sequence.

              Key thought: if word w has n symbols then the tape available is p(n) long.

              So how many different tapes are there with p(n) symbols, each with t = |\Gamma| possibilities?

              Answer: t^(p(n)).

              Where can the head be? Anywhere among the p(n) tape cells.

              And assume: s= |Q| -- the number of states.

              Then there are only s*p(n)*t^(p(n)) possible IDs.

              Trick: set c=s+t and look at c^(1+p(n)).

              Prove it is > s*p(n)* t^(p(n)).

              Hence within c^(1+p(n)) steps an ID must repeat. So the machine either accepts earlier or not at all in p(n) space.

            (Close Let )
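
            The counting trick is easy to check numerically. The sketch below compares the ID count s*p(n)*t^(p(n)) with the bound (s+t)^(1+p(n)) for a few illustrative (invented) values of s, t, and p(n) -- a spot check, not a proof:

```python
# A numeric spot check (not a proof) of the counting in Theorem 11.3:
# with s states, t tape symbols and p(n) tape cells there are at most
# s * p(n) * t^p(n) distinct IDs, and choosing c = s + t makes
# c^(1+p(n)) larger than that count.

def id_count(s, t, p):
    """Number of distinct IDs: state * head position * tape contents."""
    return s * p * t ** p

def bound(s, t, p):
    """The theorem's bound c^(1+p(n)) with c = s + t."""
    return (s + t) ** (1 + p)

for s, t, p in [(2, 2, 1), (5, 3, 10), (20, 8, 50)]:
    print(id_count(s, t, p), "<", bound(s, t, p))
```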
            Ch 11 pp -- Factorial Time P-Space Possible?
            The c^(1+p(n)) term suggests that all things in PS/NPS can be completed in exponential time; does that mean that any algorithms which require factorial time are not in PS/NPS? Or is there a way to express factorials in terms of exponentials?


            Theorem 11.4 PSPACE problem is EXP TIME
            11.2.3 Deterministic and Nondeterministic Polynomial Space 476
            Boolean reach(ID I, ID J, int m)
            Theorem 11.5 Savitch's Theorem: PSPACE = NPSPACE
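
            The reach(I, J, m) routine above -- "can ID J be reached from ID I in at most 2^m moves?" -- can be sketched as a recursive function. The version below runs on a toy ID space (integers 0..7, one move: i -> i+1) rather than real TM IDs, and the one-move relation is passed in as an assumed parameter. The key point of Savitch's theorem is visible in the code: the recursion depth is only m, so only O(m) midpoints are stored at once, even though exponentially many paths are tried:

```python
def all_ids():
    """Toy ID space; the real proof enumerates all s*p(n)*t^p(n) IDs."""
    return range(8)

def reach(I, J, m, one_move):
    """Savitch-style test: can J be reached from I in at most 2^m moves?"""
    if m == 0:
        return I == J or one_move(I, J)
    # Split the path at a midpoint K: I =>* K and K =>* J, each within
    # 2^(m-1) moves.  Only one K is held per level of recursion.
    for K in all_ids():
        if reach(I, K, m - 1, one_move) and reach(K, J, m - 1, one_move):
            return True
    return False

step = lambda a, b: b == a + 1   # toy one-move relation
print(reach(0, 5, 3, step))   # True: 5 moves <= 2^3
print(reach(0, 5, 2, step))   # False: 5 moves > 2^2
```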

          11.3 A Problem That Is Complete for PS 478

            Ch 11.3 PS-complete like NP-complete
            No one has ever been able to solve an NP-complete problem. Is PS-complete the same?

            The problem with NP-complete problems is that they are solvable but they take a lot more time than you might at first expect. Further, if we can find a fast solution to any of them then we have a fast solution to them all.

            PS-complete problems are similar. If we could solve a PS-complete problem in polynomial time then all PS problems could be solved in polynomial time.

            11.3.1 PS-Completeness 478
            Definition of PS-complete
            Theorem 11.6 Collapsing hierarchies
            11.3.2 Quantified Boolean Formulas 479
          1. QBF::=quantified boolean formulas.

            QBF Syntax
          2. QBF::= "0" | "1" | "not" QBF | QBF operator QBF | "("quantifier variable")" "(" QBF ")".
          3. operator::= "^" | "\/".
          4. quantifier::= \all | \some.

            Example 11.7 of a QBF
            The meaning behind the expressions (for all x) (E) and (there exists x) (E) confuses me. They seem so very similar. Can you explain where the differences lie?

            QBF Semantics
            Define an evaluation function mapping each QBF into boolean values:
          5. eval::QBF->{0,1} where following,
            1. eval(0)::=0.
            2. eval(1)::=1.
            3. For QBF E, eval(not E)::= not eval(E).
            4. For QBF E1, QBF E2, eval(E1 ^ E2)::= eval(E1) and eval(E2).
            5. For QBF E1, QBF E2, eval(E1 \/ E2)::= eval(E1) or eval(E2).
            6. For QBF E, variable x, eval( (\all x)(E) ) ::= eval(E[x:=0]) and eval(E[x:=1]). Note1
            7. For QBF E, variable x, eval( (\some x)(E) ) ::= eval(E[x:=0]) or eval(E[x:=1]).

            (End of Net)
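
            The eval rules above translate directly into a recursive evaluator. In this sketch QBFs are nested tuples -- an encoding of my own, not the book's: 0, 1, a variable name, ("not", E), ("and", E1, E2), ("or", E1, E2), ("all", x, E), or ("some", x, E). Instead of rewriting E[x:=v] textually, an environment carries the value of each quantified variable:

```python
# A direct transcription of the eval rules above, on a tuple encoding of QBF.

def qbf_eval(e, env=None):
    env = env or {}
    if e == 0 or e == 1:
        return e                               # eval(0)=0, eval(1)=1
    if isinstance(e, str):
        return env[e]                          # a quantified variable
    op = e[0]
    if op == "not":
        return 1 - qbf_eval(e[1], env)
    if op == "and":
        return qbf_eval(e[1], env) & qbf_eval(e[2], env)
    if op == "or":
        return qbf_eval(e[1], env) | qbf_eval(e[2], env)
    if op in ("all", "some"):
        x, body = e[1], e[2]                   # try x:=0 and x:=1
        v0 = qbf_eval(body, {**env, x: 0})
        v1 = qbf_eval(body, {**env, x: 1})
        return v0 & v1 if op == "all" else v0 | v1
    raise ValueError("bad QBF: %r" % (e,))

# (\all x)(x \/ not x) is true; (\all x)(\some y)(x ^ y) is false.
print(qbf_eval(("all", "x", ("or", "x", ("not", "x")))))        # 1
print(qbf_eval(("all", "x", ("some", "y", ("and", "x", "y"))))) # 0
```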

            11.3.3 Evaluating Quantified Boolean Formulas 480

            Ch 11 pp 479 -- Quantified Boolean formulas
            Example 11.8

            Evaluate QBF by induction/recursion.

            11.3.4 PS-Completeness of the QBF Problem 482
            Theorem 11.10 -- QBF is in PS
            Show how to handle the 6 different syntactic forms of QBF. Completed in Exercise 11.3.5 below.
            Theorem 11.11 -- QBF is PS-complete
            Construct a polynomial time computable map from any given PS problem into QBF.

            Express a PS TM using booleans! The idea shouldn't be too surprising. Proving that it can be done took some ingenuity.

            Structure of Proof

            1. Encode IDs as boolean variables.
            2. Abbreviation for quantifying over all variables in an ID.
            3. Formula = (\some start)(\some finish)(Starts ^ Next ^ Finishes).
            4. The QBF for Next takes a clever gimmick for dividing and conquering without doubling the size of the formula. YAGNI.

            Diagram showing PSPACE, P, NP, co-NP, .....
            [ Complexity3.gif ]
            11.3.5 Exercises for Section 11.3 487
              Exercise 11.3.1 Complete the proof of theorem 11.10
              *!! Exercise 11.3.2 Universal Regular expressions
              This is in GJ75.
              !! Exercise 11.3.3 The Shannon Switching Game
              GJ75 remark that many games have solutions that are expressed as QBF formulas:
            1. For some winning move, for all replies from my opponent, there is a winning move, ....

      . . . . . . . . . ( end of section 17) <<Contents | End>>

      Next -- Randomization

      [ 18.html ]


      (Note1): The Notation
    1. E[x:=v] stands for
    2. E with all occurrences of x replaced by value v.

      (Ex): Do as many of the relevant exercises as you have time for. You may work in a team of up to 4 students and hand in one joint solution. Bring to class one written solution to an exercise. This must not be a solution to an exercise marked with an asterisk(*) to earn full credit. One of the authors will be invited to present the solution to the class -- failure will lose points. Students taking CS646 must hand in the solution to an exercise marked with an exclamation mark(!) to earn full credit.
      (Study): Read & think about the assigned items. Submit one question by selecting the submit button at the top of the web page. To earn complete credit you need to do this at least 90 minutes before the start of class. Hints: read each section twice -- first the easy bits and then the tough bits. Use a scratch pad and pencil as you read.
      (Topic): To earn all the possible points you must: turn up on time, not leave early, present work when called upon, and take part in discussions.

    3. YAGNI::="You Ain't Gonna Need It", -- at least, not in the final!

      Standard Definitions

    4. FSA::="Finite State Acceptor/Automata", a theoretical machine with a finite set of states.
    5. ND::="Non-deterministic", a machine that can try out many possibilities and always guesses the right choice. Note: For a given problem a ND solution is simpler and easier to understand than a deterministic one. We study ND machines to discover how to use them as high-level designs for more complicated deterministic machines. Plus it's fun.
    6. PDA::="Push down Automata", a machine with a finite control and a single stack for remembering intermediate data. Kind of like half a TM.
    7. RJB::=The author of this document, [ ../index.html ]
    8. |-RJB="Richard J Botting, Comp Sci Dept, CSUSB".
    9. Schedule::= See
    10. Search::= See
    11. Syllabus::= See, and also see [ syllabus.html ] (my generic syllabus).
    12. TBA::="To Be Announced".
    13. TM::="Turing Machine".
    14. NTM::="Nondeterministic Turing Machine".
    15. nondeterministic::="A machine that can make choices as it runs and accepts an input iff any sequence of choices leads to acceptance."
    16. DTM::="Deterministic Turing Machine".
    17. runtime::=run_time,
    18. run_time::="The number of steps a TM takes before it halts when start on a given input".
    19. complexity::="The largest run_time for any word with a given length", usually expressed as an asymptotic (Big-O) formula.
    20. P::@problem=class of problems that a DTM can compute in polynomial_time.
    21. NP::@problem=class of problems that a NTM can compute in polynomial_time.
    22. polynomial_time::=An algorithm/TM is said to halt in polynomial time if the number of steps given n items of data is O(n^c) for some constant c.

      From Logic

    23. LOGIC::= See

    24. (LOGIC)|- (ei): Existential instantiation -- if P is true of something then we can substitute a new variable for it.
    25. (LOGIC)|- (eg): existential generalization -- if P is true of this thing then it is true of something!
    26. (LOGIC)|- (given): a proposition that is assumed as part of a Let...End Let argument. Typically if A and B and C then D starts with assuming A,B, and C are given. They are called the hypotheses.
    27. (LOGIC)|- (goal): a proposition that we wish to prove. Typically the part after the then in an if...then... result. This is called the conclusion.
    28. (LOGIC)|- (def): By definition of a term defined above.
    29. (LOGIC)|- (algebra): Either by using well known algebraic results, or by use a series of algebraic steps.
    30. (LOGIC)|- (RAA): Reduce to absurdity. The end of a Let/Po/Case/Net that establishes the negation of the last set of assumptions/hypotheses.