Cal State San Bernardino
>> [CNS] >> [Comp Sci Dept] >> [R J Botting] >> [CS546/646] >> 20 [Text Version]
Mon Jun 12 16:37:47 PDT 2006
Opening the PDF files on this page may require you to download Adobe Reader or an equivalent viewer (GhostScript).

Contents


    20

      Schedule

      Date       | Meet  | Study (2 pts)                  | Bring (5 pts) | Topic (5 pts)           | Notes
      Mon Apr 3  | 1     | -                              | -             | Survival                |
      Wed Jun 7  | 19    | Chapter 11 sections 11.5, 11.6 | Ex            | Primality               |
      Mon Jun 12 | 20    | Chapters 8, 9, 10, 11          | Ex            | Review                  |
      Fri Jun 16 | Final | Chapters 1, 2, 8, 9, 10, 11    | -             | Comprehensive (200 pts) |
      Tue Jun 20 | -     | -                              | -             | Grades Due in           |

      Ex

      Take the handout from class 19 and see if you can match up what it says with what we have been studying in this course. Extract one point of similarity, or one application of the theory from this class to the paper. Write it up as a single paragraph and bring it to class.

      Challenge

      I just came across a class of puzzles for children. You are given a number of jugs of different sizes and want to use them to measure out a particular amount. One instance: you have jugs that hold 4, 5, and 12 cups of water. Show how to get precisely 3 cups of water by filling and pouring water between the three jugs.

      These are entertaining and sometimes challenging. (1) Formulate them as a problem that could be input to a TM. (2) Are the problems decidable, recursive, NP, P, NP-complete, or what?
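      One way to mechanize such puzzles is as a search over jug states, which also hints at the answer to (2): the state space is finite, so the question is decidable. Here is a sketch in Python, under the assumptions that the jugs start empty and there is an unlimited water supply (the puzzle statement leaves this open):

```python
from collections import deque

def jug_puzzle(capacities, goal):
    """Breadth-first search over jug states (a tuple of amounts).
    Moves: fill a jug from the tap, empty a jug, or pour one jug
    into another.  Returns the list of states from all-empty to a
    state containing `goal`, or None if it is unreachable."""
    start = tuple(0 for _ in capacities)
    parent = {start: None}          # state -> previous state
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if goal in state:
            path = []               # reconstruct the pouring sequence
            while state is not None:
                path.append(state)
                state = parent[state]
            return path[::-1]
        n = len(state)
        successors = []
        for i in range(n):
            # fill jug i to its capacity
            successors.append(state[:i] + (capacities[i],) + state[i+1:])
            # empty jug i
            successors.append(state[:i] + (0,) + state[i+1:])
            for j in range(n):
                if i != j:
                    # pour jug i into jug j until i empties or j fills
                    amount = min(state[i], capacities[j] - state[j])
                    s = list(state)
                    s[i] -= amount
                    s[j] += amount
                    successors.append(tuple(s))
        for s in successors:
            if s not in parent:
                parent[s] = state
                queue.append(s)
    return None
```

      For the 4, 5, 12 instance, `jug_puzzle((4, 5, 12), 3)` finds a short sequence: fill the 4, pour it into the 5, refill the 4, and top up the 5, leaving 3 cups behind.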

      Description of problems in Psychology course [ lec13.htm ]

      Surprise

        I've just been told about the following paper, which uses the content of this course to understand "agents":
      1. Wooldridge M., Dunne P.
      2. The complexity of agent design problems: Determinism and history dependence
      3. Annals of Mathematics and Artificial Intelligence 45(3-4): 343-371.
      4. Abstract:
          The agent design problem is as follows: given a specification of an environment, together with a specification of a task, is it possible to construct an agent that can be guaranteed to successfully accomplish the task in the environment? In this article, we study the computational complexity of the agent design problem for tasks that are of the form "achieve this state of affairs" or "maintain this state of affairs." We consider three general formulations of these problems (in both non-deterministic and deterministic environments) that differ in the nature of what is viewed as an "acceptable" solution: in the least restrictive formulation, no limit is placed on the number of actions an agent is allowed to perform in attempting to meet the requirements of its specified task. We show that the resulting decision problems are intractable, in the sense that these are non-recursive (but recursively enumerable) for achievement tasks, and non-recursively enumerable for maintenance tasks. In the second formulation, the decision problem addresses the existence of agents that have satisfied their specified task within some given number of actions. Even in this more restrictive setting the resulting decision problems are either PSPACE-complete or NP-complete. Our final formulation requires the environment to be history independent and bounded. In these cases polynomial time algorithms exist: for deterministic environments the decision problems are NL-complete; in non-deterministic environments, P-complete.

        When it comes to the example of MDG at the bottom, the partitioning of G into n clusters: could you please explain it?

        We'll discuss it in class.

      Review

        Table of Problem/Language classes

        Type of Problem | Machine   | Accepts    | Rejects      | RunTime on n symbols
        Regular         | FSA       | L          | L_bar        | n
        CFG             | NDPDA     | L          | L_bar        | n
        P               | DTM       | L          | L_bar        | O(n^c)
        NP              | NTM       | L          | L_bar        | O(n^c)
        NP-complete     | NTM       | L          | L_bar        | O(n^c) but worse than rest
        PS = NPS        | TM        | L          | L_bar        | O(c^n) but space is O(n^s)
        RP              | Random TM | L probably | L_bar always | O(n^c)
        ZPP             | Random TM | L          | L_bar        | O(n^c) on average
        Recursive       | TM        | L          | L_bar        | Never infinite
        Decidable       | TM        | L          | L_bar        | Never infinite
        Undecidable     | ?         |            |              |
        RE              | TM        | L          | ?            | Perhaps infinite
        not RE          | None      |            |              |

        Cook's paper

        [ http://www.claymath.org/millennium/P_vs_NP/ ] [ Official_Problem_Description.pdf ]

        The Complexity Zoo

        [ Complexity_Zoo ]

        Questions

          Why are there so many complexity classes?

          Some seem very similar to others. Could you please explain again why there are so many complexity classes?

          Which of the problem classes are the most important to remember?

          I think the probabilistic ones, RP and ZPP, are probably the least valuable, and R, RE, P, NP, and NP-complete the most critical.

          What might we expect for final questions on chapters 1 and 2?

          Theory based, calculation based, both?

          Not much.... Chapter 1 is used throughout the rest of the course and the final... Chapter 2: you need to know the definitions of the various kinds of automata and their languages -- and how they differ from TMs and their languages. You may also have to use the definitions in an informal proof. But the real focus in this class is on TMs.

          In Q.1 of the Final Exam Review Sheet, TM and Languages, what is "Languages" referring to ?

          In complexity theory the main purpose of machines is to recognise languages. So for each type of machine a language is defined -- typically by the machine entering a particular set of states.

          Examples: [click here [socket symbol] if you can fill this hole]

          Could you give a couple simple examples of formally written 'Induction' and 'Reduction' proofs?

          First, a distinction: the names are similar, but they differ in all other respects.

          Induction is used to get a handle on infinite but countable situations: numbers, strings, trees, .... Things that grow one piece at a time. The plan is to establish the truth of a few simple cases and then demonstrate that as we grow the complex objects the property remains true.

          Example of induction: all my Lego blocks are green, so anything I construct from them is a green object.
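          The plan above, worked on a classic textbook claim (a standard example, not from these notes):

```latex
% Claim: 1 + 3 + 5 + ... + (2n-1) = n^2 for all n >= 1.
\textbf{Base case} ($n = 1$): the sum is $1 = 1^2$.
\textbf{Inductive step}: assume $\sum_{k=1}^{n}(2k-1) = n^2$. Then
\[
  \sum_{k=1}^{n+1}(2k-1) \;=\; n^2 + \bigl(2(n+1)-1\bigr) \;=\; n^2 + 2n + 1 \;=\; (n+1)^2 .
\]
\textbf{Conclusion}: the property survives each growth step, so it holds for all $n \ge 1$.
```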

          Reduction is only used in the theory of computation. You already know that some problem P1 is difficult/impossible/hard and want to show that a new problem P2 is at least as bad. The plan of attack is to show that if we can solve P2 then it is easy to solve P1... but P1 is hard/impossible, so P2 must be too.

          Example of reduction -- this comes from the paper I'm reviewing above: reducing the existence problem for an agent to the construction problem for an agent.

          The problem of constructing an agent must be at least as hard as checking whether an agent exists, because a construction proves existence.

          Can you go over an example of a NP-Complete problem?

          TSP? Vertex Cover?

          Minesweeper!

          Can you go over the reduction of NP-Complete problems?

          1. Minesweeper is NP
          2. Any instance of SAT can be expressed as a Minesweeper game!
            Proof
            1. Encode boolean variables as a pair of unknown squares surrounded by 1s.
            2. Encode 'wires' connecting different occurrences of variables.
            3. Encode a NOT gate...
            4. Encode an AND gate.
            5. Therefore we can encode any boolean expression.
            6. Choosing the correct squares in each pair expresses SAT.
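          To make step 1 of the argument concrete, here is what "Minesweeper is NP" (and SAT's NP membership) looks like operationally: checking one candidate answer is cheap, and the only expensive part is trying all 2^n candidates, which an NTM "guesses" away. A minimal brute-force SAT sketch (DIMACS-style clause encoding is my choice here, not from the notes):

```python
from itertools import product

def satisfiable(clauses, num_vars):
    """Brute-force SAT.  Each clause is a list of nonzero ints:
    k means variable k is true, -k means variable k is false.
    Checking ONE assignment against the clauses is polynomial;
    the exponential cost is the loop over all 2^n assignments."""
    for bits in product([False, True], repeat=num_vars):
        def lit_true(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        # a formula is satisfied when every clause has a true literal
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return True
    return False
```

          For instance, (x1 or x2) and (not x1 or x2) and (x1 or not x2) is satisfied by x1 = x2 = true, while (x1) and (not x1) is not satisfiable by any assignment.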

          L[d]

          Could you please go over the language L[d] again? From my understanding it is the set of strings that won't make valid TMs. However, there are several proofs that use a TM for L[d]?

          L[d] is the "Diagonalization Language" defined in section 9.1.3 (pp370-372), which plays a critical part in section 9.1.4 and the proof of 9.2. [ 10.html#9.1.3 The Diagonalization Language 370 ] and [ 10.html#L_d ]

          It is not the set of strings that don't make TMs... but something more subtle. It is the language made up of strings that describe TMs with a special (weird) property: they refuse to accept their own description as an input string.

          Perhaps L_d should be called "the Knockturn Alley" language?

          Here is an analogy: suppose I founded a new library that contains nothing but library catalogs -- it might well include its own catalog on its shelves. So of course we would have to include the library's catalog in its own catalog. In fact we could imagine collecting a list of libraries that have put their own catalog on their shelves. And, perhaps more interestingly, put on an exhibit of catalogs of libraries that don't have their own catalog listed in their catalog.

          The problem starts if we decide we want a library of catalogs that don't list their own catalogs... do we list this library's catalog or not?
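          The catalog paradox can even be phrased as code. Below is a toy model of my own devising (the names and setup are made up, not from the text): each catalog is a function answering "is this catalog listed in you?", and the paradoxical catalog lists exactly the catalogs that do not list themselves. Asked about ordinary catalogs it answers fine; asked about itself it cannot halt with a consistent answer -- just as L_d defeats any would-be TM that accepts it.

```python
# Each catalog answers: "is the named catalog listed in you?"
catalogs = {
    "lists_itself": lambda name: name == "lists_itself",
    "lists_nothing": lambda name: False,
}

def paradox(name):
    """Lists a catalog iff that catalog does not list itself."""
    return not catalogs[name](name)

# Fine on ordinary catalogs, but if we register paradox as a catalog
# and ask it about itself, it must compute the negation of its own
# answer -- the recursion never bottoms out.
```

          Concretely: `paradox("lists_itself")` is False and `paradox("lists_nothing")` is True, but after `catalogs["paradox"] = paradox`, the call `paradox("paradox")` recurses forever (Python raises a RecursionError), mirroring the non-halting behaviour in the diagonalization proof.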

        . . . . . . . . . ( end of section Questions) <<Contents | End>>

      . . . . . . . . . ( end of section Review) <<Contents | End>>

    . . . . . . . . . ( end of section 20) <<Contents | End>>

    Next

    The comprehensive final on FRIDAY June 16. Let me know if this is impossible -- SOON.

    Notes


    (Study): Read & think about the Course. What questions do you need answered? Submit one question by selecting the submit button at the top of the web page. To earn complete credit you need to do this at least 90 minutes before the start of class.
    (Topic): To earn all the possible points you must: turn up on time, not leave early, present work when called upon, and take part in discussions.

    Standard Definitions

  1. FSA::="Finite State Acceptor/Automata", a theoretical machine with a finite set of states.
  2. ND::="Non-deterministic", a machine that can try out many possibilities and always guesses the right choice. Note: For a given problem a ND solution is simpler and easier to understand than a deterministic one. We study ND machines to discover how to use them as high-level designs for more complicated deterministic machines. Plus it's fun.
  3. PDA::="Push down Automata", a machine with a finite control and a single stack for remembering intermediate data. Kind of like half a TM.
  4. RJB::=The author of this document, [ ../index.html ]
  5. |-RJB="Richard J Botting, Comp Sci Dept, CSUSB".
  6. Schedule::= See http://www.csci.csusb.edu/dick/cs546/schedule.html.
  7. Search::= See http://www.csci.csusb.edu/dick/cs546/lookup.php.
  8. Syllabus::= See http://www.csci.csusb.edu/dick/cs546/syllabus.html, and also see [ syllabus.html ] (my generic syllabus).
  9. TBA::="To Be Announced".
  10. TM::="Turing Machine".
  11. NTM::="Nondeterministic Turing Machine".
  12. nondeterministic::="A machine that can make choices as it runs and accepts an input iff any sequence of choices leads to acceptance."
  13. DTM::="Deterministic Turing Machine".
  14. runtime::=run_time,
  15. run_time::="The number of steps a TM takes before it halts when start on a given input".
  16. Recursive::="Languages that can be recognised (instances accepted or rejected) in a finite runtime".
  17. RE::=Recursive_enumerable,
  18. Recursive_enumerable::="Languages that are accepted in a finite time by a TM, but may be rejected by the TM failing to halt."
  19. complexity::="The largest run_time for any word with a given length", usually expressed as an asymptotic (Big-O) formula.
  20. P::@problem=class of problems that a DTM can compute in polynomial_time.
  21. NP::@problem=class of problems that a NTM can compute in polynomial_time.
  22. polynomial_time::=An algorithm/TM is said to halt in polynomial time if the number of steps on n items of data is O(n^c) for some constant c.

    From Logic

  23. LOGIC::= See http://www.csci.csusb.edu/dick/maths/logic_25_Proofs.html


  24. (LOGIC)|- (ei): Existential instantiation -- if P is true of something then we can substitute a new variable for it.
  25. (LOGIC)|- (eg): existential generalization -- if P is true of this thing then it is true of something!
  26. (LOGIC)|- (given): a proposition that is assumed as part of a problem statement.
  27. (LOGIC)|- (let): a proposition that is assumed as part of a Let...End Let argument. Typically if A and B and C then D starts with assuming A,B, and C are given. They are called the hypotheses.
  28. (LOGIC)|- (goal): a proposition that we wish to prove. Typically the part after the then in an if...then... result. This is called the conclusion.
  29. (LOGIC)|- (def): By definition of a term defined above.
  30. (LOGIC)|- (algebra): Either by using well known algebraic results, or by use a series of algebraic steps.
  31. (LOGIC)|- (RAA): Reduce to absurdity. The end of a Let/Po/Case/Net that establishes the negation of the last set of assumptions/hypotheses.

End