Thu Jun 8 13:02:59 PDT 2006

# 14

## Schedule

| Date | Meet'g | Study(2 pts) | Bring(5 pts) | Topic(5 pts) | Notes |
|------|--------|--------------|--------------|--------------|-------|
| Mon May 15 | 13 | Chapter 9 sections 9.4, 9.5.1, 9.6 | Ex | Post Correspondence + Programs | |
| Wed May 17 | 14 | Chapter 10 sections 10.1 | Ex | Intractable Problems | P & NP |
| Mon May 22 | 15 | Chapter 10 sections 10.2, 10.3 | Ex | NP-Complete | |

## Handout -- First 3 pages of Garey and Johnson's book -- GJ75

See [ GJ75 ] below

## REVIEW FIRST -- Graphs and Big-O

For example: a tree is not a cycle!

## Reminders

1. Polynomial time means that there is a constant c such that the run time is O(n^c).
2. Exponential time means that there is a constant c > 1 such that the run time is O(c^n).
3. An NTM is faster than a DTM... but solves the same class of problems.
4. An NTM computes guesses and then checks them all at the same time. It accepts if any guess passes the check.
5. If a multitape TM takes s steps then there exists a single-tape TM that takes O(s^2) steps.
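To see why the polynomial/exponential distinction matters in practice, here is a quick numeric comparison. The step-count functions are my own stand-ins for hypothetical O(n^3) and O(2^n) algorithms, not anything from the book:

```python
# Hypothetical step counts: polynomial n^c (c = 3) versus
# exponential c^n (c = 2). Watch the exponential column blow up.

def poly_steps(n):
    return n ** 3        # polynomial: n^c with c = 3

def exp_steps(n):
    return 2 ** n        # exponential: c^n with c = 2

for n in (5, 10, 20, 40):
    print(n, poly_steps(n), exp_steps(n))
```

At n = 40 the polynomial algorithm needs 64,000 steps while the exponential one needs about a trillion.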

## Chapter 10 Intractable Problems 413

Be careful to distinguish two different questions:
1. How long does a given TM take to solve the problem...
2. How long must any TM take to solve the problem...

The first type of question is easier to answer and is the subject of "Algorithm Analysis". The second question is much more interesting. Indeed it is unsolved for a very large number of problems.

Our goal is to separate the decidable problems into those that have a fast, simple algorithm and those that are slow and messy.

Be very careful to take note and memorize the definitions in this part of the chapter. They are used throughout the rest of the book.

### TYPO page 413

Replace "which implies that" by "which strongly suggests that" on the 7th from bottom line.

### 10.1 The Classes P and NP 414

#### 10.1.2 An Example: Kruskal's Algorithm 414

Page 418 remarks that the MWST problem can be encoded as a string of n symbols if it has less than O(n/log(n)) edges. I can derive O(n/log m). Is this a typo? Whatever: we only need O(n).
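Since section 10.1.2 uses Kruskal's algorithm as its example of a polynomial-time problem, here is a minimal sketch of it in Python. The union-find representation and all names are my own; it returns just the MWST's total weight:

```python
# Kruskal's algorithm sketch: sort edges by weight, then greedily
# keep each edge that joins two different components, tracked with
# a union-find structure.

def kruskal(n, edges):
    """n = number of vertices 0..n-1; edges = [(weight, u, v), ...].
    Returns the total weight of a minimum-weight spanning tree."""
    parent = list(range(n))

    def find(x):
        # Follow parent links to the component's root, halving paths.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):     # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                  # joins two components: keep it
            parent[ru] = rv
            total += w
    return total
```

For example, on a 4-vertex graph with edges of weights 1..5, the three cheapest non-cycle-forming edges are kept. With sorting, this runs in polynomial time, which is the point of the section.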

#### Ch 10.1 pp 415 -- n^(log n)

Are there any problems which have n^(log n) complexity, or is the existence of something that lies between polynomials and exponentials strictly of theoretical importance?

It is, in theory, trivial to create an artificial problem with just about any O(???(n)) time. I don't recall any famous ones with n^(log n) complexity.
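It is easy to check numerically that n^(log n) eventually sits strictly between any fixed polynomial and 2^n. The code and sample values below are my own illustration:

```python
import math

# Compare a polynomial n^3, the quasi-polynomial n^(log2 n), and
# the exponential 2^n at a few points.

def growth(n):
    return n ** 3, n ** math.log2(n), 2.0 ** n

for n in (16, 64, 256):
    print(n, *growth(n))
```

Already at n = 64 the ordering polynomial < quasi-polynomial < exponential holds and the gaps keep widening.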

#### Ch 10.1.3 pp 419 -- P=NP

Has anyone come close to solving P=NP? If anyone does prove it true, how much would it affect the world of computing?

The theory of NP-completeness is a record of how close we have got.

If we had a constructive proof of P=NP then we would have a way to efficiently program a couple of thousand different problems. This would mean that we wouldn't have to invent new heuristics, approximations, and special cases to solve these problems.

A non-constructive proof would make the prover rich and famous and lead to a wild search for a constructive solution.

In either case the immediate practical impact would be on the security of systems. All encryption techniques rely on the inefficiency of algorithms that might crack the code. Nearly all rely on NP problems not being solvable efficiently (in P). So if P=NP there would be a wild search for new encryption techniques, and a lot of Internet code and protocols would then have to be rewritten.

By the way -- most people who've studied the subject for a long time would bet that P != NP. If we could prove this then we would have to focus on heuristics, approximations, and special cases. Again somebody becomes rich and famous, and most would breathe a sigh of relief.

#### Ch 10.1 pp 419 -- The Classes P and NP

What is another example of an NP problem besides the traveling salesman problem?

#### An Example NP problem involving numbers

Given a collection of n integers (coded in decimal notation) and another number, find out if there is a subset of the collection that adds up to that extra given number.
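A brute-force sketch of this question in Python (the function name is mine): try every nonempty subset and test its total. The search is exponential in n, even though each individual check is fast.

```python
from itertools import combinations

def has_subset_with_sum(numbers, target):
    """Try every nonempty subset (about 2^n of them) and
    test whether its total equals the target."""
    for r in range(1, len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return True
    return False

print(has_subset_with_sum([3, 34, 4, 12, 5, 2], 9))   # 4 + 5 = 9
```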

I'll read out some more in class... then we have a dozen or so to come in the book.

#### Ch 10.1.4 pp 419 -- The Traveling Salesman Problem

Can you do an example of 10.1.4, the Traveling Salesman problem?
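Here is a small brute-force sketch of the decision version of TSP in Python (the distance matrix, limit, and names are invented for illustration): is there a tour that visits every city once and has total weight at most the limit?

```python
from itertools import permutations

def tsp_tour_exists(dist, limit):
    """dist[i][j] = distance between cities i and j.
    Try every tour starting and ending at city 0."""
    n = len(dist)
    for perm in permutations(range(1, n)):    # (n-1)! orderings
        tour = (0,) + perm + (0,)
        weight = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if weight <= limit:
            return True
    return False
```

With four cities whose pairwise distances are, say, [[0,10,15,20],[10,0,35,25],[15,35,0,30],[20,25,30,0]], the best tour weighs 80, so the answer flips between limit 79 and limit 80. The factorial search is exactly what makes the problem look intractable.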

#### Ch 10.1.5 pp 421 -- Polynomial Time Reductions

Can you clarify why the existence of the "Construct" on page 421 is not sufficient to support the assertion that 'if P2 is not in P, then P1 is not in P'?

It shows that a machine exists but not that it is efficient enough. The book discusses this -- a bit muddled in my opinion. The accepted measure of efficiency is halting after a number of steps O(n^c) for a constant c -- so-called polynomial time. The preprocessing therefore has to be efficient by this measure for the argument to work.

#### Ch 10.1.5 pp 421 -- Polynomial-Time Reductions

Can you go over fig. 10.2 and describe the reduction from P1 to P2?

Figure 10.2 summarizes all the following thoughts except that I've used L1 and L2 rather than P1 and P2:

For languages L1 and L2, a function f: L1 >-> L2 takes strings in L1 and converts them into strings of symbols in L2.

Notice this is a many-to-one mapping: two strings may be reduced to the same string. Also: some strings in L2 may not be images of any strings in L1 under f.

A function f: L1 >-> L2 is computable if there is a TM that, started with a string w:L1 just to the right of the head, halts with f(w) left on the tape, just to the right of the head.

A function f: L1 >-> L2 is polynomial time computable if there is a TM that, started with a string w:L1 just to the right of the head, halts with f(w) left on the tape, just to the right of the head, AND it has not taken more than O( |w|^k ) steps for some k.

#### Result hidden on Page 423

1. (above)|- (ps): If function f: L1 >-> L2 is polynomial time computable then |f(w)| is O(|w|^k) for some k. In other words |f(w)| is polynomial in |w|.

If we have two languages L1 and L2 and function f: L1 >-> L2 is polynomial time computable then we say that

2. L1 is polynomial time reducible to L2

Short hand:
1. L1 <=[p] L2.

You can prove these two results:

1. (above)|- (reflexive): L1 <=[p] L1.
2. (above)|- (transitive): if L1 <=[p] L2 <=[p] L3 then L1 <=[p] L3.

#### 10.1.6 NP-Complete Problems 422

These are the hardest of the problems that an NTM can still solve in feasible time.

We don't know whether there is a polynomial time DTM that can solve them.

Here is a numerical example of an NP-complete problem:

1. Given: A finite set A and a map s into the decimal integers, and a target total B.
2. Goal: For some subset of A, A' ( \Sigma[a:A'](s(a)) = B ).

[GJ75]
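The NP character of this problem shows in how easy the checking step is: given a proposed subset A' as a certificate, verifying it takes only a single pass over A'. A sketch with my own names:

```python
def check_certificate(s, subset, target):
    """s maps each element of A to an integer;
    subset is the claimed A'. One pass, polynomial time."""
    return sum(s[a] for a in subset) == target

s = {"x": 3, "y": 7, "z": 5}
print(check_certificate(s, ["x", "z"], 8))   # 3 + 5 == 8
```

Finding a valid A' may take exponential search, but any guessed A' can be checked quickly -- which is exactly what an NTM exploits.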

#### Ch 10.1 pp 423 -- NP-Hard

Could you please show in a diagram where NP-Hard relates to the other classes (P, NP, NP-Complete)?

#### Ch 10.1 pp 424 -- Exercise 10.1.2

Why is there no mention of the relation between a Hamilton circuit and a MWST?

#### Definitions of NP-completeness

2. NP::={ L : Language || for some NTM M(M always halts in polynomial time and accepts w iff w is in L)}.
3. NP_complete::={ L : NP || for all L': NP ( L' <=[p] L ) }.
4. NP_hard::={ L : \Sigma* || for all L': NP ( L' <=[p] L ) }.
5. (above)|-NP-complete = NP-hard & NP.

#### Ch 9, ch10 pp 422-423 -- Theorem 10.4

Could you go over the proof to Theorem 10.4 -- It probably should not have confused me, but it did.

#### Theorem 10.4 If P1 is NP_complete, P2 is in NP, and P1 <=[p] P2, then P2 is NP-complete.

Structure of Proof
Let

1. (given)|- (1): P1 is NP_Complete.
2. (given)|- (2): P2 is NP.
3. (given)|- (3): P1 <=[p] P2.
4. (above)|- (goal1): P2 is NP_complete,

#### Proof of goal1

The definition of NP_complete means goal1 is equivalent to
5. P2 in NP and for all L:NP(L<=[p]P2).

So we first prove

6. (1, 2, 3)|- (goal2): for all L:NP(L<=[p] P2).

Then

7. (goal2, 2)|- (4): P2 in NP and for all L:NP(L<=[p]P2),
8. (NP_complete)|-P2 is NP_complete

#### Proof of goal2

Let

1. (let)|- (5): L is NP.
2. (1, 5)|-L<=[p]P1,
3. (above)|-for some f: L >-> P1, f is polynomial time computable,
4. (ei)|-f: L >-> P1 and f is polynomial time computable,
5. (def)|-for some polynomial p(n), f given input w halts in time p(|w|),
6. (ps)|- (6): for all w, |f(w)| is O(p(|w|)).
7. (3)|-for some g: P1 >-> P2, g is polynomial time computable,
8. (above)|-for some polynomial q and all w, g on input w halts in O(q(|w|)) steps,
9. (ei)|-q is a polynomial and for all inputs w, g halts in O(q(|w|)) steps,
10. (above)|-on input w, computing f(w) takes p(|w|) steps and produces output f(w) where |f(w)| is O(p(|w|)),
11. (6)|- (7): the function g(f(w)) is computable in time q(p(|w|))+p(|w|),
12. (algebra)|-q(p(|w|))+p(|w|) is a polynomial,
13. (h=g o f, 7)|-for some h: L >-> P2, h is polynomial time computable,
14. (above)|-L<=[p]P2

(Close Let )

9. (above)|-for all L:NP(L<=[p] P2).

(Close Let )

#### 10.1.7 Exercises for Section 10.1 423

##### TYPOs !! Exercise 10.1.4
A comma in the second from last line of page 424 should be one line earlier.

` 		Where can in come from`
should be
` 		Where can it come from`
##### SKIP Exercise 10.1.5


[ 15.html ]

# GJ75

1. Michael R Garey & David S Johnson
2. Computers and Intractability: A Guide to the Theory of NP-Completeness
3. W. H. Freeman, 1979
4. =THEORY COMPLEXITY INTRACTABILITY PERFORMANCE ALGORITHMS

(Ex): Do as many of the relevant exercises as you have time for. You may work in a team of up to 4 students and hand in one joint solution. Bring to class one written solution to an exercise. This must not be a solution to an exercise marked with an asterisk(*) to earn full credit. One of the authors will be invited to present the solution to the class -- failure will lose points. Students taking CS646 must hand in the solution to an exercise marked with an exclamation mark(!) to earn full credit.
(Study): Read & think about the assigned items. Submit one question by selecting the submit button at the top of the web page. To earn complete credit you need to do this at least 90 minutes before the start of class. Hints. Read each section twice -- once the easy bits and then the tough bits. Use a scratch pad and pencil as you read.
(Topic): To earn all the possible points you must: turn up on time, not leave early, present work when called upon, and take part in discussions.

# Standard Definitions

1. FSA::="Finite State Acceptor/Automata", a theoretical machine with a finite set of states.
2. ND::="Non-deterministic", a machine that can try out many possibilities and always guesses the right choice. Note: For a given problem a ND solution is simpler and easier to understand than a deterministic one. We study ND machines to discover how to use them as high-level designs for more complicated deterministic machines. Plus it's fun.
3. PDA::="Push down Automata", a machine with a finite control and a single stack for remembering intermediate data. Kind of like half a TM.
4. RJB::=The author of this document, [ ../index.html ]
5. |-RJB="Richard J Botting, Comp Sci Dept, CSUSB".
6. Schedule::= See http://www.csci.csusb.edu/dick/cs546/schedule.html.
7. Search::= See http://www.csci.csusb.edu/dick/cs546/lookup.php.
8. Syllabus::= See http://www.csci.csusb.edu/dick/cs546/syllabus.html, and also see [ syllabus.html ] (my generic syllabus).
9. TBA::="To Be Announced".
10. TM::="Turing Machine".
11. NTM::="Nondeterministic Turing Machine".
12. nondeterministic::="A machine that can make choices as it runs and accepts an input iff any sequence of choices leads to acceptance."
13. DTM::="Deterministic Turing Machine".
14. runtime::=run_time.
15. run_time::="The number of steps a TM takes before it halts when started on a given input".
16. complexity::="The largest run_time for any word with a given length", usually expressed as an asymptotic (Big-O) formula.
17. P::@problem=class of problems that a DTM can compute in polynomial_time.
18. NP::@problem=class of problems that a NTM can compute in polynomial_time.
19. polynomial_time::=An algorithm/TM is said to halt in polynomial time if the number of steps given n items of data is O(n^c) for some constant c.

# From Logic

20. LOGIC::= See http://www.csci.csusb.edu/dick/maths/logic_25_Proofs.html

21. (LOGIC)|- (ei): Existential instantiation -- if P is true of something then we can substitute a new variable for it.
22. (LOGIC)|- (eg): existential generalization -- if P is true of this thing then it is true of something!
23. (LOGIC)|- (given): a proposition that is assumed as part of a Let...End Let argument. Typically if A and B and C then D starts with assuming A,B, and C are given. They are called the hypotheses.
24. (LOGIC)|- (goal): a proposition that we wish to prove. Typically the part after the then in an if...then... result. This is called the conclusion.
25. (LOGIC)|- (def): By definition of a term defined above.
26. (LOGIC)|- (algebra): Either by using well known algebraic results, or by use a series of algebraic steps.
27. (LOGIC)|- (RAA): Reduce to absurdity. The end of a Let/Po/Case/Net that establishes the negation of the last set of assumptions/hypotheses.