The first type of question is easier to answer and the subject of
"Algorithm Analysis". The second question is much more interesting.
Indeed it is unsolved for a very large number of problems.
Our goal is to separate the decidable problems into those that
have a fast, simple algorithm and those that are slow and messy.
Take careful note of, and memorize, the definitions in this
part of the chapter. They are used throughout the rest of the book.
10.1.1 Problems Solvable in Polynomial Time 414
10.1.2 An Example: Kruskal's Algorithm 414
Page 418 remarks that the MWST problem can be encoded as a string
of n symbols if it has fewer than O(n/log n) edges. I can
derive O(n/log m). Is this a typo? In any case: we only need O(n).
Are there any problems which have n^(log n) complexity, or is the existence of something that lies between polynomials and exponentials strictly of theoretical importance?
It is, in theory, trivial to create an artificial problem with just about any
O(???(n)) time. I don't recall any famous ones with n^(log n) complexity.
Ch 10.1.3 pp 419 -- P=NP
Has anyone come close to solving P=NP? If someone does prove it true, how much would it affect the world of computing?
The theory of NP-completeness is a record of how close we have come.
If we had a constructive proof of P=NP then we would have a way to
efficiently solve a couple of thousand different problems. This
would mean that we wouldn't have to invent new heuristics, approximations,
and special cases to solve these problems.
A non-constructive proof would make the prover rich and famous
and lead to a wild search for a constructive solution.
In either case the immediate practical impact would be on the security
of systems. Encryption techniques rely on the inefficiency of the
algorithms that might crack the code, and nearly all rely on certain
NP problems not being solvable efficiently (in P). So if P=NP there will be a
wild search for new encryption techniques and a lot of Internet
code and protocols will then have to be rewritten.
By the way -- most people who've studied the subject for a long time
would bet that P != NP. If we could prove this then we would have to focus
on heuristics, approximations, and special cases. Again, somebody becomes
rich and famous, and most of us would breathe a sigh of relief.
Ch 10.1 pp 419 -- The Classes P and NP
What is another example of an NP problem besides the traveling salesman problem?
Given a collection of n integers (coded using decimal notation) and another
number, find out if there is a subset of the collection that adds
up to the extra given number.
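A brute-force sketch of this Subset Sum problem in Python (my own illustration, not from the book) shows why it looks hard: checking one candidate subset is cheap, but there are 2^n subsets to try.

```python
from itertools import chain, combinations

def has_subset_with_sum(numbers, target):
    """Brute-force Subset Sum: try every subset of `numbers`.

    Verifying one candidate subset takes polynomial time, but
    there are 2^n subsets, so this search is exponential in n.
    """
    all_subsets = chain.from_iterable(
        combinations(numbers, r) for r in range(len(numbers) + 1))
    return any(sum(subset) == target for subset in all_subsets)

print(has_subset_with_sum([3, 34, 4, 12, 5, 2], 9))   # True: 4 + 5 = 9
print(has_subset_with_sum([3, 34, 4], 10))            # False
```

Nobody knows an algorithm that avoids something like this exponential search in the worst case -- that is exactly the P vs. NP question.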
I'll read out some more in class... then we have a dozen or so to come
in the book.
Can you do an example of 10.1.4, the Traveling Salesman Problem?
Can you clarify why the existence of the "Construct" on page 421 is not sufficient to support the assertion that 'if P2 is not in P, then P1 is not in P'?
It shows that a machine exists but not that it is efficient enough. The
book discusses this --- a bit muddled in my opinion. The accepted measure
of efficiency is halting after a number of steps O(n^c) for a constant c --
so called polynomial time. And the preprocessing therefore has to
be efficient by this measure for the argument to work.
Can you go over Fig. 10.2 and describe the reduction of P1 to P2?
Figure 10.2 summarizes all the following thoughts except that I've used
L1 and L2 rather than P1 and P2:
For languages L1 and L2,
a function f: L1 >-> L2 takes strings in L1 and converts
them into strings of symbols in L2.
Notice this is a many-to-one mapping: two strings may be reduced to
the same string. Also: some strings in L2 may not be images of any
strings in L1 under f.
A function f: L1 >-> L2 is computable if there is a TM that, started
with a string w:L1 just to the right of the head, halts with
f(w) left on the tape, just to the right of the head.
A function f: L1 >-> L2 is polynomial time computable if there is a TM
that, started
with a string w:L1 just to the right of the head, halts with
f(w) left on the tape, just to the right of the head AND has not
taken more than O( |w|^k ) steps for some k.
- (above)|- (ps): If function f: L1 >-> L2 is polynomial time computable then |f(w)| is O(|w|^k) for some k.
In other words |f(w)| is polynomial in |w|.
If we have two languages L1 and L2 and a function f: L1 >-> L2 that is polynomial time computable then we say that
- L1 is polynomial time reducible to L2
Short hand:
- L1 <=[p] L2.
You can prove these two results
- (above)|- (reflexive): L1 <=[p] L1.
- (above)|- (transitive): if L1 <=[p] L2 <=[p] L3 then L1 <=[p] L3.
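Here is a toy polynomial time reduction, sketched in Python (my own example, not from the book). Take L1 = binary strings with an even number of 1s and L2 = strings of even length; the map f keeps just the 1s of w. Then w is in L1 iff f(w) is in L2, and f runs in O(|w|) time.

```python
def f(w: str) -> str:
    """Reduction map: keep only the 1s of w.

    w has an even number of 1s  iff  f(w) has even length,
    so f reduces L1 (even number of 1s) to L2 (even length).
    f runs in O(|w|) time: a polynomial time reduction.
    """
    return "".join(c for c in w if c == "1")

def in_L1(w): return w.count("1") % 2 == 0  # membership in L1
def in_L2(w): return len(w) % 2 == 0        # membership in L2

# spot-check that the reduction preserves membership
for w in ["", "0", "1", "11", "101", "0110"]:
    assert in_L1(w) == in_L2(f(w))
```

Notice it is many-to-one ("101" and "110" both map to "11") and not every string of L2 is an image, matching the remarks about Fig. 10.2 above.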
10.1.6 NP-Complete Problems 422
These are the hardest of the problems that an NTM can still solve in
a feasible (polynomial) time.
We don't know whether there is a polynomial time DTM that can solve them.
Here is a numerical example of an NP-complete problem:
- Given: A finite set A and a map s into the decimal integers, and a target total B.
- Goal: For some subset of A, A' ( \Sigma[a:A'](s(a)) = B ).
[GJ71]
Could you please show in a diagram where NP-Hard relates to the other classes (P, NP, NP-Complete)?
[ complexity1.gif -- diagram of how NP-Hard relates to P, NP, and NP-Complete ]
Why is there no mention of the relation between a Hamiltonian circuit and an MWST?
Definitions of NP-completeness
- NP::={ L : Language || for some NTM M(M always halts in polynomial time and accepts w iff w is in L)}.
- NP_complete::={ L : NP || for all L': NP ( L' <=[p] L ) }.
- NP_hard::={ L : Language || for all L': NP ( L' <=[p] L ) }.
- (above)|-NP-complete = NP-hard & NP.
Could you go over the proof to Theorem 10.4 -- It probably should not have confused me, but it did.
Structure of Proof
Let
- (given)|- (1): P1 is NP_Complete.
- (given)|- (2): P2 is NP.
- (given)|- (3): P1 <=[p] P2.
- (above)|- (goal1): P2 is NP_complete,
Proof of goal1
The definition of NP_complete means goal1 is equivalent to
- P2 in NP and for all L:NP(L<=[p]P2).
So we first prove
- (1, 2, 3)|- (goal2): for all L:NP(L<=[p] P2).
Then
- (goal2, 2)|- (4): P2 in NP and for all L:NP(L<=[p]P2),
- (NP_complete)|-P2 is NP_complete
Let
- (let)|- (5): L is NP.
- (1)|-L<=[p]P1,
- (above)|-for some f:L>->P1 and f is polynomial time computable,
- (ei)|-f:L>->P1 and f is polynomial time computable,
- (def)|-for some polynomial p(n), f given input w halts in time p(|w|),
- (ps)|- (6): for all w, |f(w)| is O(p(|w|)).
- (3)|-for some g:P1>->P2 and polynomial time computable,
- (above)|-for some polynomial q, for all w, g given input w halts in O(q(|w|)) steps,
- (ei)|-q is a polynomial and, for input w, g halts in O(q(|w|)) steps,
- (above)|-on input w, computing f(w) takes p(|w|) steps and produces the input f(w) for g, where |f(w)| is O(p(|w|)),
- (6)|- (7): The function g(f(w)) is computable in time q(p(|w|))+p(|w|).
- (algebra)|-q(p(|w|))+p(|w|) is a polynomial,
- (h=g o f, 7)|-for some h : L>->P2 and h is polynomial time,
- (above)|-L<=[p]P2
(Close Let )
- (above)|-for all L:NP(L<=[p] P2).
(Close Let )
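The heart of the proof is just that composing two polynomial time maps gives a polynomial time map. A minimal Python sketch (my own illustration, not from the book): if f reduces L to P1 in time p(|w|) and g reduces P1 to P2 in time q(n), then h = g o f reduces L to P2 in at most p(|w|) + q(p(|w|)) steps, since f can write at most p(|w|) symbols.

```python
def compose(g, f):
    """h = g o f: if f reduces L to P1 and g reduces P1 to P2,
    then h reduces L to P2.  If f runs in p(|w|) steps and g in
    q(n) steps, then h takes at most p(|w|) + q(p(|w|)) steps,
    because |f(w)| <= p(|w|).  A sum and composition of
    polynomials is a polynomial, hence L <=[p] P2."""
    return lambda w: g(f(w))

# toy stand-ins for f and g (hypothetical, for illustration only):
f = lambda w: w + w      # some linear-time reduction map
g = lambda w: w[::-1]    # some linear-time reduction map
h = compose(g, f)
print(h("abc"))          # prints "cbacba", the reverse of "abcabc"
```

The real content is in the docstring's step count, not in the one-line composition.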
TYPOs !! Exercise 10.1.4
A comma in the second from last line of page 424 should be
one line earlier.
Where can in come from
should be
Where can it come from
SKIP Exercise 10.1.5