• V reads the bits in the C randomly chosen locations from the proof and does the test φ on
them, accepting or rejecting.
• Completeness: If x ∈ L then P can write a proof that V accepts with probability 1.
• Soundness: For every x ∉ L, no matter what proof P writes, V accepts with probability at
most 1/2.
Remark 1.2 This PCP system has “one-sided error”: true statements are always accepted, but
there is a chance the verifier might accept a bogus proof. Note that this chance can be made an
arbitrarily small constant by naive repetition; for example, V can repeat its spot-check 100
times independently, thus reading 100C bits and accepting false proofs with probability at most
2^-100.
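The repetition argument is easy to check empirically. A minimal Monte Carlo sketch (the function name and trial counts are our own choices, not from the text): each spot-check is fooled independently with probability 1/2, and all k must be fooled for a false proof to be accepted.

```python
import random

def repeated_check(single_error: float, k: int, trials: int = 100_000) -> float:
    """Estimate the probability that k independent spot-checks, each of
    which is fooled with probability single_error, are ALL fooled."""
    fooled = 0
    for _ in range(trials):
        if all(random.random() < single_error for _ in range(k)):
            fooled += 1
    return fooled / trials

# With soundness error 1/2 per check, 10 independent repetitions should
# be fooled with probability about 2**-10, roughly 0.001.
print(repeated_check(0.5, 10))
```

The estimate concentrates near 2^-k, matching the analysis above.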
Hardness of approximation studies the algorithmic complexity of finding near-optimal
solutions to optimization problems. Instead of making a formal definition we will just give some
examples. Briefly, these are “find the best solution” versions of classic NP-complete problems.
Definition 1.2 MAX-E3SAT: Given an E3CNF formula — i.e., a conjunction of “clauses” over
boolean variables x1, . . . , xn, where a clause is an OR of exactly 3 literals (a literal being xi or
its negation) — find an assignment to the variables satisfying as many clauses as possible.
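To make the problem concrete, here is a small sketch (the signed-integer clause encoding is our own convention: i stands for the literal xi, -i for its negation). It counts satisfied clauses and brute-forces the optimum over all 2^n assignments, which is exactly the exponential work no efficient algorithm is expected to avoid.

```python
import itertools

def satisfied(clauses, assignment):
    """Count the clauses satisfied by `assignment` (dict: variable -> bool).
    A clause is a tuple of 3 nonzero ints: i means x_i, -i means NOT x_i."""
    def lit(l):
        return assignment[abs(l)] if l > 0 else not assignment[abs(l)]
    return sum(any(lit(l) for l in clause) for clause in clauses)

def best_assignment(clauses, n):
    """Brute-force optimum over all 2^n assignments (exponential time)."""
    return max(
        satisfied(clauses, dict(zip(range(1, n + 1), bits)))
        for bits in itertools.product([False, True], repeat=n)
    )

clauses = [(1, 2, 3), (-1, 2, 3), (1, -2, -3), (-1, -2, -3)]
print(best_assignment(clauses, 3))  # 4: this instance is fully satisfiable
```

Note that a uniformly random assignment satisfies each exactly-3-literal clause with probability 7/8, so even blind guessing achieves a 7/8-fraction of OPT in expectation.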
It is well-known that this problem is NP-hard. But suppose that for MAX-E3SAT there were a
polynomial-time algorithm with the following guarantee: whenever the input instance has
optimum OPT — i.e., there is an assignment satisfying OPT many clauses — the algorithm returns
a solution satisfying 99.9% × OPT many clauses. Such an algorithm would be highly useful, and
would tend to refute the classical notion that the NP-hardness of MAX-E3SAT means there is no
good algorithm for it.
Let us make some definitions to capture these notions. A language L is in RP if there is a
polynomial-time randomized machine M such that
x ∈ L ⇒ Prob[M(x) = 1] ≥ 1/2
x ∉ L ⇒ Prob[M(x) = 0] = 1
and L is in coRP if there is such an M with
x ∈ L ⇒ Prob[M(x) = 1] = 1
x ∉ L ⇒ Prob[M(x) = 0] ≥ 1/2
Some examples that demonstrate how randomness can be a useful tool in computation include
probabilistic primality testing, polynomial identity testing, and testing for a perfect matching in a
bipartite graph. For further details on these topics, refer to your textbook.
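As an illustration of one-sided error, here is a hedged sketch of polynomial identity testing in the univariate black-box setting (the evaluation range and trial count are simplifying choices of ours): two distinct polynomials of degree at most d agree on at most d points, so a random evaluation point exposes a difference with high probability, while identical polynomials are never rejected.

```python
import random

def polys_equal_probably(p, q, degree_bound, trials=20):
    """One-sided-error identity test for black-box integer polynomials.
    If p == q, we always accept; if p != q, each random evaluation point
    catches the difference except with probability <= d / (100d + 1)."""
    for _ in range(trials):
        r = random.randrange(100 * degree_bound + 1)
        if p(r) != q(r):
            return False  # caught a difference: definitely not equal
    return True  # probably equal

p = lambda x: (x + 1) ** 2
q = lambda x: x * x + 2 * x + 1
print(polys_equal_probably(p, q, degree_bound=2))  # True: identical polynomials
```

This has the coRP flavor: “equal” inputs are always accepted, and only unequal pairs can (rarely) fool the test.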
We have the following relations between the probabilistic complexity classes:
ZPP = RP ∩ coRP
RP ⊆ BPP
coRP ⊆ BPP
RP, coRP and ZPP are subclasses of BPP, corresponding to probabilistic algorithms with one-sided
and “zero-sided” error, respectively.
The output of Ci is the bit that this player will communicate in the ith round. Finally, f1, f2 are 0/1
valued functions that the players apply at the end of the protocol to their inputs as well as the
communication pattern in the t rounds in order to compute the output. These two outputs must be
f(x, y). The communication complexity of f is
C(f) = min_{protocols P} max_{(x, y)} {number of bits exchanged by P on (x, y)}
Notice that C(f) ≤ n + 1, since the trivial protocol is for one player to communicate his entire input,
whereupon the second player computes f(x, y) and communicates that single bit to the first. Can
they manage with less communication?
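The trivial protocol can be sketched directly (the transcript bookkeeping is our own framing, just to make the n + 1 bit count visible):

```python
def trivial_protocol(f, x: str, y: str):
    """The trivial protocol: player 1 sends all n bits of x; player 2,
    who holds y, computes f(x, y) and sends the single answer bit back.
    Total communication: n + 1 bits."""
    transcript = list(x)               # n bits from player 1 to player 2
    answer = f(x, y)                   # player 2 now knows both inputs
    transcript.append(str(answer))     # 1 bit back to player 1
    return answer, len(transcript)

eq = lambda a, b: int(a == b)          # the EQUALITY function
print(trivial_protocol(eq, "1011", "1011"))  # (1, 5): answer 1, n + 1 = 5 bits
```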
Communication Complexity Classes
We can define the following communication complexity classes:
PCC, RPCC, BPPCC, NPCC, PPCC, ZPPCC
as the sets of functions f(x, y) that can be computed with poly-logarithmic communication under
the corresponding mode of computation.
RPCC consists of those functions that can be computed with polylog(n) communication by a
randomized, 1-sided error protocol. That is, on "yes" instances the protocol is correct with
probability 1, and on "no" instances it errs with probability at most 1/4.
BPPCC consists of those functions that can be computed with polylog(n) communication by a
randomized, 2-sided error protocol. On "yes" instances the protocol is correct with probability at
least 3/4, and on "no" instances it is also correct with probability at least 3/4. Note that the error is
2-sided but bounded away from 1/2.
PPCC consists of those functions that can be computed with polylog(n) communication by a
randomized protocol with unbounded error. That is, on all instances the protocol is correct with
probability greater than 1/2.
Finally, ZPPCC consists of those functions that can be computed by a zero-error randomized
protocol whose expected communication is polylog(n).
Remark: Unlike the computational complexity classes, where no non-trivial separation is known,
here we know almost everything, i.e.,
PCC ⊊ RPCC ⊊ BPPCC ⊊ NPCC.
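The EQUALITY function is a standard witness for separating deterministic from randomized communication: deterministically it needs about n bits, while a randomized fingerprinting protocol uses only O(log n) bits per round. A hedged sketch (the prime range n^2 and the round count are our own choices):

```python
import random

def random_prime(bound: int) -> int:
    """Uniformly random prime below `bound` (trial division; toy scale)."""
    primes = [p for p in range(2, bound)
              if all(p % d for d in range(2, int(p ** 0.5) + 1))]
    return random.choice(primes)

def eq_protocol(x: str, y: str, trials: int = 10) -> bool:
    """Fingerprinting protocol for EQUALITY on n-bit strings: each round,
    player 1 sends a random prime p and int(x) mod p -- O(log n) bits --
    and player 2 compares with int(y) mod p.  Equal inputs pass every
    round; unequal inputs slip through a round only when p happens to
    divide their difference, which only a few primes can do."""
    bound = max(8, max(len(x), len(y)) ** 2)
    for _ in range(trials):
        p = random_prime(bound)
        if int(x, 2) % p != int(y, 2) % p:
            return False  # certainly unequal: the error is one-sided
    return True

print(eq_protocol("110101", "110101"))  # True: equal inputs never rejected
```

With the convention above ("yes" instances always accepted, "no" instances rarely fooled), this matches the one-sided error of RPCC.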
The RSA algorithm (named after its inventors Rivest, Shamir, and Adleman) is an example of a
public-key cryptography protocol.
You can study how RSA works in Chapter 8 of the Complexity Theory textbook.
RSA rests on the assumption that there is no polynomial-time algorithm for factoring. (In
contrast, note that we can decide in polynomial time whether a number is prime or not.)
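As a toy illustration of the arithmetic involved (the parameters are the classic small textbook example and are far too small to be secure), the RSA operations themselves are simple modular exponentiation; only inverting them without the private key is believed to require factoring n:

```python
# Toy RSA key generation; real security rests on the assumed hardness
# of factoring n = p * q for large random primes p and q.
p, q = 61, 53
n = p * q                    # public modulus, 3233
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # private exponent: e*d ≡ 1 (mod phi)

m = 65                       # a message, encoded as a number below n
c = pow(m, e, n)             # encryption uses only the public key (n, e)
assert pow(c, d, n) == m     # decryption with d recovers the message
print(n, e, d, c)            # 3233 17 2753 2790
```

Note that `pow(e, -1, phi)` (modular inverse via three-argument `pow`) requires Python 3.8+.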
One-way functions
Informally, a one-way (or trapdoor) function is a function that is easy to compute, but hard to
invert. Such functions may serve as the basis for a cryptographic scheme.
Note first that some easily computable functions may be hard to invert for trivial reasons. For
example, if the function is not injective then there is no well-defined inverse at all. We may
counter this by requiring only that the inverting algorithm output some preimage, which need not
be unique. A second trivial reason that a function may not be easily invertible is that it might map
large strings to small ones.
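A widely conjectured candidate one-way function is multiplication of two primes. A minimal sketch of the asymmetry (trial division here is only a stand-in: the best known factoring algorithms are subexponential, but nothing polynomial is known):

```python
def forward(p: int, q: int) -> int:
    """The easy direction: multiplication takes polynomial time."""
    return p * q

def invert(n: int):
    """The (conjecturally) hard direction: recover a nontrivial
    factorization.  Trial division takes time exponential in the bit
    length of n."""
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return d, n // d
    return None  # n is prime: no nontrivial factorization exists

n = forward(101, 103)
print(invert(n))  # (101, 103)
```

The forward direction uses a constant number of arithmetic operations, while the inverter examines up to √n candidate divisors.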
Note that the existence of one-way functions implies that P ≠ NP. (Because of the honesty,
computing the – unique – inverse of f(x) is a typical NP task: we can guess an inverse and use f to
check its correctness.) Even if we assume that P ≠ NP, however, the existence of one-way
functions is not known. Their existence is tied to a special complexity class called the class UP.
Definition 8.4.1. Call a nondeterministic Turing machine unambiguous if for every input there is
at most one accepting computation. UP is the class of sets that are accepted by some unambiguous
machine in polynomial time.
Obviously, we have the inclusions P ⊆ UP ⊆ NP. Nothing more than this is known. The following
result ties the class UP to the existence of one-way functions.
Conversely, if L is a set in UP − P then we can define a one-way function f as follows. Suppose M
is an unambiguous machine accepting L. Given a computation path y of M on input x, we let f
map y to 1x if y is accepting, and to 0y if y is not accepting. (Note that x can be effectively
retrieved from y.) Since accepting paths of M are unique, f is injective. Also, f is honest, since the
computations of M are of polynomial length. So to prove that f is one-way it suffices to show that
its inverse cannot be computed in polynomial time. (Note that f is not surjective, so the inverse is
only defined on the range of f.) Suppose we could compute the inverse in polynomial time. Then
we could decide L in polynomial time as follows: given x, compute f^{-1}(1x). If this yields a y,
we know that x ∈ L, and if not we know that x ∉ L. This contradicts the assumption that L ∉ P;
hence f is one-way.
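The construction in this proof can be mimicked with a toy unambiguous verifier. Here L is the set of perfect squares, whose unique certificate is the nonnegative square root; the comma-tagged string encoding and all names are our own illustrative choices.

```python
def verifier(x: int, r: int) -> bool:
    """Unambiguous verifier for L = {perfect squares}: every x in L has
    exactly one accepting certificate, its nonnegative square root."""
    return r >= 0 and r * r == x

def f(x: int, r: int) -> str:
    """The function from the proof: accepting pairs map to a '1'-tagged
    copy of x; everything else maps to a '0'-tagged copy of the whole
    pair, so f is injective and honest."""
    return f"1,{x}" if verifier(x, r) else f"0,{x},{r}"

def brute_inverse(z: str):
    """Stand-in for the assumed polynomial-time inverter (this brute
    force is exponential in the bit length of x)."""
    tag, rest = z.split(",", 1)
    if tag == "0":
        x, r = map(int, rest.split(","))
        return (x, r)
    x = int(rest)
    for r in range(x + 1):
        if verifier(x, r):
            return (x, r)
    return None  # z is not in the range of f

def in_L(x: int) -> bool:
    """If f could be inverted in polynomial time, this would place L
    in P: x is in L iff '1,x' has a preimage under f."""
    return brute_inverse(f"1,{x}") is not None

print(in_L(49), in_L(50))  # True False
```

Membership in L reduces exactly to inverting f on a '1'-tagged input, which is the contradiction the proof exploits.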