For any language $L$: $L \in IP \Rightarrow \exists V, P$ such that $\forall Q, w$:
* $w \in L \Rightarrow \Pr[V \leftrightarrow P \text{ accepts } w] \ge \frac{2}{3}$
* $w \notin L \Rightarrow \Pr[V \leftrightarrow Q \text{ accepts } w] \le \frac{1}{3}$

The [[Arthur-Merlin protocol]], introduced by [[Laszlo Babai]], is similar in nature, except that the number of rounds of interaction is bounded by a constant rather than a polynomial.

Goldwasser and Sipser have shown that ''public-coin'' protocols, in which the random numbers used by the verifier are provided to the prover along with the challenges, are just as powerful as private-coin protocols: at most two additional rounds of interaction are required to replicate the effect of a private-coin protocol.

In the following section we prove that $IP = PSPACE$, an important theorem in computational complexity, which demonstrates that an interactive proof system can be used to decide whether a string is a member of a language, with the verifier running in polynomial time, even though the traditional '''[[PSPACE]]''' proof may be exponentially long.

==Proof that IP = PSPACE==
To prove that '''IP''' and '''PSPACE''' are equal, we show that '''IP''' is a subset of '''PSPACE''' and that '''PSPACE''' is a subset of '''IP'''; hence the two are equivalent. To demonstrate that $IP \subseteq PSPACE$, we present a simulation of an interactive proof system by a polynomial-space machine. To prove that $PSPACE \subseteq IP$, we show that the '''PSPACE'''-complete language TQBF is in '''IP'''. Both parts of the proof are adapted from Sipser.

===IP is a subset of PSPACE===
Let A be a language in '''IP'''.
Now, assume that on input $w$ of length $n$, A's verifier $V$ exchanges exactly $p = p(n)$ messages. We construct a machine $M$ that simulates $V$ and runs in polynomial space. $M$ computes the value

$\Pr[V \text{ accepts } w] = \max_P \Pr[V \leftrightarrow P \text{ accepts } w].$

By the definition of '''IP''', we have $\Pr[V \text{ accepts } w] \ge \frac{2}{3}$ if $w \in A$ and $\Pr[V \text{ accepts } w] \le \frac{1}{3}$ if $w \notin A$.

Now, it must be shown that this value can be calculated in polynomial space. Let $M_j$ denote the sequence of messages $m_1 \# \ldots \# m_j$ exchanged by the prover and the verifier, and generalize the interaction of $V$ and $P$ to start from an arbitrary message stream $M_j$. We take $(V \leftrightarrow P)(w, r, M_j) = \text{accept}$ if $M_j$ can be extended with the messages $m_{j+1}$ through $m_p$ such that:
* for $j \leq i < p$, where $i$ is even, $V(w, r, M_i) = m_{i+1}$;
* for $j \leq i < p$, where $i$ is odd, $P(w, r, M_i) = m_{i+1}$;
* the final message $m_p$ in the message history is accept.
In other words, when $i$ is even the verifier sends a message, when $i$ is odd the prover sends a message, and the final message is accept. The first two rules ensure that the message sequence is valid, and the third ensures that this message sequence leads to acceptance.
Next, further generalizing the earlier definitions, and taking a random string $r$ of length $p$, we define:

$\Pr[V \leftrightarrow P \text{ accepts } w \text{ starting at } M_j] = \Pr[(V \leftrightarrow P)(w, r, M_j) = \text{accept}].$

Now we can define:

$\Pr[V \text{ accepts } w \text{ starting at } M_j] = \max_P \Pr[V \leftrightarrow P \text{ accepts } w \text{ starting at } M_j]$

and for every $0 \leq j \leq p$ and every message history $M_j$, we inductively define the function $N_{M_j}$:

$N_{M_j} = \begin{cases} 0 & j = p \text{ and } m_p = \text{reject} \\ 1 & j = p \text{ and } m_p = \text{accept} \\ \max_{m_{j+1}} N_{M_{j+1}} & j < p \text{ and } j \text{ is odd} \\ \operatorname{wt-avg}_{m_{j+1}} N_{M_{j+1}} & j < p \text{ and } j \text{ is even} \end{cases}$

where the term $\operatorname{wt-avg}_{m_{j+1}} N_{M_{j+1}}$ is defined as follows:

$\operatorname{wt-avg}_{m_{j+1}} N_{M_{j+1}} = \sum_{m_{j+1}} \Pr_r[V(w, r, M_j) = m_{j+1}] \cdot N_{M_{j+1}}$

where $\Pr_r$ is the probability taken over the random string $r$ of length $p$. This expression is the average of $N_{M_{j+1}}$, weighted by the probability that the verifier sent the message $m_{j+1}$.

Take $M_0$ to be the empty message sequence. We will show that $N_{M_0}$ can be computed in polynomial space, and that $N_{M_0} = \Pr[V \text{ accepts } w]$. First, to compute $N_{M_0}$, an algorithm can recursively calculate the values $N_{M_j}$ for every $j$ and $M_j$.
Since the depth of the recursion is $p$, only polynomial space is necessary. The second requirement is that $N_{M_0} = \Pr[V \text{ accepts } w]$, the value needed to determine whether $w$ is in A. We prove this by showing that for every $0 \leq j \leq p$ and every $M_j$, $N_{M_j} = \Pr[V \text{ accepts } w \text{ starting at } M_j]$, using induction on $j$, with base case $j = p$ and working down from $p$ to $0$.

The base case $j = p$ is fairly simple. Since $m_p$ is either accept or reject, if $m_p$ is accept, $N_{M_p}$ is defined to be 1 and $\Pr[V \text{ accepts } w \text{ starting at } M_p] = 1$ since the message stream indicates acceptance; thus the claim holds. If $m_p$ is reject, the argument is very similar.

For the inductive hypothesis, we assume that for some $j + 1 \leq p$ and any message sequence $M_{j+1}$, $N_{M_{j+1}} = \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}]$, and we prove the claim for $j$ and any message sequence $M_j$.

If $j$ is even, $m_{j+1}$ is a message from $V$ to $P$. By the definition of $N_{M_j}$,

$N_{M_j} = \sum_{m_{j+1}} \Pr_r[V(w, r, M_j) = m_{j+1}] \cdot N_{M_{j+1}}.$

Then, by the inductive hypothesis, this equals

$\sum_{m_{j+1}} \Pr_r[V(w, r, M_j) = m_{j+1}] \cdot \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}],$

which by definition equals $\Pr[V \text{ accepts } w \text{ starting at } M_j]$.

If $j$ is odd, $m_{j+1}$ is a message from $P$ to $V$. By definition, $N_{M_j} = \max_{m_{j+1}} N_{M_{j+1}}$.
Then, by the inductive hypothesis, this equals $\max_{m_{j+1}} \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}]$. This is equal to $\Pr[V \text{ accepts } w \text{ starting at } M_j]$ since:

$\max_{m_{j+1}} \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}] \leq \Pr[V \text{ accepts } w \text{ starting at } M_j]$

because the prover on the right-hand side could send the message $m_{j+1}$ that maximizes the expression on the left-hand side, and:

$\max_{m_{j+1}} \Pr[V \text{ accepts } w \text{ starting at } M_{j+1}] \geq \Pr[V \text{ accepts } w \text{ starting at } M_j]$

since a prover cannot do any better than send that same best message. Thus the claim holds whether $j$ is even or odd, and the proof that '''IP''' $\subseteq$ '''PSPACE''' is complete.

Here we have constructed a polynomial-space machine that uses the best prover $P$ for a particular string $w$ in language A. We use this best prover in place of a prover with random input bits because we are able to try every set of random input bits in polynomial space. Since we have simulated an interactive proof system with a polynomial-space machine, we have shown that '''IP''' $\subseteq$ '''PSPACE''', as desired.
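The recursion defining $N_{M_j}$ can be made concrete with a small sketch. Everything below is a toy model, not from the source: messages are single bits, the verifier is a deterministic function of the input, the random string, and the history, and the set of random strings consistent with the history is carried along so the weighted average uses the correct probabilities.

```python
from fractions import Fraction
from itertools import product

RAND_LEN = 2  # toy choice; a real simulation would use r of length poly(n)

def N(w, V, p, history=(), R=None):
    """Recursive value N_{M_j} against the best prover: a weighted average over
    the verifier's possible messages on even j, a max over the prover's on odd j.
    R is the set of random strings still consistent with the history."""
    j = len(history)
    if R is None:
        R = list(product((0, 1), repeat=RAND_LEN))
    if j == p:
        return Fraction(1) if history[-1] == 'accept' else Fraction(0)
    if j % 2 == 0:
        # verifier's move: group the random strings by the message V would send
        groups = {}
        for r in R:
            groups.setdefault(V(w, r, history), []).append(r)
        return sum(Fraction(len(rs), len(R)) * N(w, V, p, history + (m,), rs)
                   for m, rs in groups.items())
    # prover's move: the optimal prover picks the maximizing message
    return max(N(w, V, p, history + (m,), R) for m in (0, 1))

def echo_verifier(w, r, history):
    """m1: a random challenge bit; final message: accept iff P echoed it."""
    if not history:
        return r[0]
    return 'accept' if history[1] == history[0] else 'reject'

def guess_verifier(w, r, history):
    """m1: the bit r[0]; final message: accept iff P guessed the hidden bit r[1]."""
    if not history:
        return r[0]
    return 'accept' if history[1] == r[1] else 'reject'

print(N('w', echo_verifier, 3))   # 1: the best prover always wins
print(N('w', guess_verifier, 3))  # 1/2: no prover beats a coin flip
```

The recursion depth is $p$, matching the polynomial-space argument above; the explicit enumeration of random strings here is what a '''PSPACE''' machine would do by reusing space.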

$\Box$

===PSPACE is a subset of IP===
To illustrate the technique used to prove $PSPACE \subseteq IP$, we first prove a weaker theorem, proven by Lund et al.: $\#SAT \in IP$. Then, using the concepts from this proof, we extend it to show that $TQBF \in IP$. Since TQBF is '''PSPACE'''-complete and $TQBF \in IP$, it follows that '''PSPACE''' $\subseteq$ '''IP'''.

====#SAT is a member of IP====
We begin by showing that $\#SAT \in IP$, where:

$\#SAT = \{ \langle \phi, k \rangle \mid \phi$ is a CNF formula with exactly $k$ satisfying assignments$\}$.

Note that this is different from the normal definition of [[Sharp-P|#SAT]], in that it is a decision problem rather than a function.

First we use arithmetization to map the Boolean formula with $n$ variables, $\phi(b_1, b_2, \ldots, b_n)$, to a polynomial $p_\phi(x_1, x_2, \ldots, x_n)$, where $p_\phi$ mimics $\phi$ in that $p_\phi$ is 1 if $\phi$ is true and 0 otherwise, provided that the variables of $p_\phi$ are assigned Boolean values. The Boolean operations $\vee$, $\wedge$, and $\neg$ used in $\phi$ are simulated in $p_\phi$ by replacing the operators in $\phi$ as shown in the table below.

{| border="1" cellpadding="2"
|+ Arithmetization rules for converting a Boolean formula $\phi(b_1, b_2, \ldots, b_n)$ to a polynomial $p_\phi(x_1, x_2, \ldots, x_n)$
|-
| $a \wedge b$ || $ab$
|-
| $a \vee b$ || $a * b =_{def} 1 - (1 - a)(1 - b)$
|-
| $\neg a$ || $1 - a$
|}

As an example, $\phi = a \wedge (b \vee \neg c)$ would be converted into a polynomial as follows:
* $p_\phi = a \wedge (b \vee \neg c)$
* $p_\phi = a \wedge (1 - (1 - b)(1 - (1 - c)))$
* $p_\phi = a(1 - (1 - b)c)$
* $p_\phi = a - (ac - abc)$

The operations $ab$ and $a * b$ each result in a polynomial with degree bounded by the sum of the degrees of the polynomials for $a$ and $b$, and hence the degree of any variable is at most the length of $\phi$.

Now let $F$ be a finite field with order $q > 2^n$; also demand that $q$ be at least 1000.
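The worked example above can be checked directly. The sketch below encodes the formula and its arithmetization as plain Python functions (our own illustration, not part of the protocol):

```python
from itertools import product

# phi = a AND (b OR NOT c), arithmetized with the rules in the table:
# AND -> multiplication, OR -> a*b = 1-(1-a)(1-b), NOT -> 1-a.
def phi(a, b, c):
    return a and (b or not c)

def p_phi(a, b, c):
    # a AND (1 - (1-b)(1-(1-c)))  ->  a(1 - (1-b)c)  ->  a - ac + abc
    return a * (1 - (1 - b) * (1 - (1 - c)))

# On Boolean inputs the polynomial agrees exactly with the formula:
for a, b, c in product((0, 1), repeat=3):
    assert p_phi(a, b, c) == int(phi(a, b, c))
print("polynomial matches formula on all 8 Boolean inputs")
```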
For each $0 \leq i \leq n$ and for $a_1, \ldots, a_i \in F$, define the function $f_i$ of $i$ variables:

$f_i(a_1, \ldots, a_i) = \sum_{a_{i+1}, \ldots, a_n \in \{0, 1\}} p_\phi(a_1, \ldots, a_n).$

Note that $f_0$ takes no arguments, and its value $f_0()$ is the number of satisfying assignments of $\phi$.

Now the protocol for $\#SAT$ works as follows:
* '''Phase 0''':

The prover $P$ chooses a prime $q > 2^n$ and computes $f_0$; it then sends $q$ and $f_0()$ to the verifier $V$. $V$ checks that $q$ is a prime greater than $\max(1000, 2^n)$ and that $f_0() = k$.
* '''Phase 1''':

$P$ sends the coefficients of $f_1(z)$ as a polynomial in $z$. $V$ verifies that the degree of $f_1$ is less than $n$ and that $f_0() = f_1(0) + f_1(1)$; if not, $V$ rejects. $V$ then sends a random number $r_1$ from $F$ to $P$.
* '''Phase i''':

$P$ sends the coefficients of $f_i(r_1, \ldots, r_{i-1}, z)$ as a polynomial in $z$. $V$ verifies that the degree of $f_i$ is less than $n$ and that $f_{i-1}(r_1, \ldots, r_{i-1}) = f_i(r_1, \ldots, r_{i-1}, 0) + f_i(r_1, \ldots, r_{i-1}, 1)$; if not, $V$ rejects. $V$ then sends a random number $r_i$ from $F$ to $P$.
* '''Phase n+1''':

$V$ evaluates $p_\phi(r_1, \ldots, r_n)$ and compares it to the value $f_n(r_1, \ldots, r_n)$. If they are equal, $V$ accepts; otherwise $V$ rejects. Note that this is a public-coin algorithm.

If $\phi$ has $k$ satisfying assignments, clearly $V$ will accept. If $\phi$ does not have $k$ satisfying assignments, we assume there is a prover $\tilde P$ that tries to convince $V$ that $\phi$ does have $k$ satisfying assignments, and we show that this can succeed only with low probability. To prevent $V$ from rejecting in phase 0, $\tilde P$ has to send an incorrect value $\tilde f_0()$ to $V$. Then, in phase 1, $\tilde P$ must send an incorrect polynomial $\tilde f_1$ with the property that $\tilde f_1(0) + \tilde f_1(1) = \tilde f_0()$. When $V$ chooses a random $r_1$ to send to $P$,

$\Pr[\tilde f_1(r_1) = f_1(r_1)] < n^{-2}.$

This is because a nonzero polynomial in a single variable of degree at most $d$ can have no more than $d$ roots, so any two distinct polynomials in a single variable of degree at most $d$ can agree in at most $d$ places. Since $|F| > 2^n$, the chance of $r_1$ being one of these values is at most $n/2^n < n/n^3$ if $n > 10$, or at most $n/1000 \leq n/n^3$ if $n \leq 10$.

Generalizing this idea to the other phases: for each $1 \leq i \leq n$, if $\tilde f_{i-1}(r_1, \ldots, r_{i-1}) \neq f_{i-1}(r_1, \ldots, r_{i-1})$, then for $r_i$ chosen randomly from $F$,

$\Pr[\tilde f_i(r_1, \ldots, r_i) = f_i(r_1, \ldots, r_i)] \leq n^{-2}.$

There are $n$ phases, so the probability that $\tilde P$ is lucky because $V$ selects a convenient $r_i$ at some stage is at most $1/n$.
So, no prover can make the verifier accept with probability greater than $1/n$. We can also see from the definition that the verifier $V$ operates in probabilistic polynomial time. Thus, $\#SAT \in IP$.

====TQBF is a member of IP====
In order to show that '''PSPACE''' is a subset of '''IP''', we need to choose a '''PSPACE'''-complete problem and show that it is in '''IP'''. Once we show this, it is clear that '''PSPACE''' $\subseteq$ '''IP'''. The proof technique demonstrated here is credited to [[Adi Shamir]].

We know that TQBF is '''PSPACE'''-complete. So let $\psi$ be a quantified Boolean expression:

$\psi = Q_1 x_1 Q_2 x_2 \ldots Q_m x_m [\phi]$

where $\phi$ is a CNF formula and each $Q_i$ is a quantifier, either $\exists$ or $\forall$. Now $f_i$ is the same as in the previous proof, but it also accounts for the quantifiers:

$f_i(a_1, \ldots, a_i) = \begin{cases} 1 & \text{if } Q_{i+1} x_{i+1} \ldots Q_m x_m [\phi(a_1, \ldots, a_i)] \text{ is true} \\ 0 & \text{otherwise} \end{cases}$

Here, $\phi(a_1, \ldots, a_i)$ is $\phi$ with $a_1$ to $a_i$ substituted for $x_1$ to $x_i$. Thus $f_0()$ is the truth value of $\psi$. In order to arithmetize $\psi$ we use the following rules:

* if $Q_{i+1} = \forall$: $f_i(a_1, \ldots, a_i) = f_{i+1}(a_1, \ldots, a_i, 0) \cdot f_{i+1}(a_1, \ldots, a_i, 1)$
* if $Q_{i+1} = \exists$: $f_i(a_1, \ldots, a_i) = f_{i+1}(a_1, \ldots, a_i, 0) * f_{i+1}(a_1, \ldots, a_i, 1)$

where, as before, we define $x * y = 1 - (1 - x)(1 - y)$.

If we use the method described for $\#SAT$ directly, we face a problem: for any $f_i$, the degree of the resulting polynomial may double with each quantifier.
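The quantifier rules above can be sanity-checked on a tiny example. The helper names `forall` and `exists` below are our own illustration:

```python
# Quantifier arithmetization: on 0/1 values,
#   forall x [g]  ->  g(0) * g(1)               (1 iff both are 1)
#   exists x [g]  ->  1 - (1 - g(0))(1 - g(1))  (1 iff at least one is 1)
def forall(g):
    return g(0) * g(1)

def exists(g):
    return 1 - (1 - g(0)) * (1 - g(1))

# psi  = forall x exists y [x <-> y] is true; its arithmetization gives 1
psi = forall(lambda x: exists(lambda y: 1 if x == y else 0))
assert psi == 1

# psi2 = exists x forall y [x <-> y] is false; it gives 0
psi2 = exists(lambda x: forall(lambda y: 1 if x == y else 0))
assert psi2 == 0
```

Note that each `forall`/`exists` multiplies two copies of the inner polynomial, which is exactly why the degree can double with each quantifier.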
In order to prevent this, we introduce a new reduction operator $R$ which reduces the degree of the polynomial without changing its behavior on Boolean inputs. So, before we arithmetize $\psi = Q_1 x_1 Q_2 x_2 \ldots Q_m x_m [\phi]$, we introduce a new expression:

$\psi' = Q_1 x_1 R_{x_1} Q_2 x_2 R_{x_1} R_{x_2} \ldots Q_m x_m R_{x_1} \ldots R_{x_m} [\phi]$

or, written another way:

$\psi' = S_1 y_1 S_2 y_2 \ldots S_k y_k [\phi], \quad \text{where } S_i \in \{\forall, \exists, R\} \text{ and } y_i \in \{x_1, x_2, \ldots, x_m\}.$

Now for every $i \leq k$ we define the function $f_i$. We define $f_k(x_1, x_2, \ldots, x_m)$ to be the polynomial $p(x_1, x_2, \ldots, x_m)$ obtained by arithmetizing $\phi$. In order to keep the degree of the polynomial low, we define $f_i$ in terms of $f_{i+1}$:

* if $S_{i+1} = \forall$: $f_i(a_1, \ldots, a_i) = f_{i+1}(a_1, \ldots, a_i, 0) \cdot f_{i+1}(a_1, \ldots, a_i, 1)$
* if $S_{i+1} = \exists$: $f_i(a_1, \ldots, a_i) = f_{i+1}(a_1, \ldots, a_i, 0) * f_{i+1}(a_1, \ldots, a_i, 1)$
* if $S_{i+1} = R$: $f_i(a_1, \ldots, a_i, a) = (1 - a) f_{i+1}(a_1, \ldots, a_i, 0) + a f_{i+1}(a_1, \ldots, a_i, 1)$

The reduction operation $R$ does not increase the degree of the polynomial, and, importantly, it does not change the value of the function on Boolean inputs. So $f_0()$ is still the truth value of $\psi$, but the $R_x$ operation produces a result that is linear in $x$. This is why, after each $Q_i x_i$, we added $R_{x_1} \ldots R_{x_i}$ in $\psi'$: to reduce the degree of every variable to 1 after arithmetizing $Q_i$.
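A minimal sketch of the operator $R_x$, treating polynomials as Python functions (our own toy representation):

```python
# The degree-reduction operator R_x, applied to a univariate polynomial g:
#   (R_x g)(x) = (1 - x) * g(0) + x * g(1)
def reduce_x(g):
    return lambda x: (1 - x) * g(0) + x * g(1)

g = lambda x: x**3      # degree 3, but it equals x on Boolean inputs
h = reduce_x(g)         # linear in x

assert h(0) == g(0) and h(1) == g(1)  # unchanged on Boolean inputs...
assert h(5) == 5                      # ...but h(x) = x: degree reduced to 1
print("R_x preserved Boolean behavior and linearized the polynomial")
```

Off the Boolean points the values change ($g(5) = 125$ while $h(5) = 5$), which is harmless: the protocol only needs agreement on $\{0, 1\}$ plus a low degree for the soundness bound.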
Now let us describe the protocol. If $n$ is the length of $\psi$, all arithmetic operations in the protocol are over a field of size at least $n^4$.
* '''Phase 0''': $P \rightarrow V$: $P$ sends $f_0()$ to $V$. $V$ checks that $f_0() = 1$ and rejects if not.
* '''Phase 1''':

$P \rightarrow V$: $P$ sends $f_1(z)$ to $V$. $V$ uses the coefficients to evaluate $f_1(0)$ and $f_1(1)$. It then checks that the polynomial's degree is at most $n$ and that the following identity holds:

$f_0() = \begin{cases} f_1(0) \cdot f_1(1) & \text{if } S_1 = \forall \\ f_1(0) * f_1(1) & \text{if } S_1 = \exists \\ (1 - r) f_1(0) + r f_1(1) & \text{if } S_1 = R \end{cases}$

If the check fails, $V$ rejects.

$V \rightarrow P$: $V$ picks a random $r$ in $F$ and sends it to $P$.

* '''Phase i''': $P \rightarrow V$: $P$ sends $f_i(r_1, r_2, \ldots, r_{i-1}, z)$ as a polynomial in $z$, where $r_1, r_2, \ldots, r_{i-1}$ are the previously chosen random values. $V$ uses the coefficients to evaluate $f_i(r_1, \ldots, r_{i-1}, 0)$ and $f_i(r_1, \ldots, r_{i-1}, 1)$. It then checks that the polynomial's degree is at most $n$ and that the following identity holds:

$f_{i-1}(r_1, \ldots, r_{i-1}) = \begin{cases} f_i(r_1, \ldots, r_{i-1}, 0) \cdot f_i(r_1, \ldots, r_{i-1}, 1) & \text{if } S_i = \forall \\ f_i(r_1, \ldots, r_{i-1}, 0) * f_i(r_1, \ldots, r_{i-1}, 1) & \text{if } S_i = \exists \\ (1 - r) f_i(r_1, \ldots, r_{i-1}, 0) + r f_i(r_1, \ldots, r_{i-1}, 1) & \text{if } S_i = R \end{cases}$

If the check fails, $V$ rejects.

$V \rightarrow P$: $V$ picks a random $r$ in $F$ and sends it to $P$. (If $S_i = R$, this $r$ replaces the previous $r$.) Go to phase $i+1$, where $P$ must persuade $V$ that $f_i(r_1, \ldots, r)$ is correct.

* '''Phase k+1''':

$V$ evaluates $p(r_1, r_2, \ldots, r_m)$ and checks whether $p(r_1, r_2, \ldots, r_m) = f_k(r_1, r_2, \ldots, r_m)$. If they are equal, $V$ accepts; otherwise $V$ rejects.

This is the end of the protocol description. If $\psi$ is true, then $V$ will accept when $P$ follows the protocol. Conversely, suppose $\tilde{P}$ is a malicious prover and $\psi$ is false. Then $\tilde{P}$ must lie at phase 0 and send some incorrect value for $f_0()$. If at phase $i$, $V$ has an incorrect value for $f_{i-1}(r_1, \ldots)$, then $f_i(r_1, \ldots, 0)$ and $f_i(r_1, \ldots, 1)$ will likely also be incorrect, and so forth. The probability that $\tilde{P}$ gets lucky on some random $r$ is at most the degree of the polynomial divided by the field size: $n/n^4$. The protocol runs through $O(n^2)$ phases, so the probability that $\tilde{P}$ gets lucky at some phase is at most $1/n$. If $\tilde{P}$ is never lucky, then $V$ will reject at phase $k+1$.

Since we have now shown both '''IP''' $\subseteq$ '''PSPACE''' and '''PSPACE''' $\subseteq$ '''IP''', we can conclude that '''IP''' = '''PSPACE''', as desired. Moreover, we have shown that any '''IP''' algorithm may be taken to be public-coin, since the reduction from '''PSPACE''' to '''IP''' has this property.

$\Box$

== Variants ==
There are a number of variants of '''IP''' which slightly modify the definition of the interactive proof system. We summarize some of the better-known ones here.

=== MIP ===
''Main article: [[Interactive proof system#MIP]]''

In 1988, Goldwasser et al. created an even more powerful interactive proof system based on '''IP''' called '''MIP''', in which there are ''two'' independent provers. The two provers cannot communicate once the verifier has begun sending messages to them.
Just as it is easier to tell whether a criminal is lying if he and his partner are interrogated in separate rooms, it is considerably easier to detect a malicious prover trying to trick the verifier if there is another prover it can double-check with. In fact, this is so helpful that Babai, Fortnow, and Lund were able to show that '''MIP''' = '''NEXPTIME''', the class of all problems solvable by a [[nondeterministic Turing machine|nondeterministic]] machine in ''exponential time'', a very large class. Moreover, all languages in '''NP''' have zero-knowledge proofs in an '''MIP''' system, without any additional assumptions; this is only known for '''IP''' assuming the existence of one-way functions.

=== IPP ===
'''IPP''' (''unbounded IP'') is a variant of '''IP''' where we replace the '''[[BPP]]''' verifier by a '''[[PP (complexity)|PP]]''' verifier. More precisely, we modify the completeness and soundness conditions as follows:
* '''Completeness''': if a string is in the language, the honest verifier will be convinced of this fact by an honest prover with probability at least 1/2.
* '''Soundness''': if the string is not in the language, no prover can convince the honest verifier that it is in the language, except with probability less than 1/2.

Although '''IPP''' also equals '''PSPACE''', '''IPP''' protocols behave quite differently from '''IP''' with respect to [[oracle machine|oracles]]: '''IPP''' = '''PSPACE''' with respect to all oracles, while '''IP''' ≠ '''PSPACE''' with respect to almost all oracles. R. Chang, B. Chor, Oded Goldreich, J. Hartmanis, J. Håstad, D. Ranjan, and P. Rohatgi. [http://citeseer.ist.psu.edu/chang97random.html The random oracle hypothesis is false]. ''Journal of Computer and System Sciences'', 49(1):24-39. 1994.

=== QIP ===
'''QIP''' is a version of '''IP''' replacing the '''[[BPP]]''' verifier by a '''[[BQP]]''' verifier, where '''BQP''' is the class of problems solvable by [[quantum computer]]s in polynomial time.
The messages are composed of qubits. J. Watrous. [http://citeseer.ist.psu.edu/watrous99pspace.html PSPACE has constant-round quantum interactive proof systems]. ''Proceedings of IEEE FOCS'99'', pp. 112-119. 1999. It is not yet known whether '''QIP''' strictly contains '''IP''' (that is, whether quantum computation adds power to interactive proofs), but it is known that '''QIP''' = '''QIP'''[3], so that more than three rounds are never necessary. Also, '''QIP''' is contained in '''[[EXPTIME]]'''. A. Kitaev and J. Watrous. [http://www.cpsc.ucalgary.ca/~jwatrous/papers/qip2.ps Parallelization, amplification, and exponential time simulation of quantum interactive proof systems]. ''Proceedings of ACM STOC'2000'', pp. 608-617. 2000.

=== compIP ===
Whereas '''IPP''' and '''QIP''' give more power to the verifier, a '''compIP''' system (''competitive IP proof system'') weakens the completeness condition in a way that weakens the prover:
* '''Completeness''': if a string is in the language ''L'', the honest verifier will be convinced of this fact by an honest prover with probability at least 2/3. Moreover, the prover will do so in probabilistic polynomial time given access to an oracle for the language ''L''.

Essentially, this makes the prover a '''[[BPP]]''' machine with access to an oracle for the language, but only in the completeness case, not the soundness case. The concept is that if a language is in '''compIP''', then interactively proving it is in some sense as easy as deciding it. With the oracle, the prover can easily solve the problem, but its limited power makes it much more difficult to convince the verifier of anything. In fact, '''compIP''' is not even known or believed to contain '''[[NP (complexity)|NP]]'''. On the other hand, such a system can solve some problems believed to be hard: it can easily solve all '''[[NP-complete]]''' problems due to self-reducibility.
Additionally, our earlier proof that graph nonisomorphism is in '''IP''' also shows that it is in '''compIP''', since the only hard operation the prover ever does is isomorphism testing, which it can use the oracle to solve. Quadratic non-residuosity and graph isomorphism are also in '''compIP'''. Shafi Goldwasser and Mihir Bellare. [http://www.cs.ucsd.edu/users/mihir/papers/compip.pdf The Complexity of Decision versus Search]. ''SIAM Journal on Computing'', Volume 23, No. 1. February 1994.

== Additional Sources ==
* Babai, L. Trading group theory for randomness. ''Proceedings of the 17th ACM Symposium on the Theory of Computation''. ACM, New York, 1985, pp. 421-429.
* Shafi Goldwasser, Silvio Micali, and Charles Rackoff. [http://portal.acm.org/citation.cfm?id=63434 The knowledge complexity of interactive proof-systems]. ''Proceedings of the 17th ACM Symposium on the Theory of Computation'', Providence, Rhode Island, 1985, pp. 291-304. [http://theory.lcs.mit.edu/~cis/pubs/shafi/1985-stoc.pdf Extended abstract]
* Shafi Goldwasser and Michael Sipser. [http://theory.lcs.mit.edu/~cis/pubs/shafi/1986-stoc.pdf Private coins versus public coins in interactive proof systems]. ''Proceedings of the 18th Annual ACM Symposium on Theory of Computation''. ACM, New York, 1986, pp. 59-68.
* Lund, C., Fortnow, L., Karloff, H., Nisan, N. Algebraic methods for interactive proof systems. ''Proceedings of the 31st Symposium on the Foundations of Computer Science''. IEEE, New York, 1990, pp. 2-10.
* Adi Shamir. [http://portal.acm.org/citation.cfm?doid=146585.146609 IP = PSPACE]. ''Journal of the ACM'', volume 39, issue 4, pp. 869-877. October 1992.
* Sipser, Michael. ''Introduction to the Theory of Computation''. Boston, 1997, pp. 392-399.
* Complexity Zoo: [http://qwiki.caltech.edu/wiki/Complexity_Zoo#ip IP], [http://qwiki.caltech.edu/wiki/Complexity_Zoo#mip MIP], [http://qwiki.caltech.edu/wiki/Complexity_Zoo#ipp IPP], [http://qwiki.caltech.edu/wiki/Complexity_Zoo#qip QIP], [http://qwiki.caltech.edu/wiki/Complexity_Zoo#qip2 QIP(2)], [http://qwiki.caltech.edu/wiki/Complexity_Zoo#compip compIP], [http://qwiki.caltech.edu/wiki/Complexity_Zoo#frip frIP]