[Solved]: Prove or refute: BPP(0.90,0.95) = BPP

Problem Detail: I’d really like your help with proving or refuting the following claim: $BPP(0.90,0.95)=BPP$. In computational complexity theory, BPP (bounded-error probabilistic polynomial time) is the class of decision problems solvable by a probabilistic Turing machine in polynomial time, with an error probability of at most $\frac{1}{3}$ on all instances; that is, $BPP=BPP(\frac{1}{3},\frac{2}{3})$. It is not immediate that either set is a subset of the other, since a probability of error smaller than $0.90$ need not be smaller than $\frac{1}{3}$, and a probability bigger than $\frac{2}{3}$ need not be bigger than $0.95$. I’m trying to use Chernoff’s inequality to prove the claim, but I’m not sure exactly how. I’d really like your help. Is there a general claim regarding these relations that I can use?

Asked By : Numerator

Answered By : Steven Stadnicki

To expand my comment into an answer: the multiplicative form of Chernoff’s bound says that if $X=\sum_{i=1}^n X_i$ is a sum of independent random binary variables with expectation $\mu = E(X)$, then the probability of straying ‘too far’ above that expectation obeys

$$\Pr\bigl(X > (1+\delta)\mu\bigr) < \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^\mu.$$

Now, imagine a procedure where, given a string $\sigma$ to test, we run $n$ trials of our $BPP(0.90, 0.95)$ algorithm (for some $n$ to be chosen later) and accept iff more than $0.925n$ of those trials accept $\sigma$. We can use Chernoff’s bound to find the probability of failure in terms of $n$ as follows. Let $X_i$ denote the outcome of the $i$th trial, so that $X=\sum X_i$ is the number of trials that accept. We can conservatively assume that our probability of a false positive is exactly $0.9$; this means that if we make $n$ independent trials on a string $\sigma\notin L$, the expected number of acceptances is $\mu = E(X) = 0.9n$. (Note that a false-positive probability less than $0.9$ leads to an even lower expected value, and thus to even tighter bounds in the estimates to come.)

Now, let’s look at the probability that we get more than $0.925n$ false positives (i.e., that $X > 0.925n$). We take $\delta = \frac{0.925}{0.9}-1 = \frac{1}{36}$; then $\frac{e^\delta}{(1+\delta)^{(1+\delta)}} \approx 0.99961 < \frac{2999}{3000}$, and so we have $\Pr(X > 0.925n) < \left(\frac{2999}{3000}\right)^{0.9n}$. From here it should be clear that by taking $n$ large enough we can reduce this probability below $\frac{1}{3}$. Thus, for this sufficiently large $n$, if we accept the string $\sigma$ only when the number of accepting trials exceeds $0.925n$, our probability of accepting a string $\sigma\notin L$ drops below $\frac{1}{3}$. Note that this $n$ is a constant, not dependent on the problem size; since we run our polynomial-time $BPP(0.9, 0.95)$ algorithm a constant number of times, the total running time of the new procedure is still polynomial.

A similar analysis going the other direction shows that the probability of a ‘false negative’ (that $X < 0.925n$ for a string that is in our language) is bounded by $c^n$ for some constant $c < 1$, so again we can take $n$ large enough to bound the probability of a false negative by $\frac{1}{3}$ (or, in other words, to ensure at least a $\frac{2}{3}$ probability of accepting a string $\sigma\in L$). This shows that $BPP(0.9, 0.95)\subseteq BPP(\frac{1}{3}, \frac{2}{3}) = BPP$, and Yuval’s comment shows how to prove the reverse inclusion through a similar procedure.

Canonically, this is known as probability amplification, and it is an immensely useful technique for dealing with probabilistic classes. The specific constants are less important, obviously, than the fact that Chernoff’s bound lets us bound our ‘false result’ probabilities by an exponentially decaying function of the number of trials, so they can be made arbitrarily small with only a modest number of trials.
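To make “taking $n$ large enough” concrete, here is the one line of arithmetic the answer leaves to the reader, using only the constants already derived above:

$$\left(\frac{2999}{3000}\right)^{0.9n} < \frac{1}{3} \iff 0.9n\,\ln\frac{2999}{3000} < -\ln 3 \iff n > \frac{\ln 3}{0.9\,\ln\frac{3000}{2999}} \approx 3661.5,$$

so $n = 3662$ trials already suffice on the false-positive side, independent of the input length.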
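For readers who prefer to see the procedure as code, below is a minimal sketch of the amplification wrapper. Here `weak_decider` is a hypothetical stand-in (not from the original post) for a one-shot $BPP(0.90, 0.95)$ algorithm, and the trial count is computed from the $\left(\frac{2999}{3000}\right)^{0.9n}$ bound above.

```python
import math
import random

def trials_needed(target=1/3):
    """Smallest n with (2999/3000)**(0.9*n) < target, per the Chernoff
    bound derived above (~3662 for target = 1/3)."""
    return math.ceil(math.log(target) / (0.9 * math.log(2999 / 3000)))

def amplified_decider(sigma, weak_decider, n):
    """Run n independent trials; accept iff more than 0.925*n of them accept."""
    accepts = sum(weak_decider(sigma) for _ in range(n))
    return accepts > 0.925 * n

if __name__ == "__main__":
    n = trials_needed()
    print(f"n = {n} trials suffice on the false-positive side")

    # Toy worst case for a string not in L: each trial (falsely) accepts
    # with probability exactly 0.90.
    not_in_L = lambda _sigma: random.random() < 0.90
    false_pos = sum(amplified_decider("x", not_in_L, n) for _ in range(100))
    print(f"empirical false positives: {false_pos}/100")
```

The false-negative side would be handled symmetrically; the point of the sketch is only that $n$ is a constant, so the wrapper runs the underlying polynomial-time algorithm a constant number of times.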

Question Source : http://cs.stackexchange.com/questions/7820
