Problem Detail: Given three numbers $m$, $n$ and $p$ in interleaved binary encoding[1], it's obviously possible to check in $O(1)$ space whether $m+n=p$. It's less obvious[2] that it isn't possible to check in $O(1)$ space whether $m \cdot n = p$. I wonder whether one can prove that it isn't possible to check in $O(\log N)$ space[3] whether $m \cdot n = p$. On the other hand, are there any known non-trivial upper bounds on the space complexity of this problem, like $O(N/\log N)$?

The problem described above is a simplified version of the "Multiplication decision problem: Is the $k$th bit of the product of $m$ and $n$ a one?" Since this problem is more "powerful" (and also better known), I wonder whether one can show that it can't be decided in $O((\log N)^2)$ space.

[1] The interleaved binary encoding starts with the least significant bit, and allows "leading" zeros for the most significant bits.
[2] The proof idea I have in mind would use such an algorithm as a building block for a decision procedure for Robinson arithmetic. I call this less obvious because the fact that Robinson arithmetic is undecidable may be well known, but it still remains non-trivial.
[3] Here $N$ is the length of the input $m$, $n$ and $p$ in binary encoding.
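To make the $O(1)$-space claim for addition concrete, here is a minimal sketch in Python. It assumes the interleaved encoding is delivered as a stream of same-index bit triples $(m_i, n_i, p_i)$, least significant first; the triple format and the name `check_addition` are illustrative assumptions, not part of the question.

```python
def check_addition(triples):
    """Check m + n == p in O(1) space from the interleaved encoding.

    `triples` is an iterable of (m_bit, n_bit, p_bit) tuples, least
    significant bits first; delivering the interleaved stream as
    same-index triples is an assumption made for this sketch.
    """
    carry = 0
    for m_bit, n_bit, p_bit in triples:
        total = m_bit + n_bit + carry
        if total % 2 != p_bit:   # the current sum bit must match p's bit
            return False
        carry = total // 2       # only a single carry bit is retained
    return carry == 0            # a leftover carry means m + n != p

# Example: 3 + 5 = 8, LSB first, padded to four positions.
# m = 3 -> 1,1,0,0   n = 5 -> 1,0,1,0   p = 8 -> 0,0,0,1
assert check_addition([(1, 1, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])
```

Only the one-bit carry persists between iterations, which is exactly why this test runs in $O(1)$ space; no comparable invariant is known for multiplication, which is the point of the question.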
Asked By : Thomas Klimpel
Answered By : D.W.
You can do it in $O(\lg N)$ space, if you're willing to accept a randomized algorithm with an exponentially small probability of error. In practice, such an algorithm is every bit as good as a deterministic algorithm with no possibility of error.

The idea is to choose a random $\ell$-bit prime $\alpha$, reduce $m,n,p$ modulo $\alpha$, and then check whether the congruence $m \times n \equiv p \pmod{\alpha}$ holds. This test can be done with $O(\ell)$ bits of space (about $3\ell$ bits). Moreover, it suffices to take $\ell = O(\lg N)$. In particular, if you want the probability of error to be at most $1/2^t$, it suffices to take $\ell = O(t + \lg N)$. The constants hidden by the big-O notation are small (something like 3 or so).

Now take $t=100$, and you get a practical algorithm. The probability of error is $1/2^{100}$, which is so small as to be negligible. It's more likely that you have a cosmic-ray bit flip that causes your computation to give the wrong answer than that you get a wrong answer due to bad luck. (And cosmic-ray bit flips affect deterministic algorithms, too, so the notion of an algorithm with zero error is not something that can be implemented in the real world; the best we can hope for is for the probability of error to be negligible.)

The analysis relies upon the prime number theorem. I don't have time to write out the full analysis right now, but the basic idea is that the algorithm gives the wrong answer only if $\alpha$ divides $mn-p$. Since $|mn-p|$ is (at most) $2N$ bits long and each $\ell$-bit prime exceeds $2^{\ell-1}$, there can be at most about $2N/\ell$ distinct $\ell$-bit prime divisors of $mn-p$. Also, by the prime number theorem, there are about $2^{\ell}/\ell$ possible $\ell$-bit primes. Therefore, the probability that the randomly chosen $\alpha$ happens to divide $mn-p$ is at most about $(2N)/2^{\ell}$. Now solve for $\ell$, and you get the result I claimed above.

Importantly, the probability of error is taken only over the internal coin flips of the algorithm, not over the possible values of $m,n,p$. In other words, even if an adversary chooses the values of $m,n,p$, the probability of error will still be at most $1/2^t$.
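Spelling out the bound sketched above (the constants are only approximate): an error requires $mn \neq p$ yet $\alpha \mid mn - p$, so

$$\Pr[\text{error}] \;=\; \Pr[\alpha \mid mn-p] \;\le\; \frac{2N/\ell}{2^{\ell}/\ell} \;=\; \frac{2N}{2^{\ell}}.$$

Requiring $2N/2^{\ell} \le 1/2^{t}$ gives $2^{\ell} \ge 2N \cdot 2^{t}$, i.e. $\ell \ge t + \lg N + 1$, which is the claimed $\ell = O(t + \lg N)$.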
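Here is a minimal sketch of the fingerprinting test in Python. The Miller-Rabin routine, the rejection sampler, and the specific slack constant in the choice of $\ell$ are illustrative assumptions; a genuinely low-space implementation would stream the interleaved bits and maintain only the three residues modulo $\alpha$, whereas this sketch reduces the full integers directly for clarity.

```python
import random

def is_probable_prime(n, rounds=40):
    """Standard Miller-Rabin test; a composite passes with
    probability at most 4**-rounds."""
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n < 41:
        return n in small_primes
    if any(n % q == 0 for q in small_primes):
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_ell_bit_prime(ell):
    """Sample a uniformly random ell-bit (probable) prime by rejection."""
    while True:
        # Force the top bit (exactly ell bits) and the low bit (odd).
        candidate = random.getrandbits(ell - 1) | (1 << (ell - 1)) | 1
        if is_probable_prime(candidate):
            return candidate

def check_product(m, n, p, t=100):
    """Randomized check of m * n == p; errs with probability about 2**-t.

    An O(ell)-space implementation would update the three residues
    modulo alpha bit by bit as the input streams past; the direct `%`
    reductions below stand in for that loop.
    """
    bound = max(m.bit_length() + n.bit_length(), p.bit_length(), 2)
    ell = t + bound.bit_length() + 3  # ell = O(t + lg N); slack is an assumed constant
    alpha = random_ell_bit_prime(ell)
    return (m % alpha) * (n % alpha) % alpha == p % alpha
```

With $t = 100$ and, say, $1000$-bit operands, $\ell$ is only a little over a hundred, so the streaming version's entire state (the prime $\alpha$ plus three residues) fits in a few hundred bits.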
Question Source : http://cs.stackexchange.com/questions/16469