We now formally describe our encryption systems. For building them we will assume the existence of a witness encryption scheme for an NP-Complete language $L$ for which there exists a Karp–Levin reduction. We focus on presenting the constructions in this section and defer the proofs to Appendix A.
Other uses of the term “Karp–Levin reduction” can easily be found elsewhere. From slide 11 of this presentation on PCPs by Mario Szegedy, it seems that a Karp–Levin many-one reduction is just a Levin many-one reduction. Is that correct? Update: To explain why I am uncertain about Yuval Filmus’ answer below, and why I am asking this question, I quote from page 17-4 of the “Hardness of Approximation” chapter of the Handbook of Approximation Algorithms and Metaheuristics, a chapter written by Mario Szegedy (a Gödel Prize winner for his contributions to the PCP theorem):
Reduction is perhaps the most useful concept in algorithm design. Interestingly, it also turns out to be the most useful tool in proving computational hardness [21-23]. When a problem $A$ Cook reduces to $B$, the hardness of $B$ follows from the hardness of $A$. Unfortunately, Cook reduction does not ensure that if $A$ is hard to approximate then $B$ is hard to approximate. For reducing hardness of approximation, new definitions are necessary. Let $F_1(x, y)$ and $F_2(x', y')$ be functions that are optimized over $y$ and $y'$ (maximized or minimized in an arbitrary combination). Let $OPT_1(x)$ and $OPT_2(x')$ be the corresponding optima. A Karp–Levin reduction involves two maps:
- a polynomial-time map $f$ to transform instances $x$ of $OPT_1$ into instances $x' = f(x)$ of $OPT_2$ [Instance Transformation];
- a polynomial-time map $g$ to transform (input, witness) pairs $(x', y')$ of $OPT_2$ into witnesses $y$ of $OPT_1$ [Witness Transformation].
Observe that the witness transformation goes from $OPT_2$ to $OPT_1$. Let $opt_1 = OPT_1(x)$, $opt_2 = OPT_2(f(x))$, $appr_1 = F_1(x, g(f(x), y'))$, and $appr_2 = F_2(f(x), y')$.
So Yuval Filmus’ answer below appears to cover only the “[Instance Transformation]” part of a Karp–Levin reduction, and not the “[Witness Transformation]” part. The latter seems interpretable as a way to recover a certificate for a problem $A$ in $NP$ that has been reduced to a problem $B$ in $NP$, given a certificate for $B$. Am I off the mark here? And how is a Karp–Levin reduction different from a Levin reduction? (I have tried to make the two maps concrete in the small sketch below.) If there is non-standard terminology, or if the definitions are supposed to be “understood” depending on whether one is talking about reductions among, say, counting problems where witness transformations matter, please understand that this is intimidating for someone coming from outside of computer science! People’s patience here is very much appreciated.
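To make the two maps concrete for myself, here is a toy sketch of my current understanding, using the classic reduction from CLIQUE to INDEPENDENT SET. This is my own example, not taken from the Handbook; the names `f` and `g` simply mirror the notation quoted above. Please correct me if it misrepresents the definition.

```python
from itertools import combinations


def f(instance):
    """Instance transformation: a CLIQUE instance (V, E, k) becomes an
    INDEPENDENT-SET instance on the complement graph, with the same k."""
    vertices, edges, k = instance
    all_pairs = {frozenset(p) for p in combinations(vertices, 2)}
    complement_edges = all_pairs - {frozenset(e) for e in edges}
    return vertices, complement_edges, k


def g(transformed_instance, witness):
    """Witness transformation: an independent set of the complement graph
    is exactly a clique of the original graph, so the witness maps back
    unchanged (note the direction: from the target problem back to the source)."""
    return set(witness)


# Toy usage: a triangle on {1, 2, 3} plus the extra edge {3, 4}.
V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (1, 3), (3, 4)]

clique_instance = (V, E, 3)
is_instance = f(clique_instance)   # instance of INDEPENDENT SET
is_witness = {1, 2, 3}             # an independent set in the complement graph
clique_witness = g(is_instance, is_witness)
print(clique_witness)              # {1, 2, 3}: indeed a clique of size 3 in the original graph
```

If I am reading the quoted definition correctly, the instance transformation corresponds to `f`, the witness transformation to `g`, and it is the presence of both maps that makes this a Levin (and, I assume, Karp–Levin) reduction rather than a plain Karp reduction.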
Asked By : user26418
Answered By : Yuval Filmus
Best Answer from Stack Exchange
Question Source : http://cs.stackexchange.com/questions/35971