[Solved]: Why is the reduction from Vertex-Cover to Subset-Sum polynomial time?

Problem Detail: In the standard proof that Subset-Sum is (weakly) NP-complete, one reduces Vertex Cover to Subset-Sum using suitable numbers with O(m+n) bits (where m is the number of edges and n the number of vertices). But how can we talk about a polynomial-time reduction if we generate numbers of exponential value? I guess this is the key to why Vertex Cover is strongly NP-complete and Subset-Sum only weakly NP-complete, but I don't see why it is in fact a polynomial-time reduction.

Asked By : user1742364

Answered By : Luke Mathieson

The key is indeed that the numbers are expressed in binary. As you observe, the numbers can have values that are exponential in the size of the Vertex Cover instance. If the reduction "wrote them out" in unary when computing the Subset-Sum instance, this would take exponential time. In binary, however, writing down the bits of each number (i.e. computing its representation in the Subset-Sum instance) takes only polynomial time. To emphasise the distinction: it takes only 7 digits to write out 1,000,000 in base ten, so writing it down takes about 7 steps, whereas writing it in unary would take about a million steps. The same thing happens in the reduction. If it used unary, the output would not be polynomial in the size of the graph; in binary it is, because what we are measuring is the number of steps taken to write the numbers down.
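To make the size bound concrete, here is a minimal Python sketch (not part of the original answer) of the textbook Vertex-Cover-to-Subset-Sum construction. The function name, variable names, and the small example graph are illustrative assumptions; the point is that each constructed number has only O(m) digits, even though its value is exponential in m.

```python
# Sketch of the standard Vertex-Cover -> Subset-Sum construction.
# Numbers are built in base 4 with m+1 digits, where m is the number of edges,
# so their binary representations have O(m) bits even though their values
# are exponential in m.

def vc_to_subset_sum(n, edges, k):
    """Return (numbers, target) for a graph on vertices 0..n-1 with m = len(edges)
    and vertex-cover budget k."""
    m = len(edges)

    # One number per vertex: a leading digit 1 in position m, plus a digit 1
    # in position j for every edge j incident to that vertex.
    vertex_numbers = []
    for v in range(n):
        a = 4 ** m
        for j, (u, w) in enumerate(edges):
            if v in (u, w):
                a += 4 ** j
        vertex_numbers.append(a)

    # One "slack" number per edge, so an edge covered by only one chosen vertex
    # can still reach digit value 2 in the target.
    edge_numbers = [4 ** j for j in range(m)]

    # Target: k in the leading position, digit 2 in every edge position.
    target = k * 4 ** m + sum(2 * 4 ** j for j in range(m))

    return vertex_numbers + edge_numbers, target


if __name__ == "__main__":
    # Illustrative 4-cycle; it has a vertex cover of size 2.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    numbers, target = vc_to_subset_sum(4, edges, k=2)

    # Writing each number in binary takes O(m) steps...
    print("bits per number:", [x.bit_length() for x in numbers], target.bit_length())
    # ...whereas unary would have to spell out the values themselves,
    # which grow exponentially in m.
    print("values:", numbers, target)
```

Counting the output size this way is exactly the answer's point: the reduction is measured by the number of symbols it writes down, which is polynomial in binary but would be exponential in unary.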

Question Source : http://cs.stackexchange.com/questions/24078
