Problem Detail: Assume that I am a programmer and I have an NP-complete problem that I need to solve. What methods are available for dealing with NP-complete problems? Is there a survey or something similar on this topic?
Asked By : Anonymous
Answered By : Raphael
There are a number of well-studied strategies; which one is best for your application depends on the circumstances.
- Improve worst-case runtime
Using problem-specific insight, you can often improve on the naive algorithm. For instance, there are $O(c^n)$ algorithms for Vertex Cover with $c < 1.3$ [1]; this is a huge improvement over the naive $\Omega(2^n)$ and might make the instance sizes relevant to you tractable.
- Improve expected runtime
Using heuristics, you can often devise algorithms that are fast on many instances. If those include most of the instances you meet in practice, you are golden. Examples are SAT, for which quite involved solvers exist, and the Simplex algorithm (which solves a polynomial-time-solvable problem, but still). One basic technique that is often helpful is branch and bound.
- Restrict the problem
If you can make more assumptions about your inputs, the problem may become easy.
- Structural properties
Your inputs may have properties that simplify solving the problem, e.g. planarity, bipartiteness, or excluding a minor for graphs. See here for some examples of graph classes for which CLIQUE is easy.
- Bounding functions of the input
Another thing to look at is parameterised complexity; some problems are solvable in time $O(2^k n^m)$ for some instance parameter $k$ (maximum node degree, maximum edge weight, …) and some constant $m$. If you can bound $k$ by a logarithmic function of $n$ in your setting, you get polynomial-time algorithms. Saeed Amiri gives details in his answer.
- Bounding input quantities
Furthermore, some problems admit algorithms that run in pseudo-polynomial time, that is, their runtime is bounded by a polynomial function of a number that is part of the input; the naive primality check is an example. This means that if the quantities encoded in your instances are of reasonable size, you may have simple algorithms that behave well for you.
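To make the pseudo-polynomial idea concrete, here is a small illustrative sketch (not part of the original answer) using the classic dynamic program for Subset Sum. Its runtime is $O(n \cdot t)$, polynomial in the *value* of the target $t$ but exponential in its bit length, which is exactly what "pseudo-polynomial" means.

```python
def subset_sum(values, target):
    """Decide whether some subset of `values` sums to `target`.

    Runs in O(n * target) time and space: polynomial in the numeric
    value of the target, but exponential in its encoding length,
    hence pseudo-polynomial.
    """
    reachable = {0}  # sums attainable using a prefix of the items
    for v in values:
        # The comprehension is evaluated before |=, so iteration is safe.
        reachable |= {s + v for s in reachable if s + v <= target}
    return target in reachable
```

For example, `subset_sum([3, 34, 4, 12, 5, 2], 9)` returns `True` (take 4 + 5), and the algorithm is perfectly practical as long as `target` is not astronomically large.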
- Weaken the result
This means that you tolerate erroneous or incomplete results. There are two main flavors:
- Probabilistic algorithms
You only get the correct result with some probability. There are some variants, most notably Monte Carlo and Las Vegas algorithms. A famous example is the Miller–Rabin primality test.
- Approximation algorithms
You no longer look for optimal solutions but for almost-optimal ones. Some algorithms admit relative (“no worse than double the optimum”), others absolute (“no worse than the optimum plus $5$”) bounds on the error. For many problems it is open how well they can be approximated. There are some that can be approximated arbitrarily well in polynomial time, while others are known not to allow that; check the theory of polynomial-time approximation schemes.
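As an illustration of a relative bound (a hypothetical sketch, not from the original answer), here is the classic maximal-matching approximation for Vertex Cover, which guarantees a cover at most twice the size of the optimum:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal-matching 2-approximation for Vertex Cover.

    Repeatedly pick an edge whose endpoints are both still uncovered
    and add both endpoints to the cover.  The picked edges form a
    matching; any optimal cover must contain at least one endpoint of
    each matched edge, so this cover is at most twice the optimum.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On a star graph `[(0, 1), (0, 2), (0, 3)]` this returns a cover of size 2 (the endpoints of the first edge), while the optimum is the single center vertex: the factor-2 bound is tight even on simple instances.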
Refer to Algorithmics for Hard Problems by Hromkovič for a thorough treatment.
- [1] Simplicity is beauty: Improved upper bounds for vertex cover by Jianer Chen, Iyad A. Kanj, and Ge Xia (2005)
Best Answer from StackExchange
Question Source : http://cs.stackexchange.com/questions/1477