What makes an optimization problem difficult?

…there is always an easy solution to every human problem — neat, plausible, and wrong.
- H. L. Mencken, The Divine Afflatus, in A Mencken Chrestomathy

What makes an optimization problem difficult? I don’t really have the right answer, but here are some answers that are neat, plausible, and wrong.


Wrong answer #1: optimization problems are difficult when there are many feasible solutions.

Why not? This may be the most common misconception. The traveling salesman problem is (thought to be) a difficult problem, and it also has exponentially many feasible solutions. Sometimes this leads people to think that it is hard BECAUSE there are so many possible choices. Of course the brute-force algorithm will not work well in this case, but brute force is not the only option. If many feasible solutions implied hardness, then the minimum spanning tree problem would be hard: by Cayley's formula, there are n^{n-2} spanning trees in a complete graph on n vertices. Linear programs provide an even more extreme example; they can have infinitely many feasible solutions. (Mike Trick also has a blog post on this.)
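
To make this concrete, here is a minimal sketch (in Python, standard library only) of Kruskal's algorithm, which finds a minimum spanning tree in O(m log m) time no matter how many spanning trees the graph has:

```python
import itertools
import random

def kruskal_mst(n, edges):
    """Kruskal's algorithm on vertices 0..n-1, given edges as (w, u, v)
    tuples.  Runs in O(m log m) time, even though the complete graph
    on n vertices has n^(n-2) spanning trees (Cayley's formula)."""
    parent = list(range(n))

    def find(u):                      # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    mst, total = [], 0.0
    for w, u, v in sorted(edges):     # scan edges by increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                  # adding (u, v) creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# Complete graph on 10 vertices: 10^8 spanning trees, solved instantly.
random.seed(0)
n = 10
edges = [(random.random(), u, v) for u, v in itertools.combinations(range(n), 2)]
tree, weight = kruskal_mst(n, edges)
print(len(tree), "edges; total weight", round(weight, 3))
```

The greedy exchange argument, not an enumeration of the n^{n-2} candidate trees, is what does the work here.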


Wrong answer #2: optimization problems are difficult when there are many constraints.

Why not? There are many problems that have exponentially many constraints (or more!) yet can be solved in polynomial time. This is the case for linear programs that admit efficient separation algorithms, where the ellipsoid method provides a (theoretically, but perhaps not practically) efficient algorithm. For example, consider optimizing over the subtour elimination relaxation of the traveling salesman problem. There are \Theta(2^n) subtour elimination constraints, but they can be separated in polynomial time using a min-cut algorithm. (Moreover, there are polynomial-size extended formulations for the subtour elimination polytopes.)
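
As an illustration, here is a hedged sketch of that separation routine, assuming the networkx library is available (its stoer_wagner function computes a global minimum cut):

```python
import networkx as nx

def separate_subtour(n, xstar, tol=1e-6):
    """Given a fractional point x* (a dict mapping edges (u, v) to values),
    look for a violated subtour elimination constraint x(delta(S)) >= 2.
    A global minimum cut in the support graph, weighted by x*, finds the
    most violated one -- so all Theta(2^n) constraints are checked with a
    single min-cut computation."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for (u, v), val in xstar.items():
        if val > tol:
            G.add_edge(u, v, weight=val)
    if not nx.is_connected(G):
        # Disconnected support: any component S has x(delta(S)) = 0 < 2.
        return sorted(next(iter(nx.connected_components(G)))), 0.0
    cut_value, (S, _) = nx.stoer_wagner(G)   # global min cut
    if cut_value < 2 - tol:
        return sorted(S), cut_value          # violated constraint found
    return None, cut_value                   # all 2^n constraints hold

# Toy fractional point: two triangles joined by a single half-weight edge.
x = {(0, 1): 1, (1, 2): 1, (0, 2): 1, (3, 4): 1, (4, 5): 1, (3, 5): 1,
     (2, 3): 0.5}
S, val = separate_subtour(6, x)
print("violated set:", S, "with x(delta(S)) =", val)   # 0.5 < 2
```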


Wrong answer #3: optimization problems are difficult when there is no small linear program that represents their feasible solutions.

Why not? A counterexample to the claim is the matching problem. This problem is polynomial-time solvable (by Edmonds' blossom algorithm), but its usual polyhedral representation admits no small linear programming formulation; indeed, Rothvoss proved that the matching polytope has exponential extension complexity. Perhaps this belief is caused by a mistaken interpretation of the fact that linear programming is P-complete. Note that P-hardness is defined with respect to a more restrictive type of reduction than NP-hardness; the former is usually defined with respect to log-space reductions.
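
For instance, maximum-weight matching can be solved directly with off-the-shelf combinatorial code. The sketch below assumes networkx, whose matcher implements Edmonds' blossom algorithm:

```python
import networkx as nx

# Maximum-weight matching is polynomial-time solvable (Edmonds' blossom
# algorithm), even though the matching polytope admits no polynomial-size
# LP formulation.
G = nx.Graph()
G.add_weighted_edges_from([
    (0, 1, 4), (1, 2, 3), (2, 3, 5), (3, 0, 2), (0, 2, 1),
])
matching = nx.max_weight_matching(G)   # combinatorial, no LP involved
print(sorted(tuple(sorted(e)) for e in matching))   # [(0, 1), (2, 3)]
```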


Wrong answer #4: optimization problems are difficult when there are many variables.

Why not? Consider a linear program that has exponentially many constraints but can be solved using the ellipsoid method with an efficient separation procedure. Its dual LP has exponentially many variables, yet it, too, is polynomial-time solvable.
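
Here is a toy illustration, assuming scipy is available. We maximize c^T x over the cross-polytope \{x : s^T x \leq 1 for all s \in \{-1,+1\}^n\}, which has 2^n constraints in the primal and hence 2^n variables in the dual; a cutting-plane loop (column generation, viewed from the dual side) only ever touches a handful of them:

```python
import numpy as np
from scipy.optimize import linprog

# Separation is trivial here: the most violated constraint at x is
# s = sign(x).  So we solve the LP lazily, never writing down more than
# a few of the 2^n constraints (equivalently, dual variables).
n = 12
rng = np.random.default_rng(0)
c = rng.normal(size=n)

A, b = [], []                       # constraints generated so far
bounds = [(-1, 1)] * n              # keeps the initial LP bounded
while True:
    res = linprog(-c, A_ub=np.array(A) if A else None,
                  b_ub=np.array(b) if A else None, bounds=bounds)
    x = res.x
    s = np.sign(x); s[s == 0] = 1   # separation oracle: most violated cut
    if s @ x <= 1 + 1e-8:           # nothing violated: x is optimal
        break
    A.append(s); b.append(1.0)

print(f"used {len(A)} of {2**n} constraints; optimum = {c @ x:.4f}")
print(f"analytic optimum max|c_i| = {np.abs(c).max():.4f}")   # sanity check
```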


Wrong answer #5: optimization problems are difficult when the constraints/objective are nonlinear.

Why not? In the words of Tyrrell Rockafellar, “the great watershed in optimization isn’t between linearity and nonlinearity, but convexity and nonconvexity.” Indeed, the class of convex optimization problems has nice properties that, in many cases, lead to efficient algorithms, even when the problems are nonlinear.
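
As a small sketch (assuming scipy and numpy are available), a generic gradient-based method reliably finds the global optimum of a smooth convex problem, because every local minimizer of a convex function is global:

```python
import numpy as np
from scipy.optimize import minimize

# A nonlinear but convex objective: log-sum-exp of affine terms plus a
# quadratic.  Convexity guarantees that the local minimizer BFGS returns
# is in fact the global one, nonlinearity notwithstanding.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)

def f(x):
    z = A @ x + b
    return np.log(np.sum(np.exp(z))) + 0.5 * x @ x   # convex in x

res = minimize(f, x0=np.zeros(5), method="BFGS")
print("global minimum value:", round(res.fun, 4))
```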


Wrong answer #6: optimization problems are difficult when they are nonconvex.

Why not? One counterexample to the claim is the problem of minimizing a (possibly indefinite) quadratic function subject to an ellipsoid constraint, known as the trust-region subproblem. This problem is nonconvex, but it is polynomial-time solvable.
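
Below is a hedged sketch of one classical solution approach, via an eigendecomposition and bisection on a one-dimensional secular equation; numpy is assumed, and the degenerate “hard case” (where b is orthogonal to the bottom eigenspace of A) is ignored for brevity:

```python
import numpy as np

def trust_region_subproblem(A, b, delta=1.0, tol=1e-10):
    """Globally minimize q(x) = 0.5 x'Ax + b'x subject to ||x|| <= delta.
    Nonconvex when A is indefinite, yet polynomial-time solvable: the
    global optimum satisfies (A + lam*I) x = -b with A + lam*I PSD and
    lam >= 0.  We find lam by bisection on ||x(lam)|| = delta."""
    evals, Q = np.linalg.eigh(A)          # A = Q diag(evals) Q'
    beta = Q.T @ b

    def xnorm(lam):                       # ||x(lam)|| for lam > -min eigenvalue
        return np.linalg.norm(beta / (evals + lam))

    # Interior case: A positive definite and unconstrained min inside ball.
    if evals.min() > 0 and xnorm(0.0) <= delta:
        return Q @ (-beta / evals)

    # Otherwise the ball constraint is tight; bisect lam on a valid bracket.
    lo = max(0.0, -evals.min())                 # ||x|| > delta just above lo
    hi = lo + np.linalg.norm(b) / delta + 1.0   # ||x(hi)|| < delta
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        if xnorm(lam) > delta:
            lo = lam
        else:
            hi = lam
        if hi - lo < tol:
            break
    return Q @ (-beta / (evals + hi))

# Indefinite example: q is nonconvex, but we still get the global optimum.
A = np.diag([-2.0, 1.0, 3.0])
b = np.array([1.0, 1.0, 1.0])
x = trust_region_subproblem(A, b, delta=1.0)
print("x* =", np.round(x, 4), " ||x*|| =", round(float(np.linalg.norm(x)), 4))
```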


Some of these wrong answers are appealing partly because their converses are true. For example, the converses of wrong answers #2 and #4 hold for integer programs. Indeed, while integer programming is (thought to be) hard in general, it becomes polynomial-time solvable when we restrict ourselves to instances with a fixed number of variables (by Lenstra’s algorithm), and pseudo-polynomially solvable with a fixed number of constraints.

(I might add that the converse of wrong answer #1 does not seem to hold. For example, it is thought to be hard to find a second Hamiltonian cycle in a cubic graph, even though, by Smith’s theorem, a second one is guaranteed to exist whenever a first one does! Moreover, we can quickly check whether a number is prime, but actually factoring a composite number is (thought to be) hard for classical computers.)

The best answer I can give is that optimization problems are hard when they are sufficiently expressive*. This is essentially what we are showing when we prove that a problem is NP-hard—we are showing that we can express any problem from NP as an equivalent instance of our problem.

Of course, using NP-hardness as a justification for the hardness of a problem depends on NP actually having difficult problems in it. (I think it does.) At the very least, we can say that NP-hard optimization problems are “hard-ass”—Al Meyer’s creative abbreviation of “hard as satisfiability.”

___________

*Admittedly, this is sort of a non-answer. I think linear programming is pretty expressive, but it is polynomial-time solvable. So we should set the bar higher for being “sufficiently expressive.” If “sufficiently expressive” means “is able to express hard problems,” then this is a tautology. Any hard problem would be “sufficiently expressive” since it can be used to express instances of itself.
