# Good characterizations and perfect graphs

A good characterization has both a short proof and a short refutation (i.e., belongs to $NP \cap coNP$). Jack Edmonds is a big believer in good characterizations, and he and fellow proselytizer Václav Chvátal did a lot of preaching. For the record, I support the hype. Good characterizations give an unreasonable amount of insight into a problem.

Good characterizations often foreshadow polynomial-time solvability. Linear programming (LP) is a famous example. Long before the ellipsoid method was shown to solve LPs in polynomial time, it was known that the question — is the optimal objective of this (maximization) LP at least $z$? — has both a short proof and a short refutation. Given a candidate solution, one can quickly check that it satisfies the constraints and that its objective is indeed at least $z$. For the short refutation, we have duality: simply provide a feasible solution to the dual problem with objective less than $z$.
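As a toy illustration (not from any of the papers discussed here), below is a self-contained check of such a primal-dual certificate pair for a made-up two-variable LP; the data $A, b, c$ and the candidate solutions are invented for the example.

```python
from fractions import Fraction as F

# Primal:  max c.x  s.t.  Ax <= b, x >= 0
# Dual:    min b.y  s.t.  A^T y >= c, y >= 0
A = [[F(1), F(1)],   # x1 + x2 <= 4
     [F(1), F(0)]]   # x1      <= 2
b = [F(4), F(2)]
c = [F(3), F(2)]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def primal_feasible(x):
    return all(xi >= 0 for xi in x) and all(dot(row, x) <= bi
                                            for row, bi in zip(A, b))

def dual_feasible(y):
    At = list(zip(*A))  # transpose of A
    return all(yi >= 0 for yi in y) and all(dot(col, y) >= ci
                                            for col, ci in zip(At, c))

x = [F(2), F(2)]  # candidate primal solution (short proof)
y = [F(2), F(1)]  # candidate dual solution (short refutation of anything > 10)

assert primal_feasible(x) and dual_feasible(y)
# Matching objectives certify optimality: two short certificates, no search.
assert dot(c, x) == dot(b, y) == 10
```

The two feasibility checks together with the matching objective values are exactly the "short proof" and "short refutation" the post describes.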

The maximum clique and vertex coloring problems have a similar sort of weak ‘duality’. That is, the clique number $\omega (G)$ and chromatic number $\chi (G)$ of a graph $G$ satisfy $\omega (G) \le \chi (G)$. For LPs there is strong duality; the objectives of the primal and dual LPs are equal whenever both are feasible. For clique and coloring, the ‘duality’ can be very weak. Indeed, there are triangle-free graphs (clique number two) with arbitrarily large chromatic number, e.g., via the Mycielski construction. Thus, we may not be able to refute the question — is the clique number at least $k$? — with a proper coloring. It seems that there will never be such a good characterization for either clique or coloring (unless coNP=NP).

But what happens if we restrict ourselves to the class of graphs for which $\omega (G) = \chi (G)$? In this case, we DO have a sort of strong duality. We can now quickly verify that $\omega (G) \ge k$ when given a clique of size $k$, and we can refute it (equivalently, refute $\chi (G) \ge k$) when given a proper $(k-1)$-coloring. So, we have our good characterization, and you know what? For this restricted class of graphs, we CAN solve maximum clique in polynomial time! If we further restrict ourselves to the class of perfect graphs, where $\omega (G')=\chi(G')$ for every vertex-induced subgraph $G'$ of $G$, then all sorts of mathematical beauty emerge.
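For tiny graphs, both invariants can be brute-forced, which makes the gap and the equality concrete. A minimal sketch: the 5-cycle is the smallest imperfect graph ($\omega = 2 < \chi = 3$), while the 4-cycle is perfect ($\omega = \chi = 2$). The helper names are my own.

```python
from itertools import combinations, product

def clique_number(n, edges):
    # largest vertex subset whose pairs are all edges
    E = set(map(frozenset, edges))
    for size in range(n, 0, -1):
        for S in combinations(range(n), size):
            if all(frozenset(p) in E for p in combinations(S, 2)):
                return size

def chromatic_number(n, edges):
    # smallest k admitting a proper k-coloring (exhaustive search)
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return k

C5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # odd hole: not perfect
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]          # even cycle: perfect

assert (clique_number(5, C5), chromatic_number(5, C5)) == (2, 3)
assert (clique_number(4, C4), chromatic_number(4, C4)) == (2, 2)
```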

This leads me to the research topic generator: First look for a good characterization. If you can’t find one, artificially impose it and see what happens.

# Jon Lee on the ‘House’ of Combinatorial Optimization

This is the house that Jack built. Ralph prepared the lot. There were many independent contractors who did beautiful work; some putting on splendid additions. Martin, Laci, and Lex rewired the place. The work continues. But this is the house that Jack built.

-Jon Lee, in the preface of A First Course in Combinatorial Optimization

# On false proofs of P=NP, or how not to use induction

The power set of an $n$-set has $2^n$ elements. This isn’t that complicated. However, mimicking some proofs purporting to show P=NP, I can (incorrectly!) show that the power set grows polynomially in $n$. Heck, I could even show that the power set has a constant number of elements.

It is not hard to see that the number $T(n)$ of elements of the power set of an $n$-set satisfies the recurrence $T(n)=2T(n-1)$. I’d like to claim that $T(n)$ grows polynomially in $n$. In the base case $T(1)=2$. Good so far. Now for the inductive step, suppose that the number of elements of the power set is polynomial for $n$. Then $T(n+1)=2T(n)=2n^{O(1)}=(n+1)^{O(1)}$.  QED.
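Unrolling the recurrence shows what actually happens: $T(n) = 2^n$, and for any *fixed* exponent $c$, the ratio $T(n)/n^c$ eventually blows up. A quick numerical sanity check (invented for this post):

```python
def T(n):
    # T(1) = 2, T(n) = 2*T(n-1)  =>  T(n) = 2**n
    t = 2
    for _ in range(n - 1):
        t *= 2
    return t

assert T(20) == 2**20

# If T(n) were n^c for some fixed c, then T(n) / n**c would stay bounded.
# It doesn't, for any c: the exponent hidden in n^{O(1)} must grow at every
# inductive step, which is exactly the flaw in the "proof".
c = 10
r = [T(n) / n**c for n in (20, 40, 80, 160)]
assert all(r[i] < r[i + 1] for i in range(3))  # strictly diverging
```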

Of course there’s a huge problem in that proof (the exponent hidden in the $O(1)$ is not a fixed constant; it quietly grows with every inductive step), but that hasn’t stopped others from publishing essentially the same thing. For example, see A polynomial-time algorithm for the maximum clique problem. For those looking to find mistakes in purported P versus NP proofs, here’s a treasure trove.

# Great papers: The Complexity of Facets (and Some Facets of Complexity)

I’ve decided to read and summarize one great paper a week (or month, or semester…) from this list. This week’s paper is the 1984 paper The Complexity of Facets (and Some Facets of Complexity) by Christos H. Papadimitriou and Mihalis Yannakakis. Also see an earlier and shorter version of the paper in the proceedings from STOC ’82.

Summary:

Papadimitriou and Yannakakis consider a particular class of problems called DP. A problem in this class is the intersection of two parts: one belonging to NP and the other to coNP. The authors show that a number of interesting problems are complete for the class DP.

An example of a DP problem is: given a graph is the size of its maximum clique exactly $k$? The NP part is — is the maximum clique size at least $k$? (This belongs to NP because, given a clique of size $k$, one can easily confirm that it is a solution.) The coNP part is — is the maximum clique size at most $k$? (This belongs to coNP since the claim “the maximum clique size is at most $k$” can be refuted by providing a larger clique.) This EXACT CLIQUE problem turns out to be complete for the class DP.
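Both certificate checks are easy to sketch for tiny graphs. In the code below (an invented example, not from the paper), brute force stands in for the coNP side; the actual refutation of "max clique $\le k$" would just be a single claimed clique of size $k+1$.

```python
from itertools import combinations

def is_clique(edges, S):
    # NP-side check: a claimed clique is verified in polynomial time
    E = set(map(frozenset, edges))
    return all(frozenset(p) in E for p in combinations(S, 2))

def no_clique_of_size(n, edges, k):
    # coNP-side claim "max clique <= k-1", verified here by brute force;
    # a refutation would simply exhibit one clique of size k
    return not any(is_clique(edges, S) for S in combinations(range(n), k))

# K4 minus one edge: the maximum clique size is exactly 3
G = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
assert is_clique(G, (0, 1, 2))      # proof that omega >= 3
assert no_clique_of_size(4, G, 4)   # omega <= 3, so omega is exactly 3
```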

Several notable types of DP problems are mentioned:

1. Facets. For example, given an inequality and a graph, does the inequality induce a facet of the graph’s clique polytope? The NP part is to provide the proper number of affinely independent integer feasible points that satisfy the inequality at equality, and the coNP part is to verify that the inequality is actually valid.
2. Critical problems. For example, is a graph maximally non-Hamiltonian (meaning it has no Hamiltonian cycle, but the addition of any edge makes it Hamiltonian)?
3. Exact problems. For example, is the maximum clique size exactly $k$?
4. Unique solution problems. For example, does a given boolean formula have exactly one satisfying assignment?
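The NP part of the facet question (item 1 above) boils down to checking affine independence of the tight points, which can be done exactly with rational arithmetic. A hedged sketch, where the inequality and points are toy examples of my own, not from the paper:

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals (exact arithmetic)
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def affinely_independent(points):
    # p_1, ..., p_m are affinely independent iff the differences
    # p_2 - p_1, ..., p_m - p_1 are linearly independent
    p0 = points[0]
    diffs = [[a - b for a, b in zip(p, p0)] for p in points[1:]]
    return rank(diffs) == len(points) - 1

# x1 + x2 <= 1 holds with equality at (1,0) and (0,1): two affinely
# independent tight points, the right count for a facet in dimension 2
assert affinely_independent([(1, 0), (0, 1)])
assert not affinely_independent([(0, 0), (1, 1), (2, 2)])
```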

The majority of the paper is spent providing reductions to show that many problems are DP-complete. Of all the problems mentioned, these may be of the most interest to the OR community:

1. TSP FACETS. Does a given inequality induce a facet of the TSP polytope?
2. CLIQUE FACETS. Does a given inequality induce a facet of the clique polytope of a given graph?
3. EXACT TSP. Is the optimal TSP tour of length exactly $k$?
4. TSP SUPPORTING HYPERPLANE. Is a given inequality valid, and does it have nonempty intersection with the TSP polytope?
5. CRITICAL INTEGER PROGRAMMING. Given a system of linear inequalities, is it true that it has no integer solution, but the removal of any one constraint makes the system integer feasible?

Another problem — Interior Points — is shown to be NP-complete. This problem asks: given a point and a polyhedron, does the point belong to the polyhedron’s integer hull?

Next up: J. Edmonds, Paths, Trees, and Flowers, Canadian Journal of Mathematics 17 (1965), 449.

# Scott Aaronson and “Quantum Computing since Democritus”

I’ve been coming across a lot of Scott Aaronson’s writing lately.

I enjoyed what I read, so I picked up his book from earlier this year entitled Quantum Computing since Democritus. I’ve only read the preface so far, and a few things that he said stuck out.

He remarks that when he was preparing the lectures (that eventually became the book), to him “it was self-evident that the human brain is nothing other than a ‘hot, wet Turing machine.’” He tries to make other connections between computational complexity theory and “the big questions”:

…computational complexity lets us systematically take ‘deep philosophical mysteries’ about the limits of knowledge, and convert them into ‘merely’ insanely difficult unsolved mathematical problems…[such as the P versus NP problem]

Here’s a presentation that he gave for TEDxCaltech on computational complexity with a Richard Feynman theme.

# A minimal version of Stewart’s cube

Via John D. Cook:

This is a drawing of Stewart’s cube.  At each corner, the sum of the weights of the incoming edges is 83. Each edge has a prime weight, and all the weights are distinct.

I wondered whether Stewart’s cube achieves its structure in the cheapest way. I considered two objectives: (1) minimize the common sum at each vertex, and (2) minimize the largest edge weight. In both cases, the following cube is optimal.

It improves the first objective from 83 to 77, and the second from 61 to 53. To solve the problems, I modeled them as integer programs and used Gurobi as the solver. I’m currently running code for higher-dimensional cubes.
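For what it’s worth, verifying a candidate labeling needs no solver at all. A sketch of the three checks (the cube encoding as bitstrings and the helper names are my own):

```python
from itertools import combinations

def cube_edges(d=3):
    # vertices are bitstrings of length d; edges join Hamming-distance-1 pairs
    V = range(2 ** d)
    return [(u, v) for u, v in combinations(V, 2) if bin(u ^ v).count('1') == 1]

def is_prime(p):
    return p >= 2 and all(p % q for q in range(2, int(p ** 0.5) + 1))

def stewart_property(d, weight):
    # weight: dict mapping each edge to its label
    sums = {v: sum(w for (a, b), w in weight.items() if v in (a, b))
            for v in range(2 ** d)}
    return (all(is_prime(w) for w in weight.values())
            and len(set(weight.values())) == len(weight)  # distinct labels
            and len(set(sums.values())) == 1)             # equal vertex sums

# sanity check on a labeling that is NOT a Stewart cube: uniform weights give
# primality and equal vertex sums, but fail the distinctness condition
uniform = {e: 13 for e in cube_edges(3)}
assert not stewart_property(3, uniform)
```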

/* EDIT */

Gurobi had problems with higher-dimensional cubes: the LP relaxation is weak, and my formulation had a lot of symmetry.

# Great papers: The ellipsoid method and its consequences in combinatorial optimization

I’ve decided to read and summarize one great paper a week (or month…) from this list. This week’s paper is the 1981 paper The ellipsoid method and its consequences in combinatorial optimization by Martin Grötschel, László Lovász, and Alexander Schrijver. Also see the corrigendum. Jeff Linderoth gives a nice overview in his lecture slides.

Summary:

Of all the papers on this list, this may be my favorite. Its profundity is staggering—illustrating that “separation = optimization.” It showed that linear programs with exponentially many constraints can be solved in polynomial time, provided the separation problem can be solved in polytime. The separation problem asks: Given $x\in \mathbb{Q}^n$, does $x$ satisfy all constraints? If not, return a valid inequality that $x$ violates. What if the linear program has exponentially many variables, but polynomially many constraints? Take the dual!

This approach gives a polynomial time algorithm to optimize over the subtour elimination polytope for TSP, even though there are exponentially many constraints. Jeff Linderoth, in his lecture slides, claims that this is the only known polytime algorithm for this task. Michael Trick confirms this here.

Unfortunately, the ellipsoid method has practical limitations. But, as the authors state:

The main point is that the ellipsoid method proves the polynomial solvability of a large number of different combinatorial optimization problems at once, and hereby points out directions for the search for practically feasible polynomial algorithms.

We usually think of linear programming as the problem of optimizing a linear objective over a polyhedron. This turns out to be equivalent to the feasibility problem—just find a point within the polyhedron—which is how the ellipsoid method is usually described. We assume that the polyhedron $P$ is bounded, and enclose it in an ellipsoid $E_0$. We then test whether the center $x^0$ of $E_0$ is feasible, i.e., $x^0 \in P$. If it is, we are done. Otherwise, find a valid inequality $\pi x \le\pi_0$ that $x^0$ violates. Push the inequality $\pi x \le \pi_0$ until it passes through $x^0$. Now encapsulate the half-ellipsoid $H_0=E_0 \cap \{x : \pi x \le \pi x^0\}$ with a new ellipsoid $E_1 \supset H_0$. Then, repeat this process, generating a sequence $E_0, E_1, E_2, \dots$ of smaller and smaller ellipsoids. Either we find a feasible point along the way, or the ellipsoids get so small that we declare that $P=\emptyset$ (within tolerance).
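The loop above can be sketched concretely. Below is a toy central-cut ellipsoid method in two dimensions using the standard update formulas; the box instance, the separation "oracle" that just scans the constraint list, and the iteration cap are all invented for the example, with none of the numerical care a real implementation needs.

```python
import math

# E = { x : (x - c)^T A^{-1} (x - c) <= 1 }, tracked via center c and matrix A.
# Constraints are pairs (a, b) meaning a.x <= b.

def matvec(A, v):
    return [A[0][0]*v[0] + A[0][1]*v[1], A[1][0]*v[0] + A[1][1]*v[1]]

def ellipsoid(constraints, c, A, max_iter=200):
    n = 2
    for _ in range(max_iter):
        violated = next(((a, b) for a, b in constraints
                         if a[0]*c[0] + a[1]*c[1] > b), None)
        if violated is None:
            return c                     # center is feasible
        a, _ = violated
        Aa = matvec(A, a)
        s = math.sqrt(a[0]*Aa[0] + a[1]*Aa[1])
        # shift the center away from the violated half-space ...
        c = [c[i] - Aa[i] / ((n + 1) * s) for i in range(n)]
        # ... and shrink A so the new ellipsoid covers the half-ellipsoid
        f = n * n / (n * n - 1.0)
        A = [[f * (A[i][j] - 2.0 * Aa[i] * Aa[j] / ((n + 1) * s * s))
              for j in range(n)] for i in range(n)]
    return None                          # declare infeasible (within tolerance)

# feasibility of the box 0.4 <= x1, x2 <= 0.6, written as a.x <= b
box = [((1, 0), 0.6), ((-1, 0), -0.4), ((0, 1), 0.6), ((0, -1), -0.4)]
x = ellipsoid(box, c=[0.0, 0.0], A=[[100.0, 0.0], [0.0, 100.0]])
assert x is not None and all(a[0]*x[0] + a[1]*x[1] <= b + 1e-9 for a, b in box)
```

Since every cut is valid for $P$, the feasible region stays inside each ellipsoid, and the volume shrinks geometrically, which is what bounds the iteration count.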

“Think outside the polyhedron.” -Ellipsoid method slogan. #AlgorithmSlogans #orms

— Austin Buchanan (@austinisi) June 25, 2013

The following theorem is proven for general convex bodies, and then fine-tuned for bounded rational polyhedra.

Let $\mathcal{K}$ be a class of convex bodies. There is a polynomial algorithm to solve the separation problem for the members of $\mathcal{K}$, if and only if there is a polynomial algorithm to solve the optimization problem for the members of $\mathcal{K}$.

In subsequent sections, they note consequences for matroid intersection, perfect matchings, $b$-matchings, branchings, and minimizing submodular functions. They then apply their machinery to non-polytopal convex sets, developing polynomial algorithms for problems on perfect graphs. The ellipsoid method, together with the Lovász theta function, provides polytime algorithms for max-weight clique, max-weight independent set, min-weight coloring, and min-weight clique cover in perfect graphs.

Finally, they use the equivalence of separation and optimization to prove some negative results—that some problems are NP-hard. When I first read this paper several months ago, I was extremely confused when I read that

…we note that the vertex-packing problem of a graph is in a sense equivalent to the fractional chromatic number problem, and comment on the phenomenon that this latter problem is an example of a problem in NP which is NP-hard but (as for now) not known to be NP-complete.

Kaveh Ghasemloo, a Computer Science Ph.D. candidate supervised by the famous Stephen Cook, clears things up here.

Next up: C. Papadimitriou and M. Yannakakis, The Complexity of Facets (and Some Facets of Complexity), Journal of Computer and System Sciences 28 (1984), 244.