The problem of obtaining the maximum a posteriori (MAP) estimate of a general
discrete Markov random field (i.e., a Markov random field defined using
a discrete set of labels) is known to be NP-hard. However, due to its
central importance in many applications, several approximation
algorithms have been proposed in the literature. In this paper, we
present an analysis of three such algorithms based on convex
relaxations: (i) LP-S: the linear programming (LP) relaxation proposed
by Schlesinger for a special case and independently by Chekuri et al.,
Koster et al., and Wainwright et al. for the general case; (ii) QP-RL:
the quadratic programming (QP) relaxation of Ravikumar and Lafferty;
and (iii) SOCP-MS: the second-order cone programming (SOCP) relaxation
first proposed by Muramatsu and Suzuki for two-label problems and later
extended by Kumar et al. to a general label set.
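To fix notation, MAP estimation for a pairwise random field can be cast
as minimizing an energy $E(x) = \sum_a \theta_a(x_a) + \sum_{(a,b)}
\theta_{ab}(x_a, x_b)$ over discrete labellings $x$. As an illustrative
sketch (the symbols $\theta$ and $x$ are ours, not a definitive statement
of the formulations analysed here), the LP-S relaxation optimizes over
the standard local polytope:
% Sketch of the LP-S (local polytope) relaxation; x_a(i) relaxes the
% indicator [x_a = i], x_ab(i,j) relaxes the product of indicators.
\begin{align*}
\min_{x}\quad & \sum_{a}\sum_{i}\theta_{a}(i)\,x_{a}(i)
  + \sum_{(a,b)}\sum_{i,j}\theta_{ab}(i,j)\,x_{ab}(i,j)\\
\text{s.t.}\quad & \sum_{i} x_{a}(i) = 1 \;\;\forall a, \qquad
  \sum_{j} x_{ab}(i,j) = x_{a}(i) \;\;\forall (a,b),\, i
  \;\;\text{(and symmetrically for $b$)},\\
 & x_{a}(i) \ge 0, \qquad x_{ab}(i,j) \ge 0.
\end{align*}
An integral optimum of this LP recovers an exact MAP labelling.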
We show that the SOCP-MS and QP-RL relaxations are equivalent.
Furthermore, we prove that, despite the flexibility in the form of the
objective function and constraints offered by QP and SOCP, the LP-S
relaxation strictly dominates (i.e., provides a better approximation
than) both QP-RL and SOCP-MS.
We generalize these results by defining a
large class of SOCP (and equivalent QP) relaxations that is dominated
by the LP-S relaxation. Based on these results, we propose novel SOCP
relaxations that define constraints using random variables which form
cycles or cliques in the graphical model representation of the random
field. Using examples, we show that the new SOCP relaxations strictly
dominate the previous approaches.
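To give a flavour of the construction (again in our sketch notation,
not a definitive statement of the proposed relaxations): writing the
integer program with a labelling vector $x$ and a matrix $X$ intended to
equal $x x^{\top}$, the non-convex constraint $X = x x^{\top}$ can be
relaxed to $X \succeq x x^{\top}$, and for any positive semidefinite
matrix $C = U U^{\top}$ this semidefinite constraint implies the
second-order cone representable constraint
\[
\| U^{\top} x \|^{2} \;=\; x^{\top} C x \;\le\; C \bullet X,
\qquad C \bullet X = \mathrm{tr}(C X).
\]
Choosing matrices $C$ whose support covers an edge, a cycle, or a clique
of the graphical model then yields the corresponding edge-, cycle-, or
clique-based SOC constraints.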