We consider the problem of inference in a
graphical model with binary variables. While
in theory it is arguably preferable to compute marginal probabilities, in practice researchers often use MAP inference due to
the availability of efficient discrete optimization algorithms. We bridge the gap between
the two approaches by introducing the Discrete Marginals technique, in which approximate marginals are obtained by minimizing
an objective function with unary and pairwise terms over a discretized domain (sketched below). This
allows the use of techniques originally developed for MAP-MRF inference and learning.
We explore two ways to set up the objective
function: by discretizing the Bethe free energy and by learning it from training data.
Experimental results show that, for certain
types of graphs, a learned function can outperform the Bethe approximation. We also
establish a link between the Bethe free energy and submodular functions.
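
A minimal sketch of the setup, in assumed notation not taken verbatim from the paper ($\mu_i$ stands for the approximate marginal $p(x_i{=}1)$, $K$ for the discretization level, $d_i$ for the degree of node $i$, and $\theta$ for the model potentials): the Discrete Marginals objective has the form
\[
\min_{\mu \in \mathcal{D}^{|V|}} \; \sum_{i \in V} f_i(\mu_i) \;+ \sum_{(i,j) \in E} f_{ij}(\mu_i, \mu_j),
\qquad
\mathcal{D} = \Bigl\{0, \tfrac{1}{K}, \tfrac{2}{K}, \dots, 1\Bigr\},
\]
i.e. a unary-plus-pairwise energy over a label set of size $K + 1$, which is exactly the form handled by MAP-MRF solvers. In the Bethe variant, $f_i$ and $f_{ij}$ would come from the standard Bethe free energy,
\[
F_{\mathrm{Bethe}}(\mu) = \sum_{(i,j) \in E} \sum_{x_i, x_j} \mu_{ij}(x_i, x_j) \bigl[ \theta_{ij}(x_i, x_j) + \log \mu_{ij}(x_i, x_j) \bigr]
+ \sum_{i \in V} \sum_{x_i} \mu_i(x_i) \bigl[ \theta_i(x_i) + (1 - d_i) \log \mu_i(x_i) \bigr],
\]
where, for binary variables, each pairwise marginal $\mu_{ij}$ can be minimized out in closed form subject to consistency with $\mu_i$ and $\mu_j$, leaving a function of the node marginals alone that is then restricted to $\mathcal{D}$; in the learned variant, $f_i$ and $f_{ij}$ are instead fit to training data.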