Abstract:
We describe the g-factor, which relates probability distributions on image
features to distributions on the images themselves. The g-factor
depends only on our choice of features and lattice
quantization
and is independent of the training image data. We illustrate the
importance of the g-factor by analyzing how the parameters of Markov Random Field
(i.e. Gibbs or log-linear) probability models of images are
learned from data by maximum likelihood estimation. In
particular, we study homogeneous MRF models which learn image
distributions in terms of clique potentials corresponding to
feature histogram statistics (cf. Minimax Entropy Learning (MEL)
by Zhu, Wu and Mumford 1997 [11]). We first use our analysis of
the g-factor to determine when the clique potentials decouple for
different features. Second, we show that clique potentials can be
computed analytically by approximating the g-factor. Third, we demonstrate a connection between this
approximation and the Generalized Iterative Scaling algorithm
(GIS), due to Darroch and Ratcliff 1972 [2], for calculating
potentials. This connection enables us to use GIS to improve our
multinomial approximation, using Bethe-Kikuchi [8] approximations
to simplify the GIS procedure. We support our analysis by
computer simulations.
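The abstract refers to the Generalized Iterative Scaling (GIS) algorithm of Darroch and Ratcliff [2] for computing clique potentials from histogram statistics. As a rough sketch only (the toy image size, the 2-bin histogram feature, and the target statistics below are our own illustrative assumptions, not details from the paper), the following Python fragment applies the standard GIS update lambda_a <- lambda_a + (1/C) log(observed_a / expected_a) to a small log-linear model whose features form a histogram with constant sum C:

import itertools
import math

# Toy setting (our own illustrative assumptions, not taken from the paper):
# binary "images" with N pixels, and a 2-bin histogram feature counting how
# many pixels are 0 and how many are 1.
N = 6                                        # small enough to enumerate all 2^N images
STATES = list(itertools.product((0, 1), repeat=N))

def histogram(x):
    # Feature vector phi(x): [#zeros, #ones]. The bins always sum to N,
    # which is the constant-sum condition that GIS requires.
    ones = sum(x)
    return [N - ones, ones]

def model_expectations(lam):
    # Exact feature expectations under p(x) proportional to exp(lam . phi(x)),
    # computed by brute-force enumeration of the state space.
    weights = [math.exp(sum(l * f for l, f in zip(lam, histogram(x)))) for x in STATES]
    Z = sum(weights)
    expect = [0.0, 0.0]
    for w, x in zip(weights, STATES):
        for a, f in enumerate(histogram(x)):
            expect[a] += (w / Z) * f
    return expect

# Hypothetical "observed" histogram statistics: images with 75% of pixels on.
observed = [0.25 * N, 0.75 * N]

lam = [0.0, 0.0]
for _ in range(50):
    expect = model_expectations(lam)
    # GIS update: lam_a += (1/C) * log(observed_a / expected_a), with C = N.
    lam = [l + (1.0 / N) * math.log(o / e)
           for l, o, e in zip(lam, observed, expect)]

print("learned potentials :", lam)
print("model expectations :", model_expectations(lam))
print("target statistics  :", observed)

Here the model expectations are computed by exhaustive enumeration, which is only feasible for a toy state space; on realistic image lattices they would have to be approximated, for instance with the Bethe-Kikuchi methods cited above [8].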
References
[2] J. N. Darroch and D. Ratcliff. ``Generalized Iterative Scaling for Log-Linear Models''. The Annals of Mathematical Statistics, Vol. 43, No. 5, pp. 1470-1480, 1972.
[8] J. S. Yedidia, W. T. Freeman, and Y. Weiss. ``Generalized Belief Propagation''. In Proceedings NIPS'00, 2000.
[11] S. C. Zhu, Y. Wu, and D. Mumford. ``Minimax Entropy Principle and Its Application to Texture Modeling''. Neural Computation, Vol. 9, No. 8, Nov. 1997.