The g Factor: Relating Distributions on Features to Distributions on
Images
James M. Coughlan and Alan L. Yuille
Smith-Kettlewell Eye Research Institute
We introduce the $g$-factor, which relates probability distributions on
features to distributions on images. It arises when we seek to learn
distributions from image data, but {\it it depends only on our choice
of features and lattice quantization} and is independent of the
training image data. We show that simple and plausible
approximations of the $g$-factor can shed light on aspects of
Minimax Entropy Learning (MEL) \cite{Zhu97}, which learns probability
distributions on images in terms of Markov Random Fields with clique
potentials. Analyzing the $g$-factor allows us to determine when the
clique potentials decouple for different features. Moreover, when the
approximations of the $g$-factor are valid, the clique potentials
in MEL can be computed analytically. Finally, we describe ways to
extend these approximations by computing approximations to the
$g$-factor offline, thereby enabling rapid methods for computing the
clique potentials from new image data. Overall, we seek to clarify
how MEL relates to alternative methods of learning distributions on
images. (In this paper the features we consider are extracted from
the image by filters -- hence we use the terms ``features'' and
``filters'' almost synonymously.)
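The relation the abstract describes can be written as a brief sketch. The notation below (image $\mathbf{x}$ on a quantized lattice, feature statistic $\vec{\phi}(\mathbf{x})$, potentials $\vec{\beta}$, partition function $Z$) is our assumption for illustration, not fixed by the abstract itself:
\begin{align}
g(\vec{\phi}) &= \sum_{\mathbf{x}} \delta_{\vec{\phi}(\mathbf{x}),\,\vec{\phi}},
\qquad \text{(number of lattice images with statistic } \vec{\phi}\text{)}\\
P(\mathbf{x}) &= \frac{e^{\vec{\beta}\cdot\vec{\phi}(\mathbf{x})}}{Z}
\;\;\Longrightarrow\;\;
\hat{P}(\vec{\phi}) = \frac{g(\vec{\phi})\, e^{\vec{\beta}\cdot\vec{\phi}}}{Z}.
\end{align}
On this reading, $g(\vec{\phi})$ counts images sharing a feature value and thus depends only on the choice of features and the lattice quantization, which is the sense in which it is independent of the training data.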