Olshausen BA, Millman KJ (2000). Learning sparse codes with a mixture-of-Gaussians prior.
Advances in Neural Information Processing Systems, 12, ed. by S.A. Solla, T.K. Leen, and K.-R. Müller, MIT Press, pp. 841-847.
We describe a method for learning an overcomplete set of basis functions
for the purpose of modeling sparse structure in images. The sparsity of
the basis function coefficients is modeled with a mixture-of-Gaussians
distribution. One Gaussian captures non-active coefficients with a small-variance
distribution centered at zero, while one or more other Gaussians capture
active coefficients with a large-variance distribution. We show that when
the prior is in such a form, there exist efficient methods for learning
the basis functions as well as the parameters of the prior. The performance
of the algorithm is demonstrated on a number of test cases and also on
natural images. The basis functions learned on natural images are similar
to those obtained with other methods, but the sparse form of the coefficient
distribution is much better described. Also, since the parameters of the
prior are adapted to the data, no assumption about sparse structure in
the images need be made a priori; rather, it is learned from the data.
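
The generative model described in the abstract can be illustrated with a short sketch. The Python snippet below is not the authors' code: it samples coefficients from a two-component mixture-of-Gaussians prior (a small-variance "inactive" Gaussian centered at zero and a large-variance "active" Gaussian) and forms an image patch as a linear combination of an overcomplete set of basis functions. The dictionary here is random rather than learned, and the dimensions, mixing weight, and variances are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of the generative model: an image patch is a linear
# combination of overcomplete basis functions whose coefficients follow
# a two-component mixture-of-Gaussians prior. All numbers below are
# illustrative assumptions, not parameters reported in the paper.
import numpy as np

rng = np.random.default_rng(0)

patch_dim = 64   # e.g. 8x8 image patches, flattened
n_basis = 128    # overcomplete: more basis functions than pixels

# Random (untrained) dictionary; in the paper the basis functions
# are learned from natural images.
A = rng.standard_normal((patch_dim, n_basis))
A /= np.linalg.norm(A, axis=0)

# Mixture-of-Gaussians prior on each coefficient:
#   "inactive" component: small variance, centered at zero
#   "active"   component: large variance, centered at zero
p_active = 0.1        # probability a coefficient is active (assumed)
sigma_inactive = 0.01
sigma_active = 1.0

def sample_coefficients(n_coeff):
    """Draw coefficients from the two-component mixture-of-Gaussians prior."""
    active = rng.random(n_coeff) < p_active
    sigma = np.where(active, sigma_active, sigma_inactive)
    return sigma * rng.standard_normal(n_coeff)

def sample_patch():
    """Generate one patch as a sparse linear combination of basis functions."""
    s = sample_coefficients(n_basis)
    return A @ s

patch = sample_patch()
print(patch.shape)  # (64,)
```

In the paper both the basis functions and the parameters of the prior are adapted to the data; the random dictionary above stands in only to keep the sketch self-contained.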