Simon Osindero
Department of Computer Science, University of Toronto
A fast learning algorithm for deep belief nets
Tuesday, 11th October 2005, 4:00 pm
5101 Tolman
I will show how "complementary priors" might be used to eliminate the
explaining-away effects that make inference difficult in densely connected
belief nets that have many hidden layers. Using complementary priors, I
will derive a fast, greedy algorithm that can learn certain types of
deep, directed belief networks one layer at a time, provided the top two
layers form an undirected associative memory. The fast, greedy algorithm
can be used to initialize a slower learning procedure that fine-tunes the
weights using a contrastive version of the wake-sleep algorithm. After
fine-tuning, a network with three hidden layers forms a very good
generative model of the joint distribution of handwritten digit
images and their labels. This generative model gives better classification
performance than discriminative learning algorithms. The low-dimensional
manifolds on which the digits lie are modeled by long ravines in the
free-energy landscape of the top-level associative memory and it is easy
to explore these ravines by using the directed connections to display
what the associative memory has in mind.
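The greedy, layer-at-a-time procedure described above can be sketched as follows: each layer is trained as a restricted Boltzmann machine (RBM), and its hidden activities then serve as the "data" for the next layer up. This is an illustrative sketch only; the layer sizes, learning rate, and use of one-step contrastive divergence (CD-1) are assumptions for the example, not the exact settings from the talk, and the slower wake-sleep fine-tuning stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """A minimal restricted Boltzmann machine with binary units."""
    def __init__(self, n_visible, n_hidden):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible biases
        self.b_h = np.zeros(n_hidden)    # hidden biases

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0, lr=0.1):
        # One step of contrastive divergence (CD-1).
        # Positive phase: hidden probabilities given the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one reconstruction of the visibles.
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += lr * (v0 - v1).mean(axis=0)
        self.b_h += lr * (h0 - h1).mean(axis=0)

def train_stack(data, layer_sizes, epochs=5):
    """Greedily train one RBM per layer; each trained layer's
    hidden activities become the next layer's training data."""
    rbms, x = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(x.shape[1], n_hidden)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden_probs(x)  # propagate activities up
    return rbms

# Toy binary data standing in for digit images.
data = (rng.random((64, 20)) < 0.5).astype(float)
stack = train_stack(data, layer_sizes=[16, 8])
```

After this greedy pre-training, the top pair of layers would be treated as an undirected associative memory and the whole stack fine-tuned, as described in the abstract.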