
Thomas Dean
Brown University / Google

Learning Invariant Features Using Inertial Priors, or "Why might Google want to be in the neocortex business?"

Tuesday, November 28, 2006 at 12:00 pm
3105 Tolman Hall (Beach Room)

We address the technical challenges involved in combining key features from several theories of the visual cortex in a single computational model. The resulting model is a hierarchical Bayesian network factored into modular component networks implementing variable-order Markov models. Each component network has an associated receptive field corresponding to components in the level directly below it in the hierarchy. The variable-order Markov models account for features that are invariant to naturally occurring transformations in their inputs. These invariant features support efficient generalization and produce increasingly stable, persistent representations as we ascend the hierarchy. The receptive fields of proximate components on the same level overlap to restore selectivity that might otherwise be lost to invariance. Technical jargon aside, we believe enough is known about the primate cortex to enable engineers to build systems that approach the pattern-recognition capability of human vision. Moreover, we believe that such a capability can be implemented using the distributed computing infrastructure that Google has today.
(video)
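
To make the architecture in the abstract concrete, here is a minimal Python sketch of the kind of structure it describes: a two-level hierarchy of components, each with a receptive field over the level below, where the upper-level receptive fields overlap and a simple temporal-history quantizer stands in for the variable-order Markov models. Every name, size, and threshold here is an illustrative assumption, not the model from the talk.

import numpy as np

class Component:
    """One node in the hierarchy: it maps the recent history of its
    receptive-field input to the index of a learned prototype, so its
    output changes more slowly than its raw input (a crude stand-in
    for the variable-order Markov models described in the abstract)."""

    def __init__(self, n_patterns=8, history=3):
        self.n_patterns = n_patterns    # codebook size (assumed)
        self.history = history          # temporal window to pool over (assumed)
        self.buffer = []                # recent inputs (temporal context)
        self.patterns = []              # stored prototypes

    def step(self, x):
        self.buffer.append(np.asarray(x, dtype=float).ravel())
        self.buffer = self.buffer[-self.history:]
        h = np.concatenate(self.buffer)
        # Reuse a close-enough prototype; otherwise grow the codebook.
        for i, p in enumerate(self.patterns):
            if p.shape == h.shape and np.linalg.norm(p - h) < 0.5:
                return i
        if len(self.patterns) < self.n_patterns:
            self.patterns.append(h)
            return len(self.patterns) - 1
        dists = [np.linalg.norm(p - h) if p.shape == h.shape else np.inf
                 for p in self.patterns]
        return int(np.argmin(dists))

# Level 0: four components, each viewing a 4-pixel patch of a 16-pixel
# 1-D "image". Level 1: two components whose receptive fields over the
# level-0 outputs overlap (children 0..2 and 1..3), restoring some of
# the selectivity that invariance alone would discard.
level0 = [Component() for _ in range(4)]
rf0 = [(0, 4), (4, 8), (8, 12), (12, 16)]
level1 = [Component() for _ in range(2)]
rf1 = [(0, 3), (1, 4)]

rng = np.random.default_rng(1)
frame = rng.random(16)
for t in range(10):
    frame = np.roll(frame, 1)  # a naturally occurring transformation: translation
    out0 = [c.step(frame[a:b]) for c, (a, b) in zip(level0, rf0)]
    out1 = [c.step(out0[a:b]) for c, (a, b) in zip(level1, rf1)]
    print(t, out0, out1)

The overlap in rf1 is the point of the sketch: each level-1 component pools many distinct inputs to the same code, but because the two components share children, the pair of codes taken together can still distinguish inputs that either component alone would conflate.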


Join Email List

You can subscribe to our weekly seminar email list by sending an email to majordomo@lists.berkeley.edu with the words "subscribe redwood" in the body of the message.
(Note: the subject line is ignored, so it may be left blank or set to anything.)