Pentti Kanerva
Visiting Scholar, Redwood Center for Theoretical Neuroscience
E-mail: pkanerva @ csli.stanford . edu . . . hopefully this will discourage spam
SELECTED PUBLICATIONS
http://www.rni.org/kanerva/pubs.html
RESEARCH STATEMENT
The organization of the brain in large circuits of neurons is compelling. To a computer engineer it means but one thing: the circuits are there to accomplish computation. My research is aimed at understanding the nature of this computation and eventually finding out how our brains make us what we are.
Present-day computers are a pale model of the brain's computing. The brain's sensory input and circuits are far more complex than a computer's, its architecture is not specified in minute detail as a computer's is, and its components are unreliable. The most remarkable differences, however, are in the behaviors that brains and computers produce. These conspicuous differences reflect fundamental differences in the internal form of information and in the operations on it, and that is what we need to understand.
Brains are a product of evolution and should be studied in that context. They help individuals and species to survive and prosper in the world, meaning that they produce beneficial action. They do it by predicting events in the world, including consequences of their own actions. The brain's computing is designed for interaction.
In their own way brains learn to model their interaction with the world. They convert and integrate sensory signals and motor commands into a common internal form, a kind of "universal code," and employ learning mechanisms that are shared by different sensory modalities. My research is concerned with this encoding and integration of information into a predictive model of the world: What are the neural algorithms of memory and learning, and how do they capture statistical and logical regularities in the signals available to the brain? How do brains find meaning in what they sense?
I use random distributed representation in spaces with thousands of dimensions (i.e., population coding and computing with large, seemingly random patterns) to model the brain's code and computing. The reasons are several: the brain's circuits are large, with no individual neuron critical to their operation (neurons can die); even the simplest mental events involve the activity of thousands of neurons; distributed representation is lifelike and tolerant of component failure; and randomizing leads to general algorithms that do not depend on a precise architecture. The simplest models of neural circuits are based on high-dimensional vectors, that is, on points of a high-dimensional space, or large patterns, with the dimensionality in the thousands.
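A minimal sketch can make this concrete. The following Python snippet is my own illustration, not part of the statement; it assumes bipolar (+1/-1) patterns of dimension 10,000 and uses NumPy. It shows the property such models rest on: two independently drawn random patterns in a space of thousands of dimensions are almost certainly nearly orthogonal, which gives each pattern a distinct, noise-resistant identity.

    import numpy as np

    rng = np.random.default_rng(0)
    D = 10_000  # dimensionality in the thousands, as in the text

    # Two independently drawn random bipolar (+1/-1) patterns.
    a = rng.choice([-1, 1], size=D)
    b = rng.choice([-1, 1], size=D)

    # Normalized correlation: 1 for identical patterns, ~0 for unrelated ones.
    def sim(x, y):
        return float(x @ y) / D

    print(sim(a, a))  # 1.0
    print(sim(a, b))  # near 0; standard deviation is 1/sqrt(D), about 0.01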
Although not immediately obvious, spaces with thousands of dimensions have rich and subtle mathematical properties on which to base computation. For example, high-dimensional representation makes a system tolerant of "errors": it makes it possible for us to recognize people and objects even when conditions vary. For another example, the human mind works by analogy. The mapping of points in a high-dimensional space could be its underlying mechanism, with the mapping functions themselves represented by points of the same space. Although dimensionality in the range of 10-100 may well be a curse (the well-known "curse of dimensionality"), in the thousands it is truly a blessing.
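Both examples admit a hedged sketch (again my own, under the same assumptions as above; componentwise multiplication serves as the mapping operation, one common choice in models of this kind and not necessarily the one intended here). Even after a quarter of a pattern's components are flipped, the corrupted pattern remains far closer to the original than to any unrelated pattern, and a mapping represented by a vector of the same space can be applied and inverted exactly.

    import numpy as np

    rng = np.random.default_rng(1)
    D = 10_000

    def rand_vec():
        return rng.choice([-1, 1], size=D)

    def sim(x, y):
        return float(x @ y) / D

    a = rand_vec()

    # Corrupt the pattern: flip a quarter of its components at random.
    noisy = a.copy()
    flip = rng.choice(D, size=D // 4, replace=False)
    noisy[flip] *= -1
    print(sim(a, noisy))       # about 0.5, far above the ~0 floor of unrelated patterns

    # A mapping that is itself a point of the same space: componentwise
    # multiplication by m maps a to a new pattern; multiplying again inverts it.
    m = rand_vec()
    mapped = m * a
    print(sim(m * mapped, a))  # 1.0: the mapping is exactly invertible

The point of both printouts is the same: in thousands of dimensions, related patterns stay measurably related while unrelated ones stay near zero, so recognition under noise and invertible mappings both come almost for free.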
PROFESSIONAL ACTIVITIES
Member: European Academy of Sciences; Cognitive Science Society; International Neural Network Society.
Visiting Scholar: Tampere University of Technology, 1990; CSLI, Stanford University, 1994-.
Co-chair, AAAI Symposium on Acquiring and Using Linguistic and World Knowledge for Information Access, March 2002.
Publications: one book, two book chapters, and 25 papers on distributed representation and memory.
BACKGROUND
Education
Ph.D. Philosophy, Stanford University, 1984.
M.S. Forestry, minor in mathematics and statistics, University of Helsinki, 1964.
A.A. Warren Wilson College, North Carolina, 1956.

Employment
2002-03 Senior Researcher, Redwood Neuroscience Institute.
1993-2002 Senior Researcher, Swedish Institute of Computer Science (SICS).
1985-92 Senior Scientist, Research Institute for Advanced Computer Science (RIACS), NASA Ames Research Center.
1984-85 Postdoctoral Fellow, Center for the Study of Language and Information (CSLI), Stanford University.
1979-83 Computer Systems Specialist, Center for Information Technology, Stanford University.
1967-78 Systems Programmer (1967-68, 1977-78) and Research Assistant (1968-77), Institute for Mathematical Studies in the Social Sciences (IMSSS), Stanford University.
1965-67 Chief of Computer Center, University of Tampere, Finland.
1963-65 Programmer-Analyst and Section Leader, Finnish State Computer Center.
1961-63 Statistician, part-time, Forest Research Institute of Finland.