Mission and Research
Sparse representation
Hierarchical representation and feedback
Natural scene statistics
Invariance
Learning
Learning is arguably the central problem in theoretical neuroscience. It is possible that other problems, such as the understanding of representations, network dynamics and circuit function, may ultimately be best understood through the learning processes that, together with the action of the genome, produce these phenomena.
To solve the problem, it will be necessary to combine the best ideas from statistical machine learning with the cleverest plasticity studies at the synaptic and network levels. Working at the intersection of these two forms of knowledge greatly constrains the search space. This is necessary because twenty years of abstract neural network theory have achieved little more than, for example, self-organising the correct forms of receptive fields in area V1 of cortex, using a single-layer feedforward network of 'connectionist' neurons and an ensemble of natural images.
Our efforts have focused on unsupervised learning using sparse coding principles and information theory. It is controversial whether learning in the brain is unsupervised or reinforcement-based. Although reinforcement undeniably flows from subcortical structures, the framework of reinforcement learning requires a hard-coded, or 'given', reward signal that is external to the operation of the network. This is not the case in the brain considered as a whole.
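As a concrete, deliberately minimal sketch of what such unsupervised learning looks like, the following fragment learns a sparse-coding dictionary in the spirit of Olshausen and Field's single-layer model: trained on whitened natural image patches, its basis functions come to resemble V1 receptive fields. This is an illustration of the general idea, not our actual code; the random `patches` array stands in for a real whitened natural-image ensemble, and all parameter values are illustrative rather than tuned.

```python
import numpy as np

rng = np.random.default_rng(0)

def infer_sparse_codes(X, Phi, lam=0.1, steps=50):
    """Infer coefficients A minimising ||X - Phi A||^2 + lam*|A|_1
    by iterative shrinkage-thresholding (ISTA)."""
    A = np.zeros((Phi.shape[1], X.shape[1]))
    L = np.linalg.norm(Phi, 2) ** 2          # step size from the spectral norm
    for _ in range(steps):
        A -= (Phi.T @ (Phi @ A - X)) / L                         # gradient step
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)    # soft threshold
    return A

def train_dictionary(patches, n_basis=64, epochs=200, eta=0.5):
    """Alternate sparse inference with a Hebbian-like dictionary update."""
    Phi = rng.standard_normal((patches.shape[0], n_basis))
    Phi /= np.linalg.norm(Phi, axis=0)
    for _ in range(epochs):
        X = patches[:, rng.choice(patches.shape[1], 100)]   # minibatch of patches
        A = infer_sparse_codes(X, Phi)
        Phi += eta * (X - Phi @ A) @ A.T / X.shape[1]       # residual-driven update
        Phi /= np.linalg.norm(Phi, axis=0)                  # keep unit-norm basis
    return Phi

# Stand-in for whitened natural image patches: pixels x patches.
patches = rng.standard_normal((64, 5000))
Phi = train_dictionary(patches)
```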
Breaking the learning problem down, we are focusing on the attempt to learn invariant forms of coding from sensory stimuli (and later, we hope, sensorimotor scenarios) and on an attempt to explain why changes in neural connection strengths depend on the relative timings of incoming and outgoing spikes.
Since the learning algorithms we develop are general multivariate data analysis algorithms, we also use them to analyse (or data-mine) neurophysiological recordings obtained with techniques such as EEG, MEG, fMRI and optical imaging, helping neurophysiologists to remove noise and to isolate the components of brain activity relevant to sensory, perceptual and motor phenomena.
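A decomposition of the kind alluded to here can be sketched with off-the-shelf ICA. In the toy example below, the synthetic 'recording', the mixing matrix, and the choice of which component counts as the artifact are all assumptions made for illustration, not real data or our actual analysis pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2000)

# Toy "recording": two neural-like rhythms plus a slow artifact,
# mixed linearly onto four channels.
sources = np.c_[np.sin(9 * t), np.sin(23 * t + 1.0), np.sign(np.sin(0.5 * t))]
mixing = rng.standard_normal((4, 3))
recording = sources @ mixing.T               # shape: (samples, channels)

ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(recording)    # (samples, components)

# Suppose inspection identifies component 2 as the artifact: zero it
# out and project back to channel space to obtain a cleaned recording.
components[:, 2] = 0.0
cleaned = ica.inverse_transform(components)
```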
Associative memory
Exploratory data analysis
Single-cell/network/biophysical models
There are two approaches we are following at the physiological level.
The first is the modeling of physiological processes. We address how the response properties of neurons, such as synaptic integration and receptive fields, arise, on the basis of experimental observations such as the biophysical properties of single neurons, connectivity in the cortex, and in vivo recordings. An important aspect of a physiological neural model is, besides replicating the data it is based on, to make predictions about responses, e.g. to novel stimuli, that can be confirmed or refuted in physiological experiments. In this way, theory can help guide the direction of experiments and so lead to a greater understanding of the brain.
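To make the single-cell level concrete, here is a minimal leaky integrate-and-fire sketch of how a neuron integrates input current into spikes. The parameter values are textbook-style illustrations, not values fitted to recordings, and a genuinely biophysical model would add conductances and dendritic structure.

```python
import numpy as np

def simulate_lif(I, dt=1e-4, tau=0.02, v_rest=-0.070,
                 v_thresh=-0.050, v_reset=-0.070, R=1e8):
    """Integrate dv/dt = (-(v - v_rest) + R*I) / tau; emit a spike
    and reset the membrane whenever v crosses threshold."""
    v = v_rest
    spikes, trace = [], []
    for step, i_t in enumerate(I):
        v += dt * (-(v - v_rest) + R * i_t) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# Constant suprathreshold input current -> regular spiking.
I = np.full(10000, 3e-10)      # 0.3 nA for 1 s at 0.1 ms resolution
trace, spikes = simulate_lif(I)
print(f"{len(spikes)} spikes in 1 s")
```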
The second approach is to bring 'neural network' ideas from machine learning down to the membrane level, so that phenomena such as Spike-Timing Dependent Plasticity (the most striking phenomenon in synaptic learning) may be understood as information-theoretic or probabilistic optimisations. A massive amount of data has accumulated on the molecular basis of neural plasticity, and the time is ripe to integrate it into a theoretical framework. If this framework is correct, we will be able to self-organise networks of spiking neurons, facilitating further studies of sensory coding, circuit dynamics, and the function of associative and sensorimotor loops.
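The canonical pair-based description of Spike-Timing Dependent Plasticity can be written in a few lines; the sketch below uses the standard exponential windows with typical, illustrative amplitudes and time constants rather than measured values.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=0.020, tau_minus=0.020):
    """Weight change for a single pre/post spike pair.
    delta_t = t_post - t_pre, in seconds."""
    if delta_t > 0:    # pre before post: potentiation
        return a_plus * np.exp(-delta_t / tau_plus)
    else:              # post before pre: depression
        return -a_minus * np.exp(delta_t / tau_minus)

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate all-pairs STDP updates and clip the weight."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            w += stdp_dw(t_post - t_pre)
    return np.clip(w, w_min, w_max)

# A presynaptic spike leading the postsynaptic spike by 5 ms
# strengthens the synapse.
print(apply_stdp(0.5, pre_spikes=[0.100], post_spikes=[0.105]))
```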
At the Redwood Center we apply theoretical ideas at a range of levels of physiological modeling, from single cell models addressing properties of dendritic summation of synaptic input to large network models looking at responses in the primary visual cortex (V1).
Multiscale interactions and oscillations
Brain activity can be described at various levels of complexity. The neuron level, in which single neurons constitute the fundamental computational units, is the most common level for computational theories of sensory perception. However, some theories of plasticity and learning are formulated on the level of individual synapses, and theories of cognitive functions like decision making and attention operate on the level of neuron populations.
Both the neuron level and the population level are directly accessible to electro-physiological measurements. On the neuron level, activity can be recorded using single or multiple electrodes and is best described in terms of the point process of spike timings of individual cells. The activity of many neurons gives rise to population activity which can be measured in the form of local field potentials or activity in the Electro-Corticogram (ECoG) or Electro-Encephalogram (EEG). This population activity is a continuous signal extended in space and time and often has oscillatory properties.
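The relation between the two levels can be illustrated in a few lines: pooled spikes of many irregularly firing model cells are smoothed into a continuous population signal whose power spectrum reveals a shared oscillation. Everything here (the firing rates, the smoothing kernel, the 10 Hz modulation) is an illustrative assumption, not a model of any particular recording.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T, n_cells = 1e-3, 10.0, 200
t = np.arange(0, T, dt)

# Inhomogeneous Poisson spiking with a shared 10 Hz rate modulation.
rate = 5.0 * (1.0 + 0.8 * np.sin(2 * np.pi * 10 * t))   # spikes/s per cell
spikes = rng.random((n_cells, t.size)) < rate * dt      # binary spike raster

# Population activity: summed spike counts, smoothed with a Gaussian
# kernel as a crude stand-in for a local field potential.
pop = spikes.sum(axis=0).astype(float)
kernel = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2)
lfp = np.convolve(pop, kernel / kernel.sum(), mode="same")

# The spectrum of the population signal peaks near 10 Hz, even though
# each individual cell fires sparsely and irregularly.
power = np.abs(np.fft.rfft(lfp - lfp.mean())) ** 2
freqs = np.fft.rfftfreq(lfp.size, dt)
print(f"peak at {freqs[power.argmax()]:.1f} Hz")
```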
In addition to studying the individual levels of neural activity, it is crucial to understand how the different levels interact: we would like to understand how the spiking activity of individual neurons gives rise to population activity, and how, in turn, the population activity influences the response properties of individual neurons.
There is an intriguing parallel between these nested levels of brain activity and the multi-scale structure of sensory data. It is conceivable that different scales of structure in sensory data are processed not only at different levels of the cortical hierarchy but also at the different levels of brain activity described above.
Active perception and sensorimotor loops
Perception is an active process. During natural vision, for example, our eyes are constantly moving, even while we are fixating an object. In addition, active brain processes such as attention always influence the processing of sensory information. Action and perception thus operate in a loop, called the sensorimotor loop.
Some changes in the sensory input are caused by changes in the outside world, while others are due to our own actions, and the brain has to be able to distinguish between these two possibilities. We have seen that the brain is able to extract invariances from sensory data that correspond to objects in the world; in a theory of active perception, these are invariances in sensorimotor space rather than in pure sensory space. We are interested in how these so-called sensorimotor contingencies are learned and how they are used during active perception.