Mission and Research

Sparse representation

Hierarchical representation and feedback

Natural scene statistics

Invariance

Our mental experience suggests that the brain encodes and manipulates 'objects' and their relationships, but there is no neural theory of how this is done. We recognise, for example, a cup, regardless of its location, orientation, size and other variations such as lighting and partial occlusion. How do brain networks see a cup despite these complicated variations in the image data? How is the invariant part ('cup-ness') encoded separately from the variant part (location etc)?

This is called the invariance problem. It is a 'holy grail' problem of the computer vision community, and we aim to tackle it by fortifying our learning algorithms with insights from the mathematics surrounding the concept of invariance. Invariance may also be seen in motor scenarios, cups being a class of things that we can drink from (what J. J. Gibson called an affordance).

As we ascend the cortical hierarchy from area V1, we find increasingly invariant forms of coding. It is our goal to understand these forms of coding and how they may be learned from natural data. A modest success in this direction is that 'complex cell' receptive fields (oriented, localised, contrast-sensitive neurons whose responses are invariant to spatial phase) can be learned in this way.
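
For concreteness, the phase invariance of a complex cell is often captured by the classical 'energy model': squaring and summing the responses of a quadrature pair of oriented filters gives a response that is selective for orientation and spatial frequency but insensitive to spatial phase. The sketch below illustrates this idea; the Gabor parameters are arbitrary illustrative choices, not values fitted to physiology.

    import numpy as np

    def gabor(size, freq, theta, phase, sigma):
        """Oriented Gabor filter on a size x size grid (size odd)."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)      # rotate coordinates
        envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * freq * xr + phase)

    def complex_cell_response(image, freq=0.1, theta=0.0, sigma=4.0, size=21):
        """Energy model: sum of squared responses of a quadrature pair."""
        even = gabor(size, freq, theta, 0.0, sigma)         # cosine phase
        odd = gabor(size, freq, theta, np.pi / 2, sigma)    # sine phase
        return np.sum(image * even) ** 2 + np.sum(image * odd) ** 2

    # The response to a matched grating barely changes as its phase shifts:
    size = 21
    _, x = np.mgrid[:size, :size]
    for phase in (0.0, np.pi / 3, np.pi):
        grating = np.cos(2 * np.pi * 0.1 * x + phase)
        print(phase, complex_cell_response(grating))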

Learning

Learning is arguably the central problem in theoretical neuroscience. It is possible that other problems such as the understanding of representations, network dynamics and circuit function may ultimately be best understood through understanding the learning processes that, together with the action of the genome, produce these phenomena.

To solve the problem, it will be necessary to combine the best ideas from statistical machine learning with the cleverest plasticity studies at the synaptic and network level. Utilising the intersection of these two forms of knowledge greatly constrains the search space. This is necessary since twenty years of abstract neural network theory have done little more than, for example, self-organise the correct forms of receptive fields in area V1 of cortex, using a single-layer feedforward network of 'connectionist' neurons and an ensemble of natural images.

Our efforts have focused on unsupervised learning using sparse coding principles and information theory. It is controversial whether learning in the brain is unsupervised or reinforcement-based. Although reinforcement undeniably flows from subcortical structures, the framework of reinforcement learning requires a hard-coded, or 'given', reward signal that is external to the operation of the network. This is not the case in the brain considered as a whole.
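
As an illustration of the sparse coding approach, here is a minimal sketch in the spirit of Olshausen and Field's algorithm (a generic reimplementation, not the Center's own code): sparse coefficients for image patches are inferred by iterative shrinkage, and the dictionary of basis functions is then updated by a gradient step on the reconstruction error. Random data stands in for whitened natural image patches.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder data: rows would be whitened image patches in a real run.
    X = rng.standard_normal((1000, 64))          # 1000 patches, 8x8 pixels
    D = rng.standard_normal((64, 100))           # dictionary: 100 basis functions
    D /= np.linalg.norm(D, axis=0)

    lam, eta = 0.1, 0.01                         # sparsity penalty, learning rate

    for step in range(50):
        # Sparse inference: a few ISTA (iterative shrinkage) steps.
        A = np.zeros((X.shape[0], D.shape[1]))
        L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
        for _ in range(20):
            grad = (A @ D.T - X) @ D             # gradient of 0.5*||X - A D^T||^2
            A = A - grad / L
            A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)  # soft threshold
        # Dictionary update: gradient step on the reconstruction error.
        D += eta * (X - A @ D.T).T @ A / X.shape[0]
        D /= np.linalg.norm(D, axis=0)           # renormalise basis functions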

Breaking the learning problem down, we are focusing on learning invariant forms of coding from sensory stimuli (and later, we hope, sensorimotor scenarios) and on explaining why changes in neural connection strengths depend on the relative timings of incoming and outgoing spikes.

Since the learning algorithms we develop are general multivariate data analysis algorithms, we also use them to analyse (or data-mine) neurophysiological recordings from, for example, EEG, MEG, fMRI and optical imaging techniques, helping neurophysiologists to remove noise and isolate components of brain activity relevant to sensory, perceptual and motor phenomena.
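
As one concrete example of such data-mining (a standard tool rather than necessarily our own algorithms), independent component analysis can separate multichannel recordings into components so that artifactual ones can be removed before reconstruction. A sketch on synthetic 'EEG' data, using scikit-learn's FastICA:

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)

    # Synthetic sources: a 10 Hz 'alpha' rhythm, a blink-like artifact, noise.
    sources = np.column_stack([
        np.sin(2 * np.pi * 10 * t),
        (np.abs(t % 2 - 1) < 0.05).astype(float),   # periodic blink-like pulses
        0.3 * rng.standard_normal(len(t)),
    ])
    mixing = rng.standard_normal((8, 3))            # 8 hypothetical electrodes
    X = sources @ mixing.T                          # observed channel data

    ica = FastICA(n_components=3, random_state=0)
    S = ica.fit_transform(X)                        # estimated components

    # Zero out the component most correlated with the artifact, then rebuild.
    # (Here we cheat by using the known synthetic source; in practice the
    # artifactual component is identified by inspection.)
    artifact = np.argmax([abs(np.corrcoef(S[:, k], sources[:, 1])[0, 1])
                          for k in range(3)])
    S_clean = S.copy()
    S_clean[:, artifact] = 0.0
    X_clean = ica.inverse_transform(S_clean)        # denoised channel data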

Associative memory

Exploratory data analysis

Single-cell/network/biophysical models

There are two approaches we are following at the physiological level.

The first is modeling of physiological processes. We address how the response properties of neurons, such as synaptic integration and receptive fields, arise, basing our models on experimental observations such as the biophysical properties of single neurons, connectivity in the cortex, and in vivo recordings. An important requirement of a physiological neural model, beyond replicating the data it is based on, is to make predictions of responses, e.g. to novel stimuli, that can be confirmed or rejected in physiological experiments. In this way, theory can help guide the direction of experiments towards a greater understanding of the brain.

The second approach is to bring neural network ideas from machine learning down to the membrane level, so that phenomena such as Spike-Timing Dependent Plasticity (the most striking phenomenon in synaptic learning) may be understood as information-theoretic or probabilistic optimisations. A massive amount of data has accumulated on the molecular basis of neural plasticity. The time is ripe to integrate it into a theoretical framework. If this framework is correct, we will be able to self-organise networks of spiking neurons, facilitating further studies of sensory coding, circuit dynamics, and the function of associative and sensory-motor loops.
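
Whatever its theoretical interpretation, the STDP phenomenon itself has a simple phenomenological description: the weight change depends on the spike-time difference dt = t_post - t_pre through exponentially decaying potentiation and depression windows. A sketch with generic parameter values of the right order of magnitude (illustrating the measured curve shape, not any derived optimisation):

    import numpy as np

    def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20e-3, tau_minus=20e-3):
        """Weight change for spike-time difference dt = t_post - t_pre (seconds).

        Pre-before-post (dt > 0) potentiates; post-before-pre (dt < 0) depresses.
        Amplitudes and time constants are generic illustrative values.
        """
        if dt > 0:
            return a_plus * np.exp(-dt / tau_plus)
        return -a_minus * np.exp(dt / tau_minus)

    def update_weight(w, pre_times, post_times, w_min=0.0, w_max=1.0):
        """Apply pairwise STDP to one synapse given two spike trains."""
        for t_pre in pre_times:
            for t_post in post_times:
                w += stdp_dw(t_post - t_pre)
        return np.clip(w, w_min, w_max)

    print(update_weight(0.5, pre_times=[0.010, 0.050], post_times=[0.015, 0.045]))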

At the Redwood Center we apply theoretical ideas at a range of levels of physiological modeling, from single cell models addressing properties of dendritic summation of synaptic input to large network models looking at responses in the primary visual cortex (V1).
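
As a minimal, concrete example at the single-cell end of that range (a generic leaky integrate-and-fire neuron, far simpler than the dendritic models referred to above; all parameter values are standard textbook choices rather than fitted quantities):

    import numpy as np

    def lif_spikes(current, dt=1e-4, tau=20e-3, R=1e8, v_rest=-70e-3,
                   v_thresh=-50e-3, v_reset=-70e-3):
        """Leaky integrate-and-fire: dV/dt = (v_rest - V + R*I) / tau."""
        v = v_rest
        spike_times = []
        for i, I in enumerate(current):
            v += dt * (v_rest - v + R * I) / tau      # Euler integration step
            if v >= v_thresh:                         # threshold crossing
                spike_times.append(i * dt)
                v = v_reset                           # reset after the spike
        return spike_times

    # A 300 pA step current drives the cell to fire regularly.
    I = np.full(5000, 3e-10)                          # 0.5 s at dt = 0.1 ms
    print(lif_spikes(I)[:5])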

Multiscale interactions and oscillations

Brain activity can be described at various levels of complexity. The neuron level, in which single neurons constitute the fundamental computational units, is the most common level for computational theories of sensory perception. However, some theories of plasticity and learning are formulated on the level of individual synapses, and theories of cognitive functions like decision making and attention operate on the level of neuron populations.

Both the neuron level and the population level are directly accessible to electrophysiological measurements. On the neuron level, activity can be recorded using single or multiple electrodes and is best described in terms of the point process of spike timings of individual cells. The activity of many neurons gives rise to population activity, which can be measured in the form of local field potentials or activity in the electrocorticogram (ECoG) or electroencephalogram (EEG). This population activity is a continuous signal extended in space and time and often has oscillatory properties.

In addition to studying individual levels of neural activity, it is crucial to understand how the different levels interact: we would like to understand how the spiking activity of individual neurons gives rise to population activity, and how in turn the population activity influences the response properties of individual neurons.
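
A toy illustration of the first direction, under the crude assumption that the population signal is simply a smoothed sum of spiking: many Poisson spike trains sharing a weak 10 Hz rate modulation are pooled and low-pass filtered, and an oscillation that is invisible in any single train emerges clearly in the population signal (all numbers are arbitrary illustrative choices).

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    rng = np.random.default_rng(0)
    dt, T, n_cells = 1e-3, 2.0, 200
    t = np.arange(0, T, dt)

    # Each cell fires as a Poisson process whose rate is modulated at 10 Hz.
    rate = 5.0 * (1.0 + 0.8 * np.sin(2 * np.pi * 10 * t))   # spikes/s per cell
    spikes = rng.random((n_cells, len(t))) < rate * dt      # binary spike raster

    # Crude population signal: summed spiking, smoothed with a Gaussian kernel.
    population = gaussian_filter1d(spikes.sum(axis=0).astype(float),
                                   sigma=5.0)               # sigma in time bins
    print(population[:10])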

There is an intriguing parallel between multiple nested levels of brain activity and the multi-scale structure of sensory data. It is conceivable that different scales of structure in sensory data are processed not only at different levels of the cortical hierarchy but also at different levels of brain activity as described above.

Active perception and sensorimotor loops

Perception is an active process. During natural vision, for example, our eyes are constantly moving even while we are fixating an object. Additionally, active brain processes such as attention always influence the processing of sensory information. Action and perception operate in a loop, called the sensorimotor loop.

Some changes of the sensory input are caused by changes in the outside world, while other changes are due to our own actions, and the brain has to be able to distinguish between these two possibilities. We have seen that the brain is able to extract invariances from sensory data that correspond to objects in the world; in a theory of active perception, these are invariances in sensorimotor space rather than in purely sensory space. We are interested in how these so-called sensorimotor contingencies are learned and how they are used during active perception.
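
One common way to formalise the distinction between self-caused and world-caused sensory change (a generic sketch, not a model developed at the Center) is a forward model: the system learns to predict the sensory consequences of its own motor commands, and the unpredicted residual is attributed to the outside world. Here the world is assumed, purely for illustration, to be linear.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical linear world: action a shifts the sensory state by W_true @ a.
    W_true = rng.standard_normal((4, 2))
    W_est = np.zeros((4, 2))                 # learned forward model
    eta = 0.1

    s = rng.standard_normal(4)
    for step in range(500):
        a = rng.standard_normal(2)           # random exploratory action
        external = 0.1 * rng.standard_normal(4)      # world-driven change
        s_next = s + W_true @ a + external
        # Predict the self-caused change and learn from the prediction error.
        predicted = s + W_est @ a
        error = s_next - predicted           # residual ~ external change
        W_est += eta * np.outer(error, a)    # delta-rule update of the model
        s = s_next

    # After learning, the residual mostly isolates externally caused change.
    print(np.linalg.norm(W_true - W_est))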