Mission and Research

__NOEDITSECTION__


<blockquote>
'''Theoretical neuroscience''': a sub-discipline within neuroscience which attempts to use mathematical and physical principles to understand the nature of coding, dynamics, circuitry and plasticity in nervous systems.
</blockquote>




It is often said that "neuroscience is data-rich yet
theory-poor."  Our aim is to supply useful algorithms and
theoretical ideas to neuroscience in order
* to provide new forms of analysis for neural data (spike trains, EEG, MRI),
* to provide theories and specific models which integrate diverse observations and suggest new experimental approaches.


Specific issues and phenomena we are interested in include hierarchical organization and feedback, plasticity, mechanisms of memory, the roles of spike-timing and oscillations, sparse coding, the computation of the thalamo-cortical system and the cortical microcircuit, and the connection between systems-, cellular- and molecular-level neuroscience.


Methodologically, we use ideas from coding theory and probabilistic machine learning insofar as they relate to known neural phenomena and mechanisms in networks, cells and molecules.


Here we provide brief descriptions of some of the main research themes of the Redwood Center:
 
 
== Hierarchical organization, feedback, and generative models ==


Sensory cortex appears to be arranged in a hierarchical fashion, with information flowing from low-level areas, which are closely tied to direct sensory input, to higher-level areas, which are tied more to other cortical areas than to direct sensory input.  Neurons in lower-level areas tend to have small receptive fields (in terms of the area of the sensory epithelium they integrate over), are tuned to localized features of sensory input, and thus tend to fluctuate rapidly in their activity in response to time-varying sensory input. By contrast, neurons in higher-level areas have large receptive fields, are tuned to more global, abstract properties of the sensory world (such as object identity), and are thus more invariant with respect to fluctuations in the raw sensory input. However, the question of what computations underlie this transformation, and what exactly is being represented at various stages of the hierarchy, especially in higher-level areas, remains a mystery.


Another ubiquitous property of cortical organization is the existence of feedback connections between levels of the hierarchy. That is, if a lower area A projects to higher area B, then area B usually projects back to A. However, what role these feedback connections play in information processing, and how they contribute to perception, is not well understood.


Our goal is to formulate a theoretical framework for hierarchical organization and feedback that takes into account the known neuroanatomy and neurophysiology, and which can provide specific, testable predictions regarding its function.  One avenue we are pursuing that seems particularly promising is based on generative models - i.e., the idea that the cortex contains an internal model of the world, and that it uses this model to infer the causes of sensory input (e.g., objects and their transformations). In this framework, the role of feedback is to carry the predictions of higher levels to lower levels so as to disambiguate representations at early stages of sensory processing.  Perception thus depends on information circulating through cortico-cortical feedback loops in order to arrive at a mutually consistent explanation of sensory input.
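
To make the idea concrete, the following is a minimal sketch, in Python, of inference in a linear generative model where feedback carries predictions and the feedforward signal carries prediction errors, in the spirit of predictive coding (Rao and Ballard, 1999). The dimensions, learning rate, and variable names are illustrative assumptions, not a specific Redwood Center model.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a 16-pixel "image" explained by 8 hidden causes.
n_input, n_hidden = 16, 8
W = rng.normal(0, 0.1, (n_input, n_hidden))   # generative weights (assumed fixed here)

x = rng.normal(0, 1, n_input)                 # sensory input
r = np.zeros(n_hidden)                        # higher-level representation (the "causes")

lr = 0.1
for _ in range(200):
    prediction = W @ r          # feedback: higher level predicts lower-level activity
    error = x - prediction      # feedforward: lower level computes prediction error
    r += lr * (W.T @ error)     # higher level adjusts to explain away the error
    # (a prior on r, e.g. sparsity, would be added here in a fuller model)

print("residual error:", np.linalg.norm(x - W @ r))
</syntaxhighlight>

Inference settles when the top-down prediction and the bottom-up input agree, a simple instance of the mutually consistent explanation described above.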


== Learning ==


Learning is arguably the central problem in theoretical neuroscience. It is possible that other problems such as the understanding of [[#Sparse representation|representations]], [[#Multiscale interactions and oscillations|network dynamics]] and [[#Hierarchical organization, feedback, and generative models|circuit function]] may ultimately be best understood through understanding the learning processes that, together with the action of the genome, produce these phenomena.

To solve the problem, it will be necessary to combine the best ideas from statistical machine learning with the cleverest plasticity studies at the synaptic and network level. Utilizing the intersection of these two forms of knowledge greatly constrains the search space. This is necessary since twenty years of abstract neural network theory have done little more than, for example, self-organize the correct forms for receptive fields in area V1 of cortex, using a single-layer feedforward network of 'connectionist' neurons and an ensemble of [[#Natural scene statistics|natural images]].

Our efforts have focused on unsupervised learning using [[#Sparse representation|sparse coding principles]] and information theory. It is controversial whether learning in the brain is unsupervised or reinforcement-based. Although reinforcement undeniably flows from subcortical structures, the framework of reinforcement learning requires a hard-coded, or 'given', reward signal that is external to the operation of the network. This is not the case in the brain considered as a whole.
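
As an illustration of this line of work, here is a minimal sparse-coding sketch in the style of Olshausen and Field: inference finds a sparse set of coefficients that explains each input, and a Hebbian-style update adapts the basis functions. Random data stand in for whitened natural-image patches, and all sizes and step sizes are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for whitened natural-image patches (real experiments would use
# patches sampled from photographs of natural scenes).
n_pixels, n_basis, n_patches = 64, 64, 500   # 8x8 patches, illustrative sizes
X = rng.normal(0, 1, (n_pixels, n_patches))

Phi = rng.normal(0, 1, (n_pixels, n_basis))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm basis functions

lam, lr_a, lr_phi = 0.1, 0.05, 0.01

for epoch in range(20):
    # Inference: gradient steps on the sparse-coding energy
    # E = ||x - Phi a||^2 + lam * |a|_1, with soft thresholding for the L1 term
    A = np.zeros((n_basis, n_patches))
    for _ in range(50):
        A = A + lr_a * (Phi.T @ (X - Phi @ A))
        A = np.sign(A) * np.maximum(np.abs(A) - lr_a * lam, 0.0)
    # Learning: Hebbian-style update of the basis on the residual
    Phi += lr_phi * (X - Phi @ A) @ A.T / n_patches
    Phi /= np.linalg.norm(Phi, axis=0)       # renormalize to prevent collapse

# On real natural images, the columns of Phi become localized, oriented,
# bandpass filters resembling V1 simple-cell receptive fields.
print("mean reconstruction error:", np.mean((X - Phi @ A) ** 2))
</syntaxhighlight>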


Breaking the learning problem down, we are focusing on the attempt to learn [[#Invariance|invariant]] forms of coding from [[#Natural scene statistics|sensory stimuli]] (and later, we hope, [[#Active perception and sensorimotor loops|sensorimotor scenarios]]) and on an attempt to explain why changes in neural connection strengths depend on the relative timings of incoming and outgoing [[#Single-cell, network, and biophysical models|spikes]].


Since the learning algorithms we develop are general multivariate data analysis algorithms, we also use them to analyze (or [[#Exploratory data analysis|data-mine]]) neurophysiological recordings from, for example, EEG, MEG, fMRI and optical imaging techniques, helping neurophysiologists to remove noise and isolate components of brain activity relevant to sensory, perceptual and motor phenomena.
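
A minimal sketch of this kind of data-mining uses independent component analysis (ICA) to unmix synthetic "sensor" recordings into underlying components; the signals, the 3x3 mixing matrix, and the use of scikit-learn's FastICA are illustrative assumptions rather than our actual analysis pipeline.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)

# Synthetic stand-ins: a "neural" oscillation, a slow artifact, and noise,
# linearly mixed as they would be at EEG/MEG sensors.
sources = np.c_[np.sin(2 * np.pi * 10 * t),            # 10 Hz alpha-like rhythm
                np.sign(np.sin(2 * np.pi * 0.5 * t)),  # slow square-wave artifact
                rng.normal(0, 1, t.size)]              # sensor noise
mixing = rng.normal(0, 1, (3, 3))
recordings = sources @ mixing.T                        # simulated sensor data

# Unmix with ICA; the recovered components can then be inspected, and
# artifact components removed before further analysis.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(recordings)
print("recovered component matrix shape:", components.shape)
</syntaxhighlight>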


== Invariance ==


Our mental experience suggests that the brain encodes and manipulates 'objects' and their relationships, but there is no neural theory of how this is done.  We recognize, for example, a cup regardless of its location, orientation, size, or other variations such as lighting and partial occlusion. How do brain networks recognize a cup despite these complicated variations in the image data? How is the invariant part ('cup-ness') encoded separately from the variant part?

This is called the invariance problem. It is a 'holy grail' problem of the computer vision community, and we aim to tackle it by fortifying our [[#Learning|learning algorithms]] with insights from the mathematics surrounding the concept of invariance. Invariance may also be seen in [[#Active perception and sensorimotor loops|motor]] scenarios, cups being a class of things that we can drink from (what J. J. Gibson called an affordance).

As we ascend the [[#Hierarchical organization, feedback, and generative models|cortical hierarchy]] from area V1, we find increasingly invariant forms of coding. It is our goal to understand these forms of coding and how they may be learned from [[#Natural scene statistics|natural data]]. A modest success in this direction is that 'complex cell' receptive fields (oriented and localized contrast-sensitive neurons that are invariant to spatial phase) can be learned in this way.
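
The classical 'energy model' illustrates this form of invariance: summing the squared outputs of a quadrature pair of Gabor filters gives a response that is roughly constant across spatial phase, while each filter alone is phase-sensitive. The sketch below is a minimal illustration with assumed filter sizes and parameters, not a model fit to data.

<syntaxhighlight lang="python">
import numpy as np

def gabor(size, wavelength, theta, phase):
    """Oriented Gabor filter; the classical model of a V1 simple cell."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * (size / 5) ** 2))
    return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

size, wavelength, theta = 21, 8.0, 0.0
even = gabor(size, wavelength, theta, 0.0)          # quadrature pair:
odd = gabor(size, wavelength, theta, np.pi / 2)     # 90 degrees apart in phase

# Probe with grating-like stimuli of the preferred orientation but varying phase:
# the simple-cell response swings with phase, the energy response stays steady.
for stim_phase in np.linspace(0, np.pi, 5):
    stim = gabor(size, wavelength, theta, stim_phase)
    simple = float(np.sum(even * stim))                              # phase-sensitive
    energy = np.sum(even * stim) ** 2 + np.sum(odd * stim) ** 2      # phase-invariant
    print(f"phase {stim_phase:.2f}: simple {simple:+8.2f}, complex {energy:8.2f}")
</syntaxhighlight>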


== Associative memory ==


W. James, F. Hayek and D. O. Hebb proposed theories of memory and mental association involving distributed neural representations and synaptic plasticity. Neuronal associative memories are abstract neural networks that implement the basic mechanisms of learning and association as postulated in Hebb's theory. We believe that principles of associative memories are important in tackling central problems in theoretical neuroscience:
* Invariant sensory processing <br /> A recent model of invariant sensory processing demonstrates that a memory-based strategy is applicable to real images (map-seeking circuits; Arathorn, 2002). We are interested in studying the mathematical basis of memory-based models of invariant sensory processing.
* Formation of compositional memories <br /> In models of cognition it is crucial that concepts can be compositional and multi-faceted. Holographic representations (Plate, 1994, 2001), spatter coding (Kanerva, 1994) and vector-symbolic architectures (Gayler, 1998) are methods for forming compositional distributed representations. All of these methods rely critically on associative memories, but so far they do not scale up to real-world problems.  We are interested in designing models with efficient sparse associative memories that scale up to real-world domains.
* Communication in cortico-cortical networks <br /> The brain is organized into functionally specialized regions connected by cortico-cortical connections with small-world network properties (Strogatz). We are interested in extending the theory of associative memory to count the number of possible functional networks in anatomically constrained networks. Potentially, this analysis will reveal important conditions for cortico-cortical information processing, in particular the properties of distributed representations in local regions that yield high numbers of possible functional networks and thus provide high flexibility in the formation of functional networks.
* Hierarchical memory models <br /> Ultimately, we are interested in designing memory systems that combine the results of both previous studies, that is, compositional memory representations that can be communicated in a structured neuronal network. In such networks, hierarchies will be defined by the degree of convergence from different modalities (regions with different functional specializations).
* Unsupervised/supervised learning of memories (role of neuromodulatory systems)<br /> In standard models of associative memory, the selection of memories to be stored is done externally, in a supervised fashion.  We are interested in models that screen continuous input and decide internally what memories to store.
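
A minimal sketch of the basic mechanisms named above is a Hopfield-style network: Hebbian learning stores binary patterns in the weights, and recurrent dynamics recall a stored memory from a corrupted cue. Network size, number of memories, and corruption level are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
n_units, n_memories = 100, 5

# Random binary (+/-1) patterns stand in for distributed memories.
patterns = rng.choice([-1, 1], size=(n_memories, n_units))

# Hebbian learning: strengthen connections between co-active units.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

# Cue with a corrupted version of the first memory (20% of bits flipped).
state = patterns[0].copy()
flip = rng.choice(n_units, size=20, replace=False)
state[flip] *= -1

# Asynchronous recall dynamics: each unit aligns with its summed input.
for _ in range(5):
    for i in rng.permutation(n_units):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("overlap with stored memory:", (state @ patterns[0]) / n_units)  # ~1.0
</syntaxhighlight>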


== Exploratory data analysis ==
Brain imaging techniques, such as functional MRI and EEG, open macroscopic windows on processes in the working brain. These methods yield high-dimensional data sets that are organized in space (brain coordinates) and time. Current analysis methods extract interpretable images from the data, but they are far from harvesting the full richness of the measured data. We are interested in developing exploratory analysis methods to assess the statistical properties of the joint data set combining imaging data with behavior/stimulus data.
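
One simple way to probe such joint statistical structure is canonical correlation analysis (CCA), which finds paired projections of two data sets that are maximally correlated. The sketch below applies scikit-learn's CCA to synthetic stand-ins for imaging and behavior/stimulus data; all dimensions and noise levels are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
n_trials = 200

# Hypothetical joint data set: imaging features (e.g. voxel or channel
# activity) and behavior/stimulus variables recorded on the same trials,
# sharing a two-dimensional latent structure.
latent = rng.normal(0, 1, (n_trials, 2))
imaging = latent @ rng.normal(0, 1, (2, 50)) + 0.5 * rng.normal(0, 1, (n_trials, 50))
behavior = latent @ rng.normal(0, 1, (2, 4)) + 0.5 * rng.normal(0, 1, (n_trials, 4))

# CCA extracts the shared structure from the two modalities jointly.
cca = CCA(n_components=2)
img_proj, beh_proj = cca.fit_transform(imaging, behavior)
for k in range(2):
    r = np.corrcoef(img_proj[:, k], beh_proj[:, k])[0, 1]
    print(f"canonical correlation {k}: {r:.2f}")
</syntaxhighlight>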


 
== Single-cell, network, and biophysical models ==


There are two approaches we are following at the physiological level.

The first is modeling of physiological processes. We address how the response properties of neurons, such as synaptic integration and receptive fields, arise from experimental observations such as the biophysical properties of single neurons, connectivity in the cortex, and ''in vivo'' recordings. An important aspect of a physiological neural model, besides replicating the data it is based on, is to make predictions of responses, e.g. to novel stimuli, that can be verified or rejected in physiological experiments. In this way, theory can help guide the direction of experiments to gain greater understanding of the brain.


The second approach is to bring neural network ideas from machine learning down to the membrane level, so that phenomena such as Spike-Timing Dependent Plasticity (the most striking phenomenon in synaptic learning) may be understood as information theoretic or probabilistic optimizations. A massive amount of data has accumulated on the molecular basis of neural plasticity. The time is ripe to integrate it into a theoretical framework. If this framework is correct, we will be able to self-organize networks of spiking neurons, facilitating further studies of sensory coding, circuit dynamics, and the function of associative and sensory-motor loops.
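
For reference, the standard phenomenological description of Spike-Timing Dependent Plasticity is an exponential window in the pre/post spike-time difference: pre-before-post potentiates a synapse, post-before-pre depresses it. The sketch below implements this textbook form with illustrative amplitudes and time constants; it describes the phenomenon itself, not the optimization framework we are seeking.

<syntaxhighlight lang="python">
import numpy as np

# STDP window parameters (illustrative values only).
A_plus, A_minus = 0.01, 0.012      # learning-rate amplitudes
tau_plus, tau_minus = 20.0, 20.0   # time constants in ms

def stdp_dw(dt):
    """Weight change for a single pre/post spike pair.

    dt = t_post - t_pre in milliseconds.
    """
    if dt >= 0:   # pre fires before post: potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post fires before pre: depression
        return -A_minus * np.exp(dt / tau_minus)

for dt in [-40, -10, -1, 1, 10, 40]:
    print(f"dt = {dt:+4d} ms -> dw = {stdp_dw(dt):+.5f}")
</syntaxhighlight>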


At the Redwood Center we apply theoretical ideas at a range of levels of physiological modeling, from single cell models addressing properties of dendritic summation of synaptic input to large network models looking at responses in the primary visual cortex (V1).
== Multiscale interactions and oscillations ==


Brain activity can be described at various levels of resolution. The neuron level, at which single neurons constitute the fundamental computational units, is the most common level for theories of sensory perception. However, some theories of plasticity and learning are formulated on the level of individual synapses. Further, theories of cognitive functions like decision making and attention operate on the level of neuron populations.


Both the neuron level and the population level are directly accessible to electrophysiological measurements. The activity of single neurons can be recorded with single or multiple electrodes and is best described as a point process in time corresponding to the spikes of individual cells. The population activity of many neurons can be recorded as local field potentials or as activity in the electrocorticogram (ECoG) or the electroencephalogram (EEG). The population activity is a continuous signal extended in space and time and often has oscillatory properties.
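
A minimal sketch of the relation between the two levels: Poisson spike trains (point processes) whose rates share a common oscillation are summed and smoothed into a continuous, oscillatory population signal, loosely analogous to an LFP or EEG trace. Rates, neuron counts, and the smoothing kernel are illustrative assumptions.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
dt, duration = 0.001, 2.0                  # 1 ms steps, 2 s of activity
t = np.arange(0, duration, dt)
n_neurons = 200

# Point-process level: Poisson spikes whose rate is modulated by a common
# 10 Hz oscillation (purely illustrative numbers).
rate = 5.0 * (1.0 + 0.8 * np.sin(2 * np.pi * 10 * t))      # spikes/s per neuron
spikes = rng.random((n_neurons, t.size)) < rate * dt       # binary spike trains

# Population level: summing and smoothing many spike trains yields a
# continuous, oscillatory signal extended in time.
population = spikes.sum(axis=0).astype(float)
kernel = np.ones(25) / 25                                  # 25 ms boxcar smoothing
lfp_like = np.convolve(population, kernel, mode="same")

print("mean rate per neuron (Hz):", spikes.sum() / (n_neurons * duration))
</syntaxhighlight>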


In addition to studying each level of neural activity on its own, it is crucial to understand how the different levels interact: we would like to understand how the spiking activity of individual neurons gives rise to population activity, and how in turn the population activity influences the response properties of individual neurons.


There is an intriguing parallel between the multiple nested levels of brain activity and the multi-scale structure of sensory data. It is conceivable that different structural scales in sensory data are processed not only at different levels of the [[#Hierarchical organization, feedback, and generative models|cortical hierarchy]] but also at different levels of brain activity.


== Active perception and sensorimotor loops ==


Perception is an active process. During natural vision, for example, our eyes are constantly moving even when we fixate an object. In addition, active, internal processes in the brain, such as attention, influence the processing of sensory information. Action and perception thus affect each other tightly, forming what is called the sensorimotor loop.
 
If perception and action are coupled, the brain must learn to distinguish whether a change in sensory input reflects a change in the outside world or results from our own action. Our brains are able to extract [[#Invariance|invariances]] from sensory data that correspond to objects in the world. Under a theory of active perception, these are invariances in sensorimotor space rather than in pure sensory space. We are interested in the [[#Learning|learning]] of sensorimotor contingencies and how they are used during active perception.
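
A minimal sketch of one ingredient of this picture: a forward model, learned here by least squares, predicts the sensory consequences of one's own actions, so that a large prediction error flags a change in the outside world rather than a self-caused change. The linear dynamics and all parameters are illustrative assumptions, not a model of any specific experiment.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(6)
n_steps = 500

# Hypothetical sensorimotor data: the next sensory sample depends linearly
# on the current sample and the motor command (all quantities illustrative).
A_true, B_true = 0.9, 0.5
motor = rng.normal(0, 1, n_steps)
sensory = np.zeros(n_steps + 1)
for k in range(n_steps):
    sensory[k + 1] = A_true * sensory[k] + B_true * motor[k] + 0.05 * rng.normal()

# Learn a forward model: predict the next sensory state from state and action.
X = np.c_[sensory[:-1], motor]
coef, *_ = np.linalg.lstsq(X, sensory[1:], rcond=None)
prediction = X @ coef

# Residuals above the learned noise level would signal changes in the world
# that were NOT caused by our own action, a simple sensorimotor contingency.
print("learned (A, B):", np.round(coef, 2),
      " residual:", float(np.mean((sensory[1:] - prediction) ** 2)))
</syntaxhighlight>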
 
----
A Serbo-Croatian  translation of this page may be found [http://science.webhostinggeeks.com/misija-i-istrazivanja here] thanks to [mailto:jovanam@webhostinggeeks.com Jovana Milutinovich].
 
A Slovenian translation of this page can be found [http://melleum.com/blog/poslanstvo-in-raziskave/ here] thanks to Gasper Halipovich.


A German translation of this page may be found [http://www.autoteilexxl.de/edu/?p=3179 here] thanks to Daniela Milton.
