Synopsis of Research


Computational models of the brain

Behavior can be linked to computations in the brain, and studying computational models of the brain can reveal the basic types of computation possible in nerve tissue. A computational model of the brain is a mathematical description that relates state changes in the brain to computation. Thus, a computational model of the brain consists of two components: first, a description of the dynamics of brain states, such as neural activity patterns, synaptic states, etc.; and second, a representational scheme that specifies how brain states relate to behavioral entities, such as sensory inputs or memories.

Abstract neural networks are examples of computational brain models. They describe the dynamics of neural and synaptic states and relate these states to computation: neural states represent computational operands, and synaptic states correspond to computing operations. However, abstract neural networks are crude models of the actual biophysics of the brain. This shortcoming illustrates the dilemma of computational models of the brain: the dilemma between simplicity and richness. In order to rule out as little as possible beforehand, a computational brain model should reflect the experimental findings in as much detail as possible. On the other hand, a computational theory of a behavior is basically an algorithm, and the cleanest way to instantiate such a hypothesis is the simplest neural network that can do the job efficiently and is not incompatible with neurobiology (following Occam's razor).
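To make this concrete, here is a minimal sketch of an abstract neural network in the above sense (illustrative Python; network size, weights and sparseness are arbitrary assumptions): a vector of binary neural states is the computational operand, and the synaptic weight matrix implements the computing operation.

    import numpy as np

    # Abstract neural network: binary neural states (the operands) are
    # transformed via a synaptic weight matrix (the operation).
    rng = np.random.default_rng(0)
    n = 100                                # number of model neurons
    W = rng.normal(size=(n, n))            # synaptic states (illustrative random weights)
    x = (rng.random(n) < 0.1).astype(int)  # sparse binary activity pattern

    def update(W, x, theta=0.0):
        """One synchronous update: dendritic sums followed by a threshold."""
        return (W @ x > theta).astype(int)

    x_next = update(W, x)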

Models of associative memory

Obviously, no single computational brain model can escape the richness/simplicity dilemma. The way I study the associative memory function of the brain is to investigate chains of models that vary in the faithfulness of their biophysical description. The starting point of the chain is an abstract neural network model corresponding to the hypothesized function. Features reflecting neurobiology can then be added to the abstract model step by step. Thus the computational function can first be analyzed in the abstract model, while the predictions about biophysical brain properties arising from the functional hypothesis can be assessed in the more detailed models. Qualitative changes in model behavior induced by particular model features can easily be traced along the chain of models.

Neural associative memories are abstract neural networks that implement the basic mechanisms of learning and association postulated in Hebb's theory (Hebb 1949, Hayek 1954; see also James 1892). They have been proposed as computational models for local, strongly connected cortical circuits (Palm 1982, Hopfield 1982, Amit 1989). The computational function is the storage and error-tolerant recall of distributed activity patterns. The memory recall is called associative pattern completion if it completes a noisy input pattern according to a stored memory. Another recall variant possible in associative memories is pattern recognition (Palm & Sommer 1992), where inputs are simply classified as "known" or "unknown".
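As a concrete illustration, the following sketch (illustrative Python, a generic Hebbian, clipped-weight toy model rather than any specific published one) stores sparse binary patterns and performs one-step pattern completion from a degraded cue:

    import numpy as np

    rng = np.random.default_rng(1)
    n, k, m = 200, 10, 20                  # neurons, active units per pattern, patterns

    def sparse_pattern():
        x = np.zeros(n, dtype=int)
        x[rng.choice(n, k, replace=False)] = 1
        return x

    patterns = [sparse_pattern() for _ in range(m)]

    # Hebbian learning with clipped (binary) weights.
    W = np.zeros((n, n), dtype=int)
    for x in patterns:
        W |= np.outer(x, x)

    # Pattern completion: present half of a stored pattern, threshold the sums.
    cue = patterns[0].copy()
    cue[np.flatnonzero(cue)[: k // 2]] = 0   # delete half of the active units
    recall = (W @ cue >= cue.sum()).astype(int)
    print("overlap with stored pattern:", recall @ patterns[0])

Pattern recognition, by contrast, would only compare the summed input against a threshold to output a single "known"/"unknown" decision.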

A variety of abstract models of associative memory have been proposed in the literature, and any of them could serve as the starting point of a chain of computational models for memory in the brain. My choice of an abstract model of associative memory relies on the observation that nature often finds efficient solutions.

  • How to measure the efficiency of associative memories?

Information capacity, that is, the amount of information that can be stored, has become the standard measure for the efficiency of associative memories. However, the traditional capacity measures do not take into account all relevant flows of information during learning and retrieval. In particular, they neglect the information loss due to retrieval errors, as well as the information already contained in the noisy cue patterns in pattern completion tasks. For definitions of information capacity that account for all these factors, see (Sommer 1993, Palm & Sommer 1996).
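The flavor of such a definition can be conveyed with elementary information theory. The sketch below (illustrative Python; the sparseness and error rates are assumed example values, not figures from the papers) computes how many bits per neuron a retrieved binary pattern actually conveys about the stored one, discounting retrieval errors via the conditional entropy of a binary channel:

    import numpy as np

    def h2(p):
        """Binary entropy in bits."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def retrieved_information(p, miss, false_alarm):
        """Bits per neuron that a retrieved pattern conveys about the stored one.

        p           -- prior probability that a neuron is active (sparseness)
        miss        -- probability that an active neuron is retrieved as silent
        false_alarm -- probability that a silent neuron is retrieved as active
        """
        q = p * (1 - miss) + (1 - p) * false_alarm   # output activity level
        # I(X;Y) = H(Y) - H(Y|X) for this binary asymmetric channel
        return h2(q) - p * h2(miss) - (1 - p) * h2(false_alarm)

    print(retrieved_information(p=0.05, miss=0.01, false_alarm=0.001))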

  • Should memory representations be dense or sparse?

An argument for sparse memory representations in the brain follows from the analysis of synaptic learning rules in associative memory. The learning rules particularly relevant for the brain are local learning rules, that is, rules of synaptic plasticity that depend only on pre- and postsynaptic activity. Elizabeth Gardner (1988) found that local learning rules store sparse memory patterns more efficiently than nonsparse patterns, and that for sparse patterns local learning cannot be outperformed by nonlocal learning. Thus, sparse memory representations arise from the optimal use of local synaptic learning, a property of synaptic plasticity well confirmed in physiological studies. Gardner's analysis yields this deep insight, but it is not constructive; for instance, it only takes into account the learning process and not the recall process. Thus, the question remained:

  • Which concrete associative memory models process sparse memory representations efficiently?

A general analysis of local learning rules, assessing the capacity of storage and retrieval in a pattern association task, is described in (Sommer 1993; Palm & Sommer 1996). For sparse memory patterns, the analysis characterizes the class of efficient local learning rules. How different superposition schemes for memory traces (in particular, linear superposition as in the Hopfield model and clipped superposition as in the Willshaw model) compare in terms of efficiency in sparse pattern recognition is analyzed in (Palm & Sommer 1992; Sommer 1993). Both notions are illustrated in the sketch below.
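To fix ideas: for binary patterns, a local learning rule is fully specified by four coefficients, one per combination of pre- and postsynaptic activity, and the two superposition schemes differ only in how the per-pattern increments are combined. An illustrative Python sketch (the coefficient values are arbitrary assumptions, not the optimal rules derived in the papers):

    import numpy as np

    rng = np.random.default_rng(2)
    n, k, m = 200, 10, 30
    patterns = np.zeros((m, n), dtype=int)
    for mu in range(m):
        patterns[mu, rng.choice(n, k, replace=False)] = 1

    # A local rule assigns one weight increment per (pre, post) activity pair:
    # R[pre, post]; this Hebb-type example rewards only coincident activity.
    R = np.array([[0.0, 0.0],
                  [0.0, 1.0]])   # illustrative coefficients

    # Linear superposition (Hopfield-type): increments are summed over patterns.
    W_linear = np.zeros((n, n))
    for x in patterns:
        W_linear += R[x[:, None], x[None, :]]

    # Clipped superposition (Willshaw-type): each weight saturates at 1.
    W_clipped = (W_linear > 0).astype(int)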

The analyses of sparse associative memory indicate that the classical Willshaw-Steinbuch model (Steinbuch 1961; Willshaw et al. 1969) is among the most efficient models. However, (Palm & Sommer 1992; Sommer 1993) show for this model that learning provides a higher capacity than retrieval, i.e., retrieval in the original model is an information bottleneck. This result raises the question of whether the Willshaw model can be improved by modified retrieval.

The autoassociative Willshaw model with iterative retrieval was analyzed in (Sommer 1993; Schwenker, Sommer & Palm 1996). It is shown that the modified retrieval retains the asymptotic information capacity of the original model. For (large) finite-sized networks, however, iterative retrieval has the following advantages: 1) a significant increase in recall precision; 2) the asymptotic capacity value is approached already at moderate network sizes, whereas the original model does not reach asymptotic performance at practical network sizes; 3) iterative retrieval is fast, the typical number of required iteration steps being low (<4).
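A minimal sketch of iterative retrieval (illustrative Python; the simple threshold strategy is an assumption, not the exact strategy analyzed in the papers): the one-step threshold recall is repeated, feeding each result back as the next cue, until a fixed point is reached.

    import numpy as np

    rng = np.random.default_rng(3)
    n, k, m = 200, 10, 20
    patterns = np.zeros((m, n), dtype=int)
    for mu in range(m):
        patterns[mu, rng.choice(n, k, replace=False)] = 1

    W = np.zeros((n, n), dtype=int)
    for x in patterns:
        W |= np.outer(x, x)          # clipped Hebbian storage

    def iterative_retrieval(W, cue, max_steps=10):
        """Repeat threshold recall until the network state stops changing."""
        x = cue
        for step in range(max_steps):
            # Assumed threshold strategy: a unit fires if it receives input
            # from every currently active unit.
            x_new = (W @ x >= x.sum()).astype(int)
            if np.array_equal(x_new, x):
                return x, step
            x = x_new
        return x, max_steps

    cue = patterns[0].copy()
    cue[np.flatnonzero(cue)[: k // 2]] = 0    # noisy cue: half the pattern deleted
    recall, steps = iterative_retrieval(W, cue)
    print("converged after", steps, "iteration(s)")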

In bidirectional associative memories (Kosko 1988) with sparse patterns, naive iterative retrieval does not provide the same improvement as for autoassociation. (Sommer & Palm 1998, Sommer & Palm 1999) explain why this is, and suggest a novel, very efficient iterative retrieval scheme for bidirectional associative memories, called crosswise bidirectional retrieval (see also below).

Having identified efficient instances of sparse associative memory models, one can use them in models of neuronal circuits of the brain.

  • What are the properties of cell assemblies?

If Hebb's theory were true and brain function were based on cell assemblies, what would their properties be; that is, how many cells typically form an assembly, and how many assemblies "fit" into a local circuit of cortical tissue? (Sommer 2000) analyzes a model of a square millimeter of cortex (the number of neurons and the connection densities were taken from neuroanatomical studies; cell excitability was estimated from physiological studies). The study reveals that the local synapses are used most efficiently if assemblies comprise a few hundred cells and the number of assemblies lies between ten and sixty thousand. The incomplete connectivity of the network gives rise to an interesting extension in functionality: a small set of assemblies (~5) can be recalled simultaneously, not just a single one as in classical associative memories. A back-of-envelope version of such an estimate is sketched below.
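The order of magnitude of such numbers can be checked with the classic capacity estimate for clipped Hebbian storage; the parameter values below are rough assumptions for illustration, not the figures used in (Sommer 2000):

    import numpy as np

    # Assumed, rounded parameters for ~1 square millimeter of cortex:
    n = 100_000   # neurons
    c = 0.1       # local connection probability
    k = 300       # cells per assembly

    # Heuristic Willshaw-type estimate: the number of storable patterns
    # scales roughly like ln(2) * c * n^2 / k^2.
    m_max = np.log(2) * c * n**2 / k**2
    print(f"~{m_max:,.0f} assemblies")   # roughly 10^3 - 10^4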

  • How is associative memory reflected in the temporal structure of neural activity?

Simulation studies with associative networks of conductance-based spiking neurons (two-compartment neurons a la Pinsky & Rinzel, 1994) are described in (Sommer & Wennekers 2000, Sommer & Wennekers 2001). They reveal that associative memory recall can be completed extremely fast, that is, within 25-60 ms. Gamma oscillations can indicate iterative recall (which reaches higher retrieval precision), with latencies of 60-260 ms.

Models of sparse coding of visual input

see (Rehn & Sommer 2005) and (Rehn & Sommer 2006).

Memory-based models of cognition

see Sommer & Kanverva (2006).

Organization of meso- and macroscopic activity patterns in the brain

While the neural network models described in the previous section help in understanding the computations of local brain circuits, cognitive functions ultimately rely on the meso- and macroscopic organization of neural activity in the brain. The studies in this section address how macroscopic activity flow can establish cooperative interactions even between remote brain regions.

  • Can large cell assemblies be integrated by cortico-cortical projections?

Reciprocal connectivity is the most common type of cortico-cortical projection reported by neuroanatomical tracer studies. Thus it is likely that reciprocal connections play an important role in the large-scale integration of neural representations or cell assemblies. (Sommer & Wennekers 2003) lays out how bidirectional association in reciprocal projections could provide such integration, and how this ties into earlier work on distributed representations, such as the theories of Wickelgren, Edelman, Damasio, Mesulam and others.

Macroscopically distributed cell assemblies would form easily if a single reciprocal connection alone could express associative memory function. In (Sommer & Wennekers 2000) a bidirectional associative memory model with conductance-based neurons is investigated that, in fact, performs efficiently. A more abstract model that is very robust with respect to crosstalk, and therefore might be a good computational model of a cortico-cortical projection, is proposed in (Sommer & Palm 1998, Sommer, Wennekers & Palm 1998, Sommer & Palm 1999).

  • What causes coherent oscillations in distant brain regions? Do they require learning?

In recordings of neuronal activity, coherent oscillations mostly occur in phase, even if the recording sites in cortex are far apart from each other. For fast (gamma-range) oscillations this finding is puzzling, given the large delay times reported for long-range projections. Modeling studies using reciprocal excitatory couplings with such delay times predict anti-phase rather than in-phase correlation. In (Knoblauch & Sommer 2002, Knoblauch & Sommer 2003) the conditions are studied under which reciprocal cortical connections with realistic delays can express coherent gamma oscillations. It is demonstrated that learning based on spike-timing-dependent synaptic plasticity (Markram et al. 1997, Poo et al. 1998) can provide robust zero-lag coherence over long-range projections ("zero-lag links").
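For reference, the generic form of spike-timing-dependent plasticity used in many such modeling studies (a sketch with assumed parameter values, not the exact rule of the cited papers) weights each pre/post spike pairing by an exponential time window:

    import numpy as np

    def stdp_update(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Weight change for one spike pair with dt = t_post - t_pre (ms).

        Pre-before-post (dt > 0) potentiates; post-before-pre depresses.
        Amplitudes and time constant are illustrative assumptions.
        """
        if dt > 0:
            return a_plus * np.exp(-dt / tau)
        return -a_minus * np.exp(dt / tau)

    # With a conduction delay d, in-phase firing across a reciprocal
    # projection produces pairings near dt = +d and dt = -d; STDP weakens
    # the depressing pairings and retains the potentiating ones.
    print(stdp_update(10.0), stdp_update(-10.0))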

  • How can macroscopic activity patterns form through cortico-cortical connections?

Neuronography experiments (McCulloch et al., Pribram et al.) revealed that epileptiform activity elicited by local application of strychnine entails persistent patterns of activity involving many brain areas. (Sommer & Koetter 1997, Koetter & Sommer 2000) investigate in a computer model the relation between the anatomy of cortico-cortical projections and the expression of persistent macroscopic activity patterns. In the model, the connection weights between brain areas are either simple cortex connectivity schemes, such as nearest-neighbor connections, or data on cortico-cortical projections gathered by neuroanatomical tracer studies and collated in the CoCoMac database. The comparison between different connectivity schemes shows that the neuroanatomical data explain the measured activity patterns best. It is concluded that long-range connections are crucial in the formation of the patterns observed experimentally. Furthermore, the simulations indicate multisynaptic reverberating activity propagation and clearly rule out the hypothesis that monosynaptic spread alone would produce the patterns, as had been speculated in the experimental literature. (V. Schmitt et al. 2003) investigates the influence of thalamocortical connections in a similar model.
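The core of such a simulation can be sketched in a few lines (illustrative Python; the random toy connection matrix and threshold dynamics are assumptions standing in for the CoCoMac-derived weights of the actual model): activity spreads iteratively over a graph of areas until a persistent macroscopic pattern emerges.

    import numpy as np

    rng = np.random.default_rng(4)
    n_areas = 30

    # Toy inter-areal weight matrix standing in for tracer-study data:
    # sparse, asymmetric long-range connections.
    W = (rng.random((n_areas, n_areas)) < 0.15).astype(float)
    np.fill_diagonal(W, 0.0)

    def propagate(W, seed_area, theta=1.0, steps=20):
        """Iterate thresholded activity spread from one locally excited area."""
        a = np.zeros(n_areas)
        a[seed_area] = 1.0
        for _ in range(steps):
            # Multisynaptic spread: areas recruited at step t can recruit
            # further areas at step t+1; recruited areas stay active.
            a = np.maximum(a, (W @ a >= theta).astype(float))
        return a

    pattern = propagate(W, seed_area=0)
    print("areas recruited:", int(pattern.sum()))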

  • How to reveal the organization of neuronal activity by neuroimaging?

Imaging methods like positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) provide the first (albeit indirect) windows onto macroscopic activation patterns in the working brain. The spatio-temporal data sets provided by these methods are usually searched for functional activity using regression analysis based on temporal response shapes estimated from the timing of the experimental paradigm. However, for short-lasting events and for most cognitive tasks the temporal shape cannot be reliably predicted. In these cases, the detection of functional activity requires analysis methods with weaker assumptions about the signal time course. (Baune et al. 1999) describes a new cluster analysis method for detecting regions of fMRI activation. The method requires no information about the time course of the activation; it is applied to detect timing differences between the activation of the supplementary motor cortex and the motor cortex during a voluntary movement task.
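In spirit, such an exploratory analysis clusters voxels by the similarity of their raw time courses instead of regressing them onto a predicted response. A minimal sketch (illustrative Python using generic k-means on synthetic data; this is a stand-in, not the specific algorithm of Baune et al.):

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(5)
    n_voxels, n_scans = 1000, 120

    # Synthetic fMRI-like data: most voxels are noise; a subset carries a
    # delayed box-car response whose onset is treated as unknown.
    data = rng.normal(size=(n_voxels, n_scans))
    response = np.zeros(n_scans)
    response[40:80] = 1.0
    data[:50] += 3.0 * response          # the "active" voxels

    # Cluster z-scored time courses; active voxels fall into one cluster,
    # and that cluster's centroid reveals the activation timing.
    z = (data - data.mean(1, keepdims=True)) / data.std(1, keepdims=True)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)
    active = int(np.argmax([z[labels == c].mean(0).std() for c in range(2)]))
    print("voxels in active cluster:", int((labels == active).sum()))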

(Wichert et al. 2003) describes the extension of the method of Baune et al. to event-related designs. A new combination of experimental design and data processing is proposed that yields data volumes in which all slices are perfectly timed. This avoids the artifacts introduced by the usual preprocessing methods based on phase shifting. In (Wichert et al. 2003) the exploratory method is applied to reveal functional activity during an n-back working memory task.

In (Baune et al. 2001, Ruckgaber et al. 2001) a cluster analysis method was developed to detect microglia activation, a very sensitive indicator of brain lesions.

Mathematical analysis of associative memories

Theory of neural associative memory

  • Bayesian theory of associative memory

An attempt to tame the zoo of associative memory models proposed in the literature is the Bayesian theory of associative memory described in (Sommer & Dayan 1998). In this theory, the optimal retrieval dynamics is derived from the uncertainties about the input pattern and the synaptic weights. Our analysis explains the success of many model modifications that had been proposed on heuristic grounds, for instance, the addition of a ferromagnetic term, site-dependent thresholds, diagonal terms, various threshold strategies, etc.
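The underlying logic can be stated compactly. In schematic form (illustrative notation, assuming cue noise and weight noise are independent given the stored pattern; not the exact formulation of the paper), retrieval is Bayesian inference of the stored pattern x from a noisy cue x~ and the learned weights W:

    \hat{x} = \arg\max_x \, P(x \mid \tilde{x}, W)
            = \arg\max_x \, P(\tilde{x} \mid x)\, P(W \mid x)\, P(x)

Expanding the log-posterior for specific noise and weight models yields the retrieval dynamics; terms such as a ferromagnetic bias or site-dependent thresholds then follow from particular priors instead of being added ad hoc.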

  • Combinatorial analysis of the Willshaw model

The full combinatorial analysis of the finite Willshaw model can be found in (Sommer & Palm 1999). It predicts distributions of the dendritic potentials and retrieval errors for arbitrary network sizes and all possible types of input noise.

  • Signal-to-noise analysis of local synaptic learning rules

A general signal-to-noise analysis of local learning rules is given in (Sommer 1993; Palm & Sommer 1996). The final result is basically one formula, equation (3.23) in (Palm & Sommer 1996), which calculates the S/N for arbitrary learning rules, sparseness levels and input errors. These papers also contain the full information-theoretic treatment of learning and retrieval in associative memories that led to the new definitions of information capacity mentioned above.
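The generic quantity behind such an analysis, shown here only in its standard textbook form (the actual equation (3.23) is more specific), compares the dendritic potentials s of neurons that should fire with those of neurons that should stay silent:

    \mathrm{S/N} = \frac{\bigl(\mathbb{E}[s \mid \text{on}] - \mathbb{E}[s \mid \text{off}]\bigr)^2}{\mathrm{Var}[s]}

A high S/N means a threshold can separate the two populations, and hence retrieval produces few errors.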

  • Asymptotic analysis of sparse Hopfield and Willshaw models performing pattern recognition

The asymptotic analysis of the sparse Hopfield and Willshaw models is provided in (Palm & Sommer 1992). We use elementary analysis and information theory, avoiding the cumbersome replica trick used in an earlier analysis of the Hopfield model (Tsodyks & Feigelman 1988).