VS298 (Fall 06): Neural Computation
Professor: Bruno Olshausen
- Email: baolshausen AT berkeley DOT edu
- Office: 10 Giannini
- Office hours: TBD
GSI: Amir Khosrowshahi
- Email: amirk AT berkeley DOT edu
- Office: 523 Minor, 3-5996
- Office hours: TBD
This is a 3-unit course that provides an introduction to theoretical aspects of neural computation.
- Location: TBD
- Times: Two 1.5-hour lectures per week.
We will have an organizational meeting during the first week of class to determine a good time to meet.
Email list and forum
- Please email the GSI to be added to the class email list.
- A bulletin board is provided here for discussion regarding lecture material, readings, and problem sets.
There will be weekly homework assignments and a final project.
Prerequisites are calculus, ordinary differential equations, basic probability and statistics, and linear algebra. Familiarity with programming in a high-level language, ideally Matlab, is also required.
Textbooks
- [HKP] Hertz, J., Krogh, A., and Palmer, R.G. Introduction to the Theory of Neural Computation. Amazon
- [DJCM] MacKay, D.J.C. Information Theory, Inference and Learning Algorithms. Available online or from Amazon
- [DA] Dayan, P. and Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Amazon
Additional reading, such as primary source material, will be suggested on a lecture-by-lecture basis.
Introduction
- Theory and modeling in neuroscience
- Descriptive vs. functional models
- Turing vs. neural computation
- Reading: HKP chapter 1
Linear neuron models
- Linear systems: vectors, matrices, linear neuron models
- Perceptron model and linear separability
- Reading: HKP chapter 5, DJCM chapters 38-40
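To give a flavor of this topic, here is a minimal sketch of the perceptron learning rule on a linearly separable toy problem (logical AND). It is written in Python purely for illustration (the course itself uses Matlab), and the data, learning rate, and epoch count are invented for the example:

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=100):
    """Perceptron learning rule: w += lr * (target - prediction) * x."""
    # Append a constant 1 to each input to absorb the bias term.
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, ti in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            if pred != ti:
                w += lr * (ti - pred) * xi
                errors += 1
        if errors == 0:      # converged on a separating hyperplane
            break
    return w

# AND is linearly separable, so the perceptron convergence theorem applies.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w = train_perceptron(X, y)
preds = [1 if np.dot(np.append(xi, 1), w) > 0 else 0 for xi in X]
```

XOR, by contrast, is not linearly separable, and the same loop would never converge on it, which motivates the multilayer networks treated later.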
- Adaptation in linear neurons
- Widrow-Hoff rule
- Objective functions and gradient descent
- Multilayer networks and backpropagation
- Reading: HKP chapter 6, 7, DJCM chapters 38-40
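As a sketch of the Widrow-Hoff (LMS) rule, viewed as online gradient descent on the squared-error objective E = ½(t − w·x)², here is a toy Python example; the synthetic data and learning rate are invented for illustration, not taken from the course:

```python
import numpy as np

# Widrow-Hoff (LMS) rule: gradient descent on E = 0.5 * (t - w.x)^2
# gives the online update w += lr * (t - w.x) * x.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])          # weights to be recovered (toy data)
X = rng.normal(size=(200, 2))
t = X @ true_w                          # noiseless linear targets

w = np.zeros(2)
lr = 0.05
for epoch in range(50):
    for xi, ti in zip(X, t):
        err = ti - xi @ w               # prediction error on one example
        w += lr * err * xi              # online gradient step

# On this noiseless linear data, w converges to true_w.
```

Backpropagation generalizes the same gradient-descent idea to multilayer networks by applying the chain rule through the hidden layers.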
- Theory of associative reward-penalty
- Models and critics
- Reading: HKP chapter 8, DJCM chapter 36, DA chapter 9
- Linear Hebbian learning and PCA, decorrelation
- Winner-take-all networks and clustering
- Sparse, distributed coding
- Reading: HKP chapter 8, DJCM chapter 36, DA chapter 8
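One concrete link between Hebbian learning and PCA is Oja's rule, in which a plain Hebbian term plus a normalizing decay drives a single linear neuron's weights to the first principal component of its input. A hedged Python sketch (covariance, learning rate, and sample count invented for illustration):

```python
import numpy as np

# Oja's rule: w += lr * y * (x - y*w), where y = w.x is the neuron's
# output.  The decay term -lr*y^2*w keeps ||w|| near 1, and w converges
# to the top eigenvector of the input covariance (up to sign).
rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])              # toy input covariance
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)                  # random initial weights
lr = 0.01
for x in X:
    y = w @ x                           # linear neuron output
    w += lr * y * (x - y * w)           # Hebbian term + normalizing decay

# Compare against the top eigenvector of C.
evals, evecs = np.linalg.eigh(C)
pc1 = evecs[:, np.argmax(evals)]
```

After training, the cosine between w and pc1 is close to ±1, i.e. the neuron has learned the direction of maximum input variance.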
Plasticity and cortical maps
- Self-organizing maps, Kohonen nets
- Models of experience dependent learning and cortical reorganization
Hopfield networks
- Pattern completion
- Line attractors and ‘bump circuits’
- Models of associative memory
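As a taste of pattern completion in a Hopfield network, the sketch below stores two patterns with the Hebbian outer-product rule and recovers one of them from a corrupted probe. The patterns and network size are invented for the example, and Python stands in for Matlab:

```python
import numpy as np

# A Hopfield net stores +/-1 patterns in a symmetric weight matrix via
# the outer-product (Hebbian) rule, then completes a corrupted pattern
# by iterating s_i = sign(sum_j W_ij s_j).
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, 1, 1, -1, -1, -1, -1]])
N = patterns.shape[1]
W = (patterns.T @ patterns) / N         # Hebbian outer-product storage
np.fill_diagonal(W, 0)                  # no self-connections

def recall(s, steps=5):
    s = s.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1) # synchronous sign update
    return s

probe = patterns[0].copy()
probe[:2] *= -1                         # corrupt two bits
completed = recall(probe)               # falls back into the stored attractor
```

The stored patterns act as attractors of the dynamics: states within their basins are pulled back to the memorized pattern, which is the essence of associative memory.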
Probabilistic models and inference
- Probability theory and Bayes’ rule
- Learning and inference in generative models
- The mixture of Gaussians model
- Boltzmann machines
- Sparse coding and ‘ICA’
Spiking neurons and neural coding
- Integrate-and-fire model
- Neural encoding and decoding
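A minimal sketch of the leaky integrate-and-fire model, simulated by forward-Euler integration; the membrane parameters and input current below are illustrative textbook-scale values, not ones specified by the course:

```python
import numpy as np

# Leaky integrate-and-fire: tau * dV/dt = -(V - V_rest) + R*I,
# with a spike and reset whenever V crosses threshold.
tau, V_rest, V_reset, V_th, R = 20e-3, -70e-3, -70e-3, -54e-3, 1e7
dt, T = 1e-4, 0.5                       # 0.1 ms steps, 0.5 s of simulation
I = 2e-9                                # constant 2 nA input current

V = V_rest
spikes = 0
for _ in range(int(T / dt)):
    dV = (-(V - V_rest) + R * I) / tau
    V += dt * dV                        # forward-Euler integration step
    if V >= V_th:
        spikes += 1
        V = V_reset                     # fire and reset

# R*I drives V 20 mV above rest against a 16 mV threshold gap, so the
# neuron fires regularly (about every tau*ln(5) ~ 32 ms here).
```

Plotting spike counts against input current I gives the model's f-I curve, a simple example of the encoding of a stimulus variable in firing rate.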