VS298 (Fall 06): Neural Computation

People

Professor: Bruno Olshausen

  • Email: baolshausen AT berkeley DOT edu
  • Office: 10 Giannini, 3-1472
  • Office hours: TBD

GSI: Amir Khosrowshahi

  • Email: amirk AT berkeley DOT edu
  • Office: 523 Minor, 3-5996
  • Office hours: TBD

Course description

This is a 3-unit course that provides an introduction to the theory of neural computation. The goal is to familiarize students with the major theoretical frameworks and models used in neuroscience and psychology, and to provide hands-on experience in using these models.

This course differs from MCB 262, Advanced Topics in Systems Neuroscience, in that it emphasizes the theoretical underpinnings of models - i.e., their mathematical and computational properties - rather than their application to the analysis of neuroscientific data. It will be offered in alternate years, interleaving with MCB 262. Students interested in computational neuroscience are encouraged to take both of these courses as they complement each other.

Lectures

  • Location: TBD
  • Times: Two 1.5-hour lectures per week.

There will be an organizational meeting at 4pm on Tuesday, August 29th, at the Redwood Center in 10 Giannini (red dot on this map: http://redwood.berkeley.edu/wiki/Image:10_Giannini.jpeg). Class will begin the week of September 4th.

Email list and forum

  • Please email the GSI to be added to the class email list.
  • A bulletin board is provided for discussion of lecture material, readings, and problem sets.

Grading

Based on weekly homework assignments (60%) and a final project (40%).

Required background

Prerequisites are calculus, ordinary differential equations, basic probability and statistics, and linear algebra. Familiarity with programming in a high-level language, ideally Matlab, is also required.

Textbooks

  • [HKP] Hertz, J., Krogh, A., and Palmer, R.G. Introduction to the Theory of Neural Computation. Amazon
  • [DJCM] MacKay, D.J.C. Information Theory, Inference and Learning Algorithms. Available online at http://www.inference.phy.cam.ac.uk/mackay/itila/book.html or on Amazon
  • [DA] Dayan, P. and Abbott, L.F. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Amazon

Additional reading, such as primary source material, will be suggested on a lecture-by-lecture basis.

Additional resources

  • See http://redwood.berkeley.edu/wiki/VS298:_Additional_resources

Homework

  • See http://redwood.berkeley.edu/wiki/VS298:_Homework_assignments

Syllabus

Introduction

  1. Theory and modeling in neuroscience
  2. Descriptive vs. functional models
  3. Turing vs. neural computation
  • Reading: HKP chapter 1

Linear neuron models

  1. Linear systems: vectors, matrices, linear neuron models
  2. Perceptron model and linear separability
  • Reading: HKP chapter 5, DJCM chapters 38-40
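
As a toy illustration of the linear neuron and perceptron topics listed above: a linear neuron computes a weighted sum of its inputs, and a perceptron thresholds that sum, so it can only separate linearly separable classes. The sketch below is in Python/NumPy rather than the course's Matlab, and the weights and data are purely illustrative.

  import numpy as np

  # Linear neuron: output is a weighted sum of the inputs plus a bias.
  def linear_neuron(x, w, b):
      return np.dot(w, x) + b

  # Perceptron: threshold the weighted sum to get a binary decision.
  def perceptron(x, w, b):
      return 1 if linear_neuron(x, w, b) > 0 else 0

  # AND is linearly separable: one weight vector handles all four inputs.
  w_and, b_and = np.array([1.0, 1.0]), -1.5
  for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
      print(x, perceptron(np.array(x), w_and, b_and))
  # XOR is not linearly separable: no single (w, b) reproduces its truth table.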

Supervised learning

  1. Perceptron learning rule
  2. Adaptation in linear neurons, Widrow-Hoff rule
  3. Objective functions and gradient descent
  4. Multilayer networks and backpropagation
  • Reading: HKP chapters 6-7, DJCM chapters 38-40, 44
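
A minimal sketch of the Widrow-Hoff (LMS) rule listed above: online gradient descent on the squared error of a single linear neuron. Python/NumPy, with made-up data; not part of the course material.

  import numpy as np

  rng = np.random.default_rng(0)

  # Toy regression data: targets come from a known linear map plus noise.
  X = rng.normal(size=(200, 3))
  w_true = np.array([0.5, -1.0, 2.0])
  t = X @ w_true + 0.1 * rng.normal(size=200)

  # Widrow-Hoff (LMS) rule: for a linear neuron y = w.x, online gradient
  # descent on the squared error gives the update  dw = eta * (t - y) * x.
  w = np.zeros(3)
  eta = 0.01
  for epoch in range(20):
      for x_i, t_i in zip(X, t):
          y_i = w @ x_i
          w += eta * (t_i - y_i) * x_i

  print("learned:", np.round(w, 2), "true:", w_true)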

Reinforcement learning

  1. Theory of associative reward-penalty
  2. Models and critics
  • Reading: HKP chapter 8, DJCM chapter 36, DA chapter 9
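
A hedged sketch in the spirit of the associative reward-penalty unit listed above, reduced to the simpler reward-inaction special case: a stochastic binary neuron makes rewarded actions more probable in the context in which they were taken. The task, constants, and variable names are invented for illustration (Python/NumPy).

  import numpy as np

  rng = np.random.default_rng(1)

  def sigmoid(u):
      return 1.0 / (1.0 + np.exp(-u))

  # Stochastic binary unit: fires (y = 1) with probability p = sigmoid(w.x).
  # Reward r = 1 if the action was correct for the current input, else r = 0.
  # Simplified reward-inaction update:  dw = eta * r * (y - p) * x,
  # i.e. rewarded actions become more probable in the context where they occurred.
  w = np.zeros(2)
  eta = 0.5
  for trial in range(2000):
      x = np.array([rng.choice([-1.0, 1.0]), 1.0])   # one input feature plus a bias
      p = sigmoid(w @ x)
      y = 1 if rng.random() < p else 0
      correct = 1 if x[0] > 0 else 0                 # task rule, unknown to the unit
      r = 1 if y == correct else 0
      w += eta * r * (y - p) * x

  # After learning, the unit fires mostly when x[0] = +1 and rarely when x[0] = -1.
  print(sigmoid(w @ np.array([1.0, 1.0])), sigmoid(w @ np.array([-1.0, 1.0])))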

Unsupervised learning

  1. Linear Hebbian learning and PCA, decorrelation
  2. Winner-take-all networks and clustering
  3. Sparse, distributed coding
  • Reading: HKP chapter 8, DJCM chapter 36, DA chapters 8 and 10
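
A small sketch of linear Hebbian learning and PCA from the list above, using Oja's rule: a Hebbian update with a decay term that keeps the weight norm bounded, so the weight vector converges (up to sign) to the first principal component of the inputs. Python/NumPy on synthetic data.

  import numpy as np

  rng = np.random.default_rng(2)

  # Correlated 2-D data whose leading principal component lies along [1, 1].
  C = np.array([[3.0, 2.0], [2.0, 3.0]])
  X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

  # Oja's rule: Hebbian learning (dw ~ y * x) with a decay term (-y^2 * w)
  # that keeps |w| bounded; w converges, up to sign, to the first principal
  # component of the input data.
  w = rng.normal(size=2)
  eta = 0.001
  for x in X:
      y = w @ x
      w += eta * y * (x - y * w)

  print("Oja weight (normalized):", w / np.linalg.norm(w))
  print("top eigenvector of C:   ", np.linalg.eigh(C)[1][:, -1])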

Plasticity and cortical maps

  1. Self-organizing maps, Kohonen nets
  2. Models of experience-dependent learning and cortical reorganization
  • Reading: HKP chapter 9, DA chapter 8
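
A toy sketch of a one-dimensional Kohonen self-organizing map from the list above: each input updates the best-matching unit and its neighbors on the map, so nearby units come to represent nearby inputs. Python/NumPy; the learning rate and neighborhood width are arbitrary choices for illustration.

  import numpy as np

  rng = np.random.default_rng(3)

  # 1-D Kohonen self-organizing map: a chain of units whose scalar weights come
  # to tile the input interval, with neighboring units representing nearby inputs.
  n_units = 20
  w = rng.random(n_units)           # one scalar weight per unit, inputs in [0, 1]
  positions = np.arange(n_units)    # positions of the units along the map

  eta, sigma = 0.1, 2.0             # learning rate and neighborhood width
  for step in range(5000):          # (in practice eta and sigma are annealed)
      x = rng.random()                          # sample an input from [0, 1]
      winner = np.argmin(np.abs(w - x))         # best-matching unit
      h = np.exp(-(positions - winner) ** 2 / (2 * sigma ** 2))   # neighborhood
      w += eta * h * (x - w)

  print(np.round(w, 2))   # weights end up roughly ordered along the map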

Recurrent networks

  1. Hopfield networks
  2. Pattern completion
  3. Line attractors and 'bump circuits'
  4. Models of associative memory
  • Reading: HKP chapters 2-3, DJCM chapter 42, DA chapter 7
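
A compact sketch of a Hopfield network and pattern completion from the list above: random binary patterns are stored with a Hebbian outer-product rule and recovered from corrupted probes by iterating the network dynamics. Python/NumPy; the network size and corruption level are illustrative.

  import numpy as np

  rng = np.random.default_rng(4)

  # Hopfield network: store binary (+1/-1) patterns in a symmetric weight matrix
  # with a Hebbian outer-product rule, then recall them by iterating the dynamics.
  N, P = 100, 5
  patterns = rng.choice([-1, 1], size=(P, N))

  W = (patterns.T @ patterns) / N       # Hebbian storage
  np.fill_diagonal(W, 0)                # no self-connections

  def recall(s, n_steps=10):
      # Synchronous updates for brevity; asynchronous updates are also standard.
      for _ in range(n_steps):
          s = np.sign(W @ s)
          s[s == 0] = 1
      return s

  # Pattern completion: flip 20% of the bits of a stored pattern, then recall.
  probe = patterns[0].copy()
  flip = rng.choice(N, size=20, replace=False)
  probe[flip] *= -1
  print("overlap with stored pattern:", recall(probe) @ patterns[0] / N)   # ~1.0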

Probabilistic models and inference

  1. Probability theory and Bayes’ rule
  2. Learning and inference in generative models
  3. The mixture of Gaussians model
  4. Boltzmann machines
  5. Sparse coding and ‘ICA’
  • Reading: DJCM chapters 1-3, 20-24, 41, 43, DA chapter 10
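
A short sketch of learning and inference in a generative model, using the mixture-of-Gaussians example from the list above: the EM algorithm alternates between computing posterior responsibilities (inference) and re-estimating the parameters (learning). Python/NumPy on synthetic one-dimensional data; everything here is illustrative.

  import numpy as np

  rng = np.random.default_rng(5)

  # Synthetic 1-D data drawn from a two-component mixture of Gaussians.
  x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 1.0, 700)])

  K = 2
  pi = np.ones(K) / K              # mixing proportions
  mu = rng.choice(x, size=K)       # component means, initialized from the data
  var = np.ones(K)                 # component variances

  def gauss(x, mu, var):
      return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

  for it in range(100):
      # E-step (inference): posterior responsibility of each component per point.
      r = pi * gauss(x[:, None], mu, var)          # shape (N, K)
      r /= r.sum(axis=1, keepdims=True)
      # M-step (learning): re-estimate the parameters from the responsibilities.
      Nk = r.sum(axis=0)
      pi = Nk / len(x)
      mu = (r * x[:, None]).sum(axis=0) / Nk
      var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

  print("pi:", np.round(pi, 2), "mu:", np.round(mu, 2), "var:", np.round(var, 2))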

Neural implementations

  1. Integrate-and-fire model
  2. Neural encoding and decoding
  3. Limits of precision in neurons
  • Reading: DA chapters 1-4 and section 5.4
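
A minimal sketch of the leaky integrate-and-fire model listed above, integrated with forward Euler: the membrane potential decays toward rest, is driven by an injected current, and is reset after each threshold crossing. Plain Python; the membrane parameters are generic textbook-style values chosen only for illustration.

  # Leaky integrate-and-fire neuron, integrated with forward Euler:
  #   tau * dV/dt = -(V - V_rest) + R * I
  # A spike is recorded when V crosses threshold, and V is then reset.
  tau, R = 20e-3, 1e7                              # time constant (s), resistance (ohm)
  V_rest, V_th, V_reset = -70e-3, -54e-3, -70e-3   # rest, threshold, reset (V)
  dt, T = 1e-4, 0.5                                # time step and duration (s)
  I = 2e-9                                         # constant injected current (A)

  V = V_rest
  spike_times = []
  for step in range(int(round(T / dt))):
      V += (dt / tau) * (-(V - V_rest) + R * I)
      if V >= V_th:
          spike_times.append(step * dt)
          V = V_reset

  print("firing rate: %.1f Hz" % (len(spike_times) / T))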