VS298 (Fall 06): Syllabus

Syllabus

Introduction

  1. Theory and modeling in neuroscience
  2. Descriptive vs. functional models
  3. Turing vs. neural computation
  • Reading: HKP chapter 1

Linear neuron models

  1. Linear systems: vectors, matrices, linear neuron models (sketch below)
  2. Perceptron model and linear separability
  • Reading: HKP chapter 5, DJCM chapters 38-40
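
As a quick numerical illustration of this unit (not part of the course materials), the minimal sketch below shows a small population of linear neurons computing y = Wx, with a zero threshold turning each unit into a perceptron-style binary response. The dimensions and random weights are arbitrary choices.

```python
import numpy as np

# A layer of linear neurons computes a matrix-vector product: y = W x.
# Thresholding each output at zero gives perceptron-style binary units.
# Dimensions and weight values are arbitrary, for illustration only.
rng = np.random.default_rng(0)

n_inputs = 4     # number of input channels
n_neurons = 3    # number of linear neurons in the layer

W = rng.normal(size=(n_neurons, n_inputs))   # synaptic weight matrix
x = rng.normal(size=n_inputs)                # input vector

y = W @ x                       # each neuron outputs a weighted sum of its inputs
binary = (y > 0).astype(int)    # thresholded (perceptron-style) responses

print("linear outputs:", np.round(y, 2))
print("thresholded:   ", binary)
```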

Supervised learning

  1. Perceptron learning rule
  2. Adaptation in linear neurons, Widrow-Hoff rule (sketch below)
  3. Objective functions and gradient descent
  4. Multilayer networks and backpropagation
  • Reading: HKP chapters 6-7, DJCM chapters 38-40, 44
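
The Widrow-Hoff rule is easy to demonstrate numerically. Below is a minimal sketch, not taken from the course materials: a single linear neuron trained with the LMS update w <- w + eta*(t - y)*x on synthetic data. The teacher weights, noise level, learning rate, and number of passes are made-up values.

```python
import numpy as np

# Widrow-Hoff (LMS) rule: w <- w + eta * (t - y) * x for a linear neuron y = w.x.
# This is stochastic gradient descent on the squared error (t - y)^2 / 2.
# The data, learning rate, and "teacher" weights are arbitrary, for illustration.
rng = np.random.default_rng(1)

n_inputs, n_samples = 3, 200
w_true = np.array([0.5, -1.0, 2.0])                  # teacher weights generating targets
X = rng.normal(size=(n_samples, n_inputs))           # inputs
t = X @ w_true + 0.1 * rng.normal(size=n_samples)    # noisy targets

w = np.zeros(n_inputs)   # learned weights
eta = 0.05               # learning rate

for _ in range(20):                       # several passes through the data
    for x, target in zip(X, t):
        y = w @ x                         # linear neuron output
        w += eta * (target - y) * x       # Widrow-Hoff update

print("true weights:   ", w_true)
print("learned weights:", np.round(w, 2))
```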

Reinforcement learning

  1. Classical conditioning and Rescorla-Wagner rule
  2. Temporal difference learning (sketch below)
  3. Actor-critic learning
  • Reading: DA chapter 9
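
As an illustration of temporal difference learning, here is a minimal sketch of tabular TD(0) value prediction on a five-state random walk. The environment, learning rate, and discount factor are arbitrary choices, not part of the course materials.

```python
import numpy as np

# Tabular TD(0): V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s)).
# Environment: a 5-state random walk; episodes start in the middle state and
# end off either side, with reward 1 only when exiting to the right.
# All parameter values here are arbitrary, for illustration.
rng = np.random.default_rng(2)

n_states = 5
V = np.zeros(n_states)   # value estimates for the non-terminal states
alpha, gamma = 0.1, 1.0  # learning rate and discount factor

for _ in range(2000):                        # episodes
    s = n_states // 2                        # start in the middle
    while True:
        s_next = s + rng.choice([-1, 1])     # unbiased random walk
        if s_next < 0:                       # exit left: terminal, reward 0
            r, v_next, done = 0.0, 0.0, True
        elif s_next >= n_states:             # exit right: terminal, reward 1
            r, v_next, done = 1.0, 0.0, True
        else:
            r, v_next, done = 0.0, V[s_next], False
        V[s] += alpha * (r + gamma * v_next - V[s])   # TD(0) update
        if done:
            break
        s = s_next

# For this walk the true values are (i + 1) / 6 for states i = 0..4.
print("estimated values:", np.round(V, 2))
print("true values:     ", np.round((np.arange(n_states) + 1) / 6, 2))
```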

Unsupervised learning

  1. Linear Hebbian learning and PCA, decorrelation (sketch below)
  2. Winner-take-all networks and clustering
  3. Sparse, distributed coding
  • Reading: HKP chapter 8, DJCM chapter 36, DA chapters 8 and 10
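
Below is a minimal sketch of Oja's normalized Hebbian rule for a single linear neuron, whose weight vector converges (up to sign) to the first principal component of its inputs. The input covariance, learning rate, and number of samples are arbitrary choices, not from the course materials.

```python
import numpy as np

# Oja's rule: dw = eta * y * (x - y * w), a normalized Hebbian update that
# drives w toward the first principal component of the input distribution.
# The input covariance, learning rate, and number of steps are arbitrary choices.
rng = np.random.default_rng(3)

# Zero-mean 2-D inputs with most variance along the direction (1, 1)/sqrt(2).
C = np.array([[3.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(C)
X = rng.normal(size=(5000, 2)) @ L.T

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01

for x in X:
    y = w @ x                    # linear neuron output (Hebbian "post" activity)
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with weight decay

# Compare with the leading eigenvector of the sample covariance
# (the two vectors may differ by an overall sign).
evals, evecs = np.linalg.eigh(np.cov(X.T))
pc1 = evecs[:, np.argmax(evals)]
print("Oja weight vector:     ", np.round(w / np.linalg.norm(w), 3))
print("first principal comp.: ", np.round(pc1, 3))
```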

Plasticity and cortical maps

  1. Self-organizing maps, Kohonen nets (sketch below)
  2. Models of experience-dependent learning and cortical reorganization
  • Reading: HKP chapter 9, DA chapter 8
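
A minimal sketch of a one-dimensional Kohonen self-organizing map follows: a chain of units learns a topographic map of points drawn from the unit square. The map size, neighborhood width, and learning-rate schedule are made-up values, not from the course materials.

```python
import numpy as np

# 1-D Kohonen self-organizing map: each input picks a winning unit, and the
# winner and its grid neighbors move their weight vectors toward the input.
# Grid size, neighborhood width, and learning-rate schedule are arbitrary.
rng = np.random.default_rng(4)

n_units = 20
W = rng.uniform(size=(n_units, 2))          # weight vector of each map unit
grid = np.arange(n_units)                   # 1-D map coordinates

n_steps = 5000
for t in range(n_steps):
    x = rng.uniform(size=2)                 # input drawn from the unit square
    winner = np.argmin(np.sum((W - x) ** 2, axis=1))      # best-matching unit

    # Learning rate and neighborhood width both shrink over time.
    eta = 0.5 * (1 - t / n_steps) + 0.01
    sigma = 3.0 * (1 - t / n_steps) + 0.5
    h = np.exp(-(grid - winner) ** 2 / (2 * sigma ** 2))  # neighborhood function

    W += eta * h[:, None] * (x - W)         # move winner and neighbors toward x

# Adjacent units should end up with nearby weight vectors (a topographic map).
print("distance between neighboring units' weights:")
print(np.round(np.linalg.norm(np.diff(W, axis=0), axis=1), 2))
```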

Recurrent networks

  1. Hopfield networks (sketch below)
  2. Pattern completion
  3. Line attractors and 'bump circuits'
  4. Models of associative memory
  • Reading: HKP chapters 2-3, DJCM chapter 42, DA chapter 7
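
Below is a minimal sketch of a binary Hopfield network: a few random patterns are stored with the Hebbian outer-product rule, and one is recalled from a corrupted cue by asynchronous updates, illustrating pattern completion. The network size, pattern count, and corruption level are arbitrary choices, not from the course materials.

```python
import numpy as np

# Hopfield network with +/-1 units: store patterns with the Hebbian
# outer-product rule, then run asynchronous updates to complete a noisy cue.
# Network size, number of patterns, and corruption level are arbitrary.
rng = np.random.default_rng(5)

n_units, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

# Hebbian storage: W = (1/N) * sum_p x_p x_p^T, with zero self-connections.
W = (patterns.T @ patterns) / n_units
np.fill_diagonal(W, 0)

# Corrupt one stored pattern by flipping 20% of its units.
target = patterns[0]
state = target.copy()
flip = rng.choice(n_units, size=n_units // 5, replace=False)
state[flip] *= -1

# Asynchronous updates: each unit takes the sign of its weighted input.
for _ in range(5):                          # a few sweeps over all units
    for i in rng.permutation(n_units):
        state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ target) / n_units        # 1.0 means perfect recall
print("overlap with stored pattern after recall:", overlap)
```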

Probabilistic models and inference

  1. Probability theory and Bayes’ rule
  2. Learning and inference in generative models
  3. The mixture of Gaussians model (sketch below)
  4. Boltzmann machines
  5. Sparse coding and ‘ICA’
  • Reading: DJCM chapters 1-3, 20-24, 41, 43, DA chapter 10
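
As an illustration of learning and inference in a simple generative model, here is a minimal sketch of the EM algorithm fitting a two-component, one-dimensional mixture of Gaussians to synthetic data; the E-step applies Bayes' rule to compute responsibilities. The data-generating parameters and iteration count are arbitrary choices, not from the course materials.

```python
import numpy as np

# EM for a 1-D mixture of two Gaussians.
# E-step: component responsibilities via Bayes' rule.
# M-step: re-estimate mixing proportions, means, and variances.
# All parameter values are arbitrary, for illustration.
rng = np.random.default_rng(6)

# Synthetic data drawn from two Gaussians.
x = np.concatenate([rng.normal(-2.0, 1.0, 300),
                    rng.normal(3.0, 0.5, 200)])

# Initial guesses.
pi = np.array([0.5, 0.5])       # mixing proportions
mu = np.array([-1.0, 1.0])      # component means
var = np.array([1.0, 1.0])      # component variances

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(50):
    # E-step: posterior probability (responsibility) of each component per point.
    lik = pi * gauss(x[:, None], mu, var)        # shape (n_points, 2)
    resp = lik / lik.sum(axis=1, keepdims=True)

    # M-step: update parameters from the responsibility-weighted data.
    Nk = resp.sum(axis=0)
    pi = Nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print("mixing proportions:", np.round(pi, 2))
print("means:             ", np.round(mu, 2))
print("std. deviations:   ", np.round(np.sqrt(var), 2))
```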

Neural implementations

  1. Integrate-and-fire model (sketch below)
  2. Neural encoding and decoding
  3. Limits of precision in neurons
  • Reading: DA chapters 1-4, 5.4
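
Below is a minimal sketch of a leaky integrate-and-fire neuron driven by constant current and integrated with the forward Euler method. The membrane parameters and input current are typical textbook-style numbers, chosen here only for illustration rather than taken from the course materials.

```python
import numpy as np

# Leaky integrate-and-fire neuron, forward-Euler integration:
#   tau_m * dV/dt = -(V - E_L) + R_m * I_e,  spike and reset when V >= V_th.
# Parameter values are typical textbook-style numbers, for illustration only.
tau_m = 20e-3      # membrane time constant (s)
E_L = -70e-3       # resting / leak potential (V)
V_th = -54e-3      # spike threshold (V)
V_reset = -80e-3   # reset potential (V)
R_m = 10e6         # membrane resistance (ohm)
I_e = 2.0e-9       # constant injected current (A)

dt = 0.1e-3        # time step (s)
T = 0.5            # total simulated time (s)
steps = int(T / dt)

V = E_L
spike_times = []
for i in range(steps):
    dV = (-(V - E_L) + R_m * I_e) / tau_m
    V += dt * dV
    if V >= V_th:                  # threshold crossing: emit a spike and reset
        spike_times.append(i * dt)
        V = V_reset

print(f"{len(spike_times)} spikes in {T} s "
      f"(rate ~ {len(spike_times) / T:.1f} Hz)")
```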