VS265: Syllabus, Fall 2012


Syllabus

Introduction

  1. Theory and modeling in neuroscience
  2. Descriptive vs. functional models
  3. Turing vs. neural computation
  • Reading: HKP chapter 1

Neuron models

  1. Membrane equation, compartmental model of a neuron (see the integration sketch after this list)
  2. Linear systems: vectors, matrices, linear neuron models
  3. Perceptron model and linear separability
  • Reading: HKP chapter 5, DJCM chapters 38-40
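
A minimal numerical sketch of the passive membrane equation from item 1 above, integrated by forward Euler in Python. All parameter values (tau, E_L, R_m, I_e) are illustrative assumptions, not values from the course:

    import numpy as np

    # Passive membrane equation: tau * dV/dt = -(V - E_L) + R_m * I_e
    tau, E_L, R_m = 20e-3, -70e-3, 10e6   # time constant (s), resting potential (V), resistance (ohm) -- assumed values
    dt, T = 1e-4, 0.2                     # integration step and duration (s)
    I_e = 1.5e-9                          # constant injected current (A)

    t = np.arange(0.0, T, dt)
    V = np.empty_like(t)
    V[0] = E_L
    for k in range(1, len(t)):
        dV = (-(V[k - 1] - E_L) + R_m * I_e) / tau
        V[k] = V[k - 1] + dt * dV         # forward-Euler update

    # V relaxes exponentially toward E_L + R_m * I_e = -55 mV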

Supervised learning

  1. Perceptron learning rule (see the sketch after this list)
  2. Adaptation in linear neurons, Widrow-Hoff rule
  3. Objective functions and gradient descent
  4. Multilayer networks and backpropagation
  • Reading: HKP chapters 6-7, DJCM chapters 38-40 and 44, DA chapter 8 (sections 4-6)
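
A sketch of the perceptron learning rule (item 1 above) on a toy linearly separable problem. The random teacher weights, data, and learning rate are assumptions chosen for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linearly separable data: labels y in {-1, +1} from a random teacher.
    X = rng.normal(size=(100, 2))
    w_teacher = np.array([1.0, -2.0])
    y = np.sign(X @ w_teacher)

    # Perceptron rule: update weights only on misclassified examples.
    # For separable data this is guaranteed to converge.
    w, eta = np.zeros(2), 0.1
    for _ in range(50):                   # a few passes over the data
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:
                w += eta * t * x          # w <- w + eta * t * x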

Unsupervised learning

  1. Linear Hebbian learning and PCA, decorrelation (see the Oja's-rule sketch after this list)
  2. Winner-take-all networks and clustering
  3. Sparse, distributed coding
  • Reading: HKP chapter 8, DJCM chapter 36, DA chapters 8 and 10
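
One way to see a linear Hebbian unit extract the first principal component is Oja's normalized Hebbian rule; a sketch under an assumed 2-D covariance (the covariance matrix and learning rate are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)

    # Zero-mean data with one elongated direction.
    C = np.array([[3.0, 1.0], [1.0, 1.0]])            # assumed covariance
    X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

    # Oja's rule: dw = eta * y * (x - y * w), a normalized Hebbian update.
    w, eta = rng.normal(size=2), 0.01
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)

    # w converges (up to sign) to the unit-norm leading eigenvector of C.
    top = np.linalg.eigh(C)[1][:, -1]
    print(abs(w @ top))                               # should be close to 1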

Plasticity and cortical maps

  1. Cortical maps
  2. Self-organizing maps, Kohonen nets (see the sketch after this list)
  3. Models of experience-dependent learning and cortical reorganization
  4. Manifold learning
  • Reading: HKP chapter 9, DA chapter 8
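
A sketch of a one-dimensional Kohonen self-organizing map (item 2 above) learning to tile the unit square; the learning-rate and neighborhood schedules are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(2)

    # A chain of units whose weight vectors self-organize over 2-D stimuli.
    n_units = 20
    W = rng.uniform(size=(n_units, 2))                # one weight vector per unit

    eta, sigma = 0.2, 3.0                             # assumed schedule values
    for step in range(5000):
        x = rng.uniform(size=2)                       # random stimulus
        winner = np.argmin(np.sum((W - x) ** 2, axis=1))
        # Gaussian neighborhood on the 1-D lattice around the winning unit
        d = np.arange(n_units) - winner
        h = np.exp(-d ** 2 / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)               # pull neighbors toward x
        eta *= 0.999                                  # slowly shrink the
        sigma = max(0.5, sigma * 0.999)               # learning rate and neighborhood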

Recurrent networks

  1. Hopfield networks (see the pattern-completion sketch after this list)
  2. Pattern completion
  3. Line attractors and 'bump circuits'
  4. Models of associative memory
  • Reading: HKP chapters 2-3, DJCM chapter 42, DA chapter 7
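
A pattern-completion sketch with a Hopfield network: patterns are stored by a Hebbian outer-product rule, and a corrupted pattern is cleaned up by asynchronous threshold updates. The network size, pattern count, and noise level are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(3)

    # Store P random binary (+/-1) patterns in an N-unit Hopfield network.
    N, P = 100, 3
    patterns = rng.choice([-1, 1], size=(P, N))
    W = (patterns.T @ patterns) / N                   # Hebbian outer product
    np.fill_diagonal(W, 0.0)                          # no self-connections

    s = patterns[0].copy()
    flip = rng.choice(N, size=20, replace=False)      # corrupt 20 of 100 bits
    s[flip] *= -1

    for _ in range(5):                                # a few asynchronous sweeps
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1

    # s should now match patterns[0]: the energy decreases at every update,
    # so the state settles into the stored attractor.
    print(np.array_equal(s, patterns[0]))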

Probabilistic models and inference

  1. Probability theory and Bayes’ rule
  2. Learning and inference in generative models
  3. The mixture of Gaussians model (see the EM sketch after this list)
  4. Boltzmann machines
  5. Sparse coding and ‘ICA’
  6. Kalman filter model
  7. Energy-based models
  • Reading: DJCM chapters 1-3, 20-24, 41, 43, DA chapter 10
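
A sketch of learning a two-component, one-dimensional mixture of Gaussians (item 3 above) with EM; the toy data and initial parameter values are assumptions:

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy 1-D data drawn from two Gaussians.
    x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(2, 1.0, 700)])

    pi = np.array([0.5, 0.5])                         # mixing proportions
    mu = np.array([-1.0, 1.0])                        # component means
    var = np.array([1.0, 1.0])                        # component variances
    for _ in range(100):
        # E-step: posterior responsibility of each component for each point
        p = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions, means, and variances
        Nk = r.sum(axis=0)
        pi = Nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / Nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk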

Neural implementations

  1. Integrate-and-fire model (see the sketch after this list)
  2. Neural encoding and decoding
  3. Limits of precision in neurons
  4. Neural synchrony and phase-based coding
  • Reading: DA chapters 1-4 and section 5.4
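
A sketch of the leaky integrate-and-fire model (item 1 above): the passive membrane equation from the neuron-models section plus a spike threshold and reset. All constants are illustrative assumptions:

    import numpy as np

    # Leaky integrate-and-fire neuron driven by a constant current.
    tau, E_L, R_m = 20e-3, -70e-3, 10e6   # assumed membrane parameters
    V_th, V_reset = -54e-3, -80e-3        # threshold and reset potentials (V)
    dt, T, I_e = 1e-4, 0.5, 1.8e-9        # step, duration (s), input current (A)

    t = np.arange(0.0, T, dt)
    V, spikes = E_L, []
    for ti in t:
        V += dt * (-(V - E_L) + R_m * I_e) / tau
        if V >= V_th:                     # emit a spike and reset
            spikes.append(ti)
            V = V_reset

    print(len(spikes) / T)                # mean firing rate (Hz)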