VS298: Syllabus

Introduction

  1. Theory and modeling in neuroscience
  2. Descriptive vs. functional models
  3. Turing vs. neural computation
  • Reading: HKP chapter 1

Neuron models

  1. Linear systems: vectors, matrices, linear neuron models
  2. Perceptron model and linear separability
  • Reading: HKP chapter 5, DJCM chapters 38-40
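
A minimal sketch in Python (illustrative only, not course code; the weights are hand-picked) of a linear neuron and a perceptron unit, using AND as a linearly separable example:

    import numpy as np

    def linear_neuron(w, x, b=0.0):
        # Linear neuron: output is a weighted sum of the inputs
        return np.dot(w, x) + b

    def perceptron(w, x, b=0.0):
        # Perceptron: threshold the weighted sum at zero
        return 1 if linear_neuron(w, x, b) > 0 else 0

    # AND is linearly separable: one hand-picked weight vector classifies all four inputs
    w, b = np.array([1.0, 1.0]), -1.5
    for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print(x, "->", perceptron(w, np.array(x), b))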

Supervised learning

  1. Perceptron learning rule
  2. Adaptation in linear neurons, Widrow-Hoff rule
  3. Objective functions and gradient descent
  4. Multilayer networks and backpropagation
  • Reading: HKP chapters 6 and 7, DJCM chapters 38-40, 44, DA chapter 8 (secs. 4-6)
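
A minimal sketch (toy data; all values made up) of the Widrow-Hoff (delta) rule, i.e., stochastic gradient descent on squared error for a linear neuron:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))              # 100 random input patterns
    w_true = np.array([0.5, -1.0, 2.0])        # made-up teacher weights
    t = X @ w_true                             # teacher signals

    w, eta = np.zeros(3), 0.01
    for epoch in range(50):
        for x, target in zip(X, t):
            y = w @ x                          # linear neuron output
            w += eta * (target - y) * x        # delta rule: step down the error gradient
    print(w)                                   # approaches w_true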

Unsupervised learning

  1. Linear Hebbian learning and PCA, decorrelation
  2. Winner-take-all networks and clustering
  3. Sparse, distributed coding
  • Reading: HKP chapter 8, DJCM chapter 36, DA chapters 8 and 10
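
A minimal sketch (toy Gaussian data) of linear Hebbian learning with Oja's normalization, whose weight vector converges to the first principal component:

    import numpy as np

    rng = np.random.default_rng(1)
    C = np.array([[3.0, 1.0], [1.0, 1.0]])     # toy covariance (made up)
    X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

    w, eta = rng.normal(size=2), 0.005
    for x in X:
        y = w @ x                              # linear neuron output
        w += eta * y * (x - y * w)             # Hebbian growth plus decay (Oja's rule)

    vals, vecs = np.linalg.eigh(C)
    print(w / np.linalg.norm(w))               # matches, up to sign, ...
    print(vecs[:, -1])                         # ... the leading eigenvector of C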

Plasticity and cortical maps

  1. Cortical maps
  2. Self-organizing maps, Kohonen nets
  3. Models of experience-dependent learning and cortical reorganization
  4. Manifold learning
  • Reading: HKP chapter 9, DA chapter 8
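
A minimal sketch (one-dimensional toy example; the annealing schedule is arbitrary) of a Kohonen self-organizing map, in which the winning unit and its neighbors move toward each input:

    import numpy as np

    rng = np.random.default_rng(2)
    units = rng.uniform(0, 1, size=20)         # 1-D map of 20 units, scalar inputs
    steps = 5000
    for step in range(steps):
        frac = step / steps
        eta = 0.2 * (1 - frac) + 0.01          # annealed learning rate
        sigma = 3.0 * (1 - frac) + 0.5         # shrinking neighborhood width
        x = rng.uniform(0, 1)                  # input sample
        win = np.argmin(np.abs(units - x))     # best-matching unit
        d = np.arange(units.size) - win        # distance along the map
        h = np.exp(-d**2 / (2 * sigma**2))     # neighborhood function
        units += eta * h * (x - units)         # move winner and its neighbors
    print(np.round(units, 2))                  # roughly monotonic: topographic order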

Recurrent networks

  1. Hopfield networks
  2. Pattern completion
  3. Line attractors and 'bump circuits'
  4. Models of associative memory
  • Reading: HKP chapters 2-3, DJCM chapter 42, DA chapter 7
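
A minimal sketch (random toy patterns) of a Hopfield network storing memories with a Hebbian outer-product rule, then completing a corrupted cue:

    import numpy as np

    rng = np.random.default_rng(3)
    N = 100
    patterns = rng.choice([-1, 1], size=(3, N))   # three random memories
    W = (patterns.T @ patterns) / N               # Hebbian outer-product storage
    np.fill_diagonal(W, 0)                        # no self-connections

    s = patterns[0].copy()
    s[:30] *= -1                                  # corrupt 30% of the cue
    for _ in range(5):                            # asynchronous updates to a fixed point
        for i in rng.permutation(N):
            s[i] = 1 if W[i] @ s >= 0 else -1
    print(np.mean(s == patterns[0]))              # ~1.0: pattern completed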

Probabilistic models and inference

  1. Probability theory and Bayes’ rule
  2. Learning and inference in generative models
  3. The mixture of Gaussians model
  4. Boltzmann machines
  5. Sparse coding and ‘ICA’
  6. Kalman filter model
  7. Energy-based models
  • Reading: DJCM chapters 1-3, 20-24, 41, 43, DA chapter 10
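
A minimal sketch (synthetic one-dimensional data) of EM for a two-component mixture of Gaussians, alternating inference (E-step) and learning (M-step) in a generative model:

    import numpy as np

    rng = np.random.default_rng(4)
    x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 300)])

    mu = np.array([-1.0, 1.0])                 # initial guesses (arbitrary)
    var = np.array([1.0, 1.0])
    pi = np.array([0.5, 0.5])
    for _ in range(50):
        # E-step: posterior responsibility of each component for each point
        lik = pi * np.exp(-(x[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu)**2).sum(axis=0) / nk
        pi = nk / x.size
    print(mu, var, pi)                         # recovers the two clusters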

Neural implementations

  1. Integrate-and-fire model
  2. Neural encoding and decoding
  3. Limits of precision in neurons
  4. Neural synchrony and phase-based coding
  • Reading: DA chapters 1-4 and sec. 5.4
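
A minimal sketch (parameter values are illustrative, not from the readings) of a leaky integrate-and-fire neuron driven by constant current, with Euler integration and a reset on each spike:

    tau = 20.0                                       # membrane time constant (ms)
    v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # potentials (mV)
    R, I = 10.0, 2.0                                 # resistance (MOhm), input current (nA)
    dt, T = 0.1, 200.0                               # time step and duration (ms)

    v, spike_times = v_rest, []
    for step in range(int(T / dt)):
        v += dt * (-(v - v_rest) + R * I) / tau      # Euler step of the leaky integrator
        if v >= v_thresh:                            # threshold crossing: spike and reset
            spike_times.append(step * dt)
            v = v_reset
    print(len(spike_times), "spikes in", T, "ms")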