VS298 (Fall 06): Schedule: Difference between revisions

From RedwoodCenter
 
Revision as of 04:42, 9 September 2006

Schedule

  • Week 1 (Sept. 6, 8): Introduction; theory and modeling in neuroscience; linear neuron models; perceptron
  • Week 2 (Sept. 13, 15): guest lecture (Bell, Wednesday only)
  • Week 3 (Sept. 20, 22): Perceptron learning rule; Widrow-Hoff rule; Objective functions and gradient descent; Multilayer networks and backpropagation
  • Week 4 (Sept. 27, 29): Reinforcement learning; Theory of associative reward-penalty; Models and critics
  • Week 5 (Oct. 4, 6): guest lecture (Russell, Wednesday only)
  • Week 6 (Oct. 11, 13): Unsupervised learning; Hebbian learning and PCA; winner-take-all networks and clustering; sparse, distributed coding
  • Week 7 (Oct. 18, 20): Plasticity and cortical maps; Self-organizing maps; Kohonen nets; Models of experience-dependent learning and cortical reorganization
  • Week 8 (Oct. 25, 27): Recurrent networks; attractor dynamics; Hopfield networks; Pattern completion; Line attractors and 'bump circuits'
  • Week 9 (Nov. 1, 3): Associative memory models
  • Week 10 (Nov. 8, 10): Probabilistic models and inference; The mixture of Gaussians model
  • Week 11 (Nov. 15, 17): Boltzmann machines
  • Week 12 (Nov. 22, 24): Sparse coding and ICA
  • Week 13 (Nov. 29, Dec. 1): Neural implementations; Integrate-and-fire model; Neural encoding and decoding; Limits of precision in neurons
  • Week 14 (Dec. 6, 8): Special topics; student projects