VS265: Syllabus

Syllabus

Introduction

  1. Theory and modeling in neuroscience
  2. Goals of AI/machine learning vs. theoretical neuroscience
  3. Turing vs. neural computation

Neuron models

  1. Membrane equation, compartmental model of a neuron (see the sketch after this list)
  2. Linear systems: vectors, matrices, linear neuron models
  3. Perceptron model and linear separability
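
For concreteness, a minimal sketch of the passive membrane equation tau dV/dt = -(V - E_L) + R I(t), integrated with forward Euler; the parameter values and the pulse current are illustrative choices, not the course's:

  import numpy as np

  # Passive membrane: tau * dV/dt = -(V - E_L) + R * I(t)
  tau, R, E_L = 20.0, 10.0, -65.0   # ms, Mohm, mV (illustrative values)
  dt, T = 0.1, 100.0                # time step and duration, ms
  t = np.arange(0.0, T, dt)
  I = np.where((t > 20) & (t < 80), 1.5, 0.0)  # 1.5 nA current pulse

  V = np.empty_like(t)
  V[0] = E_L
  for k in range(1, len(t)):
      dV = (-(V[k-1] - E_L) + R * I[k-1]) / tau
      V[k] = V[k-1] + dt * dV       # forward Euler step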

Supervised learning

  1. Perceptron learning rule (see the sketch after this list)
  2. Adaptation in linear neurons, Widrow-Hoff rule
  3. Objective functions and gradient descent
  4. Multilayer networks and backpropagation
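
A minimal NumPy sketch of the perceptron learning rule on a linearly separable toy problem (the AND function); the function name, learning rate, and epoch count are illustrative:

  import numpy as np

  def perceptron_train(X, y, eta=1.0, epochs=20):
      """Rosenblatt's rule: update weights only on misclassified examples."""
      w = np.zeros(X.shape[1])
      b = 0.0
      for _ in range(epochs):
          for x_i, y_i in zip(X, y):
              if y_i * (w @ x_i + b) <= 0:   # wrong (or zero) side of boundary
                  w += eta * y_i * x_i
                  b += eta * y_i
      return w, b

  # AND function with labels in {-1, +1}: linearly separable
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([-1, -1, -1, 1])
  w, b = perceptron_train(X, y)
  print(np.sign(X @ w + b))   # -> [-1. -1. -1.  1.]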

Unsupervised learning

  1. Linear Hebbian learning and PCA, decorrelation (see the sketch after this list)
  2. Winner-take-all networks and clustering
  3. Sparse, distributed coding
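
A sketch of linear Hebbian learning with Oja's normalizing term, which drives a single linear neuron's weight vector toward the first principal component of zero-mean data; the step size, epoch count, and toy covariance are illustrative:

  import numpy as np

  def oja_first_pc(X, eta=0.01, epochs=100, seed=0):
      """Oja's rule: dw = eta * y * (x - y * w), with y = w . x."""
      rng = np.random.default_rng(seed)
      w = rng.standard_normal(X.shape[1])
      w /= np.linalg.norm(w)
      for _ in range(epochs):
          for x in rng.permutation(X):
              y = w @ x
              w += eta * y * (x - y * w)   # Hebbian term minus decay keeps |w| ~ 1
      return w

  # Correlated 2-D Gaussian data with zero mean
  rng = np.random.default_rng(1)
  C = np.array([[3.0, 1.0], [1.0, 1.0]])
  X = rng.multivariate_normal([0, 0], C, size=500)
  w = oja_first_pc(X)
  print(w)  # aligns (up to sign) with the leading eigenvector of C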

Plasticity and cortical maps

  1. Cortical maps
  2. Self-organizing maps, Kohonen nets (see the sketch after this list)
  3. Models of experience-dependent learning and cortical reorganization
  4. Manifold learning
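
A compact sketch of a one-dimensional Kohonen self-organizing map trained on points along a ring; the learning-rate and neighborhood schedules are illustrative choices:

  import numpy as np

  def train_som_1d(X, n_units=20, epochs=30, seed=0):
      """Kohonen SOM: winner-take-all plus neighborhood-weighted updates."""
      rng = np.random.default_rng(seed)
      W = rng.uniform(X.min(), X.max(), size=(n_units, X.shape[1]))
      grid = np.arange(n_units)
      for t in range(epochs):
          eta = 0.5 * (1 - t / epochs)                       # decaying rate
          sigma = max(1.0, n_units / 2 * (1 - t / epochs))   # shrinking neighborhood
          for x in rng.permutation(X):
              winner = np.argmin(np.linalg.norm(W - x, axis=1))
              h = np.exp(-(grid - winner) ** 2 / (2 * sigma ** 2))
              W += eta * h[:, None] * (x - W)   # pull winner and neighbors toward x
      return W

  # Map a ring of 2-D points onto the 1-D chain of units;
  # after training, consecutive units cover nearby points on the ring.
  theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
  X = np.c_[np.cos(theta), np.sin(theta)]
  W = train_som_1d(X)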

Recurrent networks

  1. Hopfield networks (see the sketch after this list)
  2. Models of associative memory, pattern completion
  3. Line attractors and ‘bump circuits’
  4. Dynamical models
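
A sketch of a binary Hopfield network: Hebbian (outer-product) storage and asynchronous sign updates for pattern completion; the network size and corruption level are illustrative:

  import numpy as np

  def hopfield_store(patterns):
      """Hebbian outer-product rule; no self-connections."""
      n = patterns.shape[1]
      W = (patterns.T @ patterns) / n
      np.fill_diagonal(W, 0.0)
      return W

  def hopfield_recall(W, s, sweeps=5, seed=0):
      """Asynchronous updates: each flip can only lower the network energy."""
      rng = np.random.default_rng(seed)
      s = s.copy()
      for _ in range(sweeps):
          for i in rng.permutation(len(s)):
              s[i] = 1 if W[i] @ s >= 0 else -1
      return s

  rng = np.random.default_rng(2)
  patterns = rng.choice([-1, 1], size=(3, 100))   # three random +/-1 memories
  W = hopfield_store(patterns)
  probe = patterns[0].copy()
  flip = rng.choice(100, 20, replace=False)       # corrupt 20 of 100 bits
  probe[flip] *= -1
  print(np.mean(hopfield_recall(W, probe) == patterns[0]))  # typically 1.0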

Probabilistic models and inference

  1. Probability theory and Bayes’ rule
  2. Learning and inference in generative models
  3. The mixture of Gaussians model (see the sketch after this list)
  4. Boltzmann machines
  5. Sparse coding and ‘ICA’
  6. Kalman filter model
  7. Energy-based models
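
A sketch of fitting a one-dimensional mixture of Gaussians with the EM algorithm on synthetic data; the initialization and iteration count are illustrative choices:

  import numpy as np

  def em_gmm_1d(x, K=2, iters=50, seed=0):
      rng = np.random.default_rng(seed)
      pi = np.full(K, 1.0 / K)                  # mixing proportions
      mu = rng.choice(x, K, replace=False)      # means initialized from data points
      var = np.full(K, x.var())                 # shared initial variance
      for _ in range(iters):
          # E-step: responsibilities r[n, k] = p(component k | x_n)
          log_p = -0.5 * ((x[:, None] - mu) ** 2 / var + np.log(2 * np.pi * var))
          r = pi * np.exp(log_p)
          r /= r.sum(axis=1, keepdims=True)
          # M-step: re-estimate parameters from soft assignments
          Nk = r.sum(axis=0)
          pi = Nk / len(x)
          mu = (r * x[:, None]).sum(axis=0) / Nk
          var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
      return pi, mu, var

  rng = np.random.default_rng(3)
  x = np.r_[rng.normal(-2, 0.5, 300), rng.normal(3, 1.0, 700)]
  print(em_gmm_1d(x))  # should recover weights near (0.3, 0.7), means near (-2, 3)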

Neural implementations

  1. Integrate-and-fire model (see the sketch after this list)
  2. Neural encoding and decoding
  3. Limits of precision in neurons
  4. Neural synchrony and phase-based coding
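
A sketch of the leaky integrate-and-fire model, which adds a spike threshold and reset to the passive membrane equation sketched earlier; the parameter values are again illustrative:

  import numpy as np

  # Leaky integrate-and-fire: integrate the membrane equation,
  # emit a spike and reset whenever V crosses threshold.
  tau, R, E_L = 20.0, 10.0, -65.0     # ms, Mohm, mV (illustrative)
  V_th, V_reset = -50.0, -70.0        # threshold and reset potentials, mV
  dt, T = 0.1, 200.0
  t = np.arange(0.0, T, dt)
  I = 2.0                             # constant input current, nA

  V = np.full_like(t, E_L)
  spikes = []
  for k in range(1, len(t)):
      V[k] = V[k-1] + dt * (-(V[k-1] - E_L) + R * I) / tau
      if V[k] >= V_th:
          spikes.append(t[k])
          V[k] = V_reset
  print(len(spikes), "spikes")        # regular firing for suprathreshold input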