VS265: Syllabus
== Syllabus ==


==== Aug. 28: Introduction ====
* Theory and modeling in neuroscience
* Goals of AI/machine learning vs. theoretical neuroscience
* Turing vs. neural computation


==== Sept. 2,4: Neuron models ====
* Membrane equation, compartmental model of a neuron
* Linear systems: vectors, matrices, linear neuron models
* Perceptron model and linear separability
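
The passive membrane equation above can be integrated numerically in a few lines. This is an illustrative NumPy sketch with made-up parameter values, not course code:

```python
import numpy as np

# Passive membrane equation: tau * dV/dt = -(V - V_rest) + R * I(t)
# Forward-Euler integration; all parameter values are illustrative.
tau = 20.0      # membrane time constant (ms)
V_rest = -65.0  # resting potential (mV)
R = 10.0        # membrane resistance (MOhm)
dt = 0.1        # time step (ms)

def simulate_membrane(I, V0=V_rest):
    """Integrate the membrane equation for an input current trace I (nA)."""
    V = np.empty(len(I) + 1)
    V[0] = V0
    for t, i_t in enumerate(I):
        dV = (-(V[t] - V_rest) + R * i_t) * (dt / tau)
        V[t + 1] = V[t] + dV
    return V

# A constant 1 nA current drives V exponentially toward V_rest + R*I = -55 mV.
V = simulate_membrane(np.ones(2000))
```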


==== Sept. 9,11: Guest lectures ====
* Matlab/Python tutorial
* Paul Rhodes, Evolved Machines: Multi-compartment models; dendritic integration


==== Sept. 16,18: Supervised learning ====
* Perceptron learning rule
* Adaptation in linear neurons, Widrow-Hoff rule
* Objective functions and gradient descent
* Multilayer networks and backpropagation
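
The perceptron learning rule listed above can be sketched in a few lines of NumPy. This is an illustrative example, not course code; the OR problem and all parameter choices here are made up:

```python
import numpy as np

def perceptron_train(X, y, epochs=100):
    """Perceptron learning rule: w += y_i * x_i on each misclassified example.
    X: (n, d) inputs; y: labels in {-1, +1}. Bias folded in as an extra input of 1."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x_i, y_i in zip(Xb, y):
            if y_i * (w @ x_i) <= 0:      # misclassified (or on the boundary)
                w += y_i * x_i            # move the boundary toward the example
                errors += 1
        if errors == 0:                   # converged: data linearly separated
            break
    return w

# Learn logical OR, a linearly separable problem.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., 1.])
w = perceptron_train(X, y)
preds = np.sign(np.hstack([X, np.ones((4, 1))]) @ w)
```

For separable data the rule is guaranteed to converge; for XOR (not linearly separable) it would loop forever, which is the motivation for the multilayer networks listed next.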


==== Sept. 23,25: Unsupervised learning ====
* Linear Hebbian learning and PCA, decorrelation
* Winner-take-all networks and clustering
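
As an illustrative sketch of the link between linear Hebbian learning and PCA, Oja's rule (a Hebbian update with built-in weight normalization) extracts the first principal component of its input stream. The data and parameter choices below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean 2-D data stretched along the x-axis, so the first principal
# component is (+/-1, 0). Covariance values are illustrative.
X = rng.multivariate_normal([0., 0.], [[9., 0.], [0., 1.]], size=5000)

# Oja's rule: a Hebbian update with implicit normalization,
#   dw = eta * y * (x - y * w),  where y = w . x
w = rng.normal(size=2)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# w converges to a unit vector along the leading eigenvector of the covariance.
```

Plain Hebbian learning (dw = eta * y * x) would grow without bound; the subtractive term is what keeps the weight vector near unit length.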


==== Sept. 30, Oct. 2: Guest lecture ====
* Fritz Sommer: Associative memories and attractor neural networks


==== Oct. 7,9: Guest lectures ====
* Jerry Feldman: Ecological utility and the mythical neural code
* Pentti Kanerva: Computing with 10,000 bits


==== Oct. 14: Unsupervised learning (continued) ====

==== Oct. 16: Guest lecture ====
* Tom Dean, Google: Connectomics
 
==== Oct. 21,23,28:  Sparse, distributed coding ====
 
* Autoencoders
* Natural image statistics
* Projection pursuit
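
A minimal linear autoencoder (one hidden unit, no sparsity penalty) trained by gradient descent, as an illustrative sketch of the autoencoder idea only; the data and all parameters below are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Data lying (almost) on a 1-D subspace of R^3; a linear autoencoder with one
# hidden unit can learn to reconstruct it.
d = np.array([0.6, 0.8, 0.0])                    # true direction, unit norm
X = rng.normal(size=(500, 1)) * d + 0.01 * rng.normal(size=(500, 3))

W_enc = np.full((3, 1), 0.1)                     # encoder weights
W_dec = np.full((1, 3), 0.1)                     # decoder weights
eta = 0.1
for _ in range(2000):
    H = X @ W_enc                                # hidden code (1-D bottleneck)
    err = H @ W_dec - X                          # reconstruction error
    W_dec -= eta * H.T @ err / len(X)            # gradient of squared error
    W_enc -= eta * X.T @ (err @ W_dec.T) / len(X)

mse = np.mean((X @ W_enc @ W_dec - X) ** 2)
```

With a linear code and squared error, the bottleneck recovers the principal subspace; sparse-coding models replace the bottleneck with a sparsity penalty on the code.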
 
==== Oct. 30, Nov. 4:  Plasticity and cortical maps ====
 
* Cortical maps
* Self-organizing maps, Kohonen nets
* Models of experience-dependent learning and cortical reorganization
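
An illustrative one-dimensional Kohonen net: ten units self-organize to tile the interval [0, 1]. The learning-rate and neighborhood schedules below are made-up choices, not from the course:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D Kohonen map: 10 units learn to tile the interval [0, 1].
n_units = 10
w = rng.uniform(0., 1., n_units)      # one weight (preferred stimulus) per unit

def neighborhood(winner, sigma):
    """Gaussian neighborhood over the unit lattice, centered on the winner."""
    d = np.arange(n_units) - winner
    return np.exp(-d**2 / (2 * sigma**2))

n_steps = 20000
for t in range(n_steps):
    x = rng.uniform(0., 1.)
    winner = np.argmin(np.abs(w - x))          # best-matching unit
    eta = 0.5 * (1 - t / n_steps) + 0.01       # decaying learning rate
    sigma = 3.0 * (1 - t / n_steps) + 0.5      # shrinking neighborhood
    w += eta * neighborhood(winner, sigma) * (x - w)

# After training, neighboring units end up with neighboring preferred stimuli,
# a toy version of an ordered cortical map.
```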
 
==== Nov. 6:  Manifold learning ====
 
* Local linear embedding, Isomap
 
==== Nov. 13:  Recurrent networks ====
 
* Hopfield networks, memories as 'basins of attraction'
* Line attractors and 'bump circuits'
* Dynamical models
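
A minimal Hopfield-network sketch: store a pattern with the Hebbian outer-product rule and recall it from a corrupted cue. Illustrative only; the pattern and sizes are made up:

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns are rows of +/-1 values."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.)            # no self-connections
    return W

def recall(W, x, n_iter=20):
    """Iterate updates until a fixed point; stored memories are attractors."""
    x = np.asarray(x, dtype=float)
    for _ in range(n_iter):
        x_new = np.sign(W @ x)
        x_new[x_new == 0] = 1.         # break ties deterministically
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Store one 8-bit pattern and recall it from a cue with 2 flipped bits.
p = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = train_hopfield([p])
cue = p.copy()
cue[[0, 3]] *= -1                      # corrupt the cue
out = recall(W, cue)
```

The corrupted cue falls inside the stored pattern's basin of attraction, so the dynamics complete the pattern.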
 
==== Nov. 18,20,25, Dec. 2:  Probabilistic models and inference ====
 
* Probability theory and Bayes’ rule
* Learning and inference in generative models
* The mixture of Gaussians model
* Boltzmann machines
* Kalman filter model
* Energy-based models
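
For the mixture of Gaussians model, Bayes' rule gives the posterior "responsibility" of each component for a data point, the central quantity in learning and inference for this model. An illustrative sketch with made-up parameters:

```python
import math

# Two-component 1-D Gaussian mixture; all parameter values are illustrative.
pi = [0.5, 0.5]        # mixing proportions (priors over components)
mu = [-2.0, 2.0]       # component means
sigma = [1.0, 1.0]     # component standard deviations

def gauss(x, m, s):
    """Gaussian density N(x; m, s^2)."""
    return math.exp(-(x - m)**2 / (2 * s**2)) / (s * math.sqrt(2 * math.pi))

def responsibility(x):
    """Bayes' rule: p(k | x) = pi_k N(x; mu_k) / sum_j pi_j N(x; mu_j)."""
    joint = [pi[k] * gauss(x, mu[k], sigma[k]) for k in range(2)]
    Z = sum(joint)                    # evidence p(x)
    return [j / Z for j in joint]

r = responsibility(0.0)               # equidistant from both means
```

A point halfway between the means is ambiguous (responsibilities near 0.5 each), while a point at one of the means is assigned almost entirely to that component.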
 
==== Dec. 4:  Guest lecture (Tony Bell) ====
* Sparse coding and ‘ICA’
 
==== Dec. 9:  Neural implementations ====
 
* Integrate-and-fire model
* Neural encoding and decoding
* Limits of precision in neurons
<!-- * Neural synchrony and phase-based coding -->
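
An illustrative leaky integrate-and-fire sketch: the membrane equation plus a threshold-and-reset rule, integrated with forward Euler. All parameter values are made up, not from the course:

```python
import numpy as np

# Leaky integrate-and-fire neuron; illustrative parameters.
tau, V_rest, V_thresh, V_reset, R = 20.0, -65.0, -50.0, -65.0, 10.0  # ms, mV, MOhm
dt = 0.1                                                             # ms

def lif(I):
    """Simulate for a current trace I (nA); return spike times in ms."""
    V, spikes = V_rest, []
    for t, i_t in enumerate(I):
        V += (-(V - V_rest) + R * i_t) * (dt / tau)
        if V >= V_thresh:            # threshold crossed: emit spike, reset
            spikes.append(t * dt)
            V = V_reset
    return spikes

# A 2 nA step drives V toward -45 mV, above threshold, so the neuron fires
# periodically; 0.5 nA (asymptote -60 mV) stays subthreshold and never fires.
spikes = lif(np.full(10000, 2.0))
quiet = lif(np.full(10000, 0.5))
```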
 
==== Dec. 11:  Guest lecture (Tony Bell) ====
* Levels and loops

Latest revision as of 07:06, 10 December 2014