VS265: Syllabus
Aug. 28: Introduction
- Theory and modeling in neuroscience
- Goals of AI/machine learning vs. theoretical neuroscience
- Turing vs. neural computation
Sept. 2,4: Neuron models
- Membrane equation, compartmental model of a neuron (see the equation after this list)
- Linear systems: vectors, matrices, linear neuron models
- Perceptron model and linear separability
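
For reference, a common form of the passive membrane equation covered in this unit (notation is illustrative: C_m is membrane capacitance, g_L the leak conductance, E_L the leak reversal potential, and I(t) the injected current):

```latex
C_m \frac{dV}{dt} = -g_L \left( V(t) - E_L \right) + I(t)
```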
Sept. 9,11: Guest lectures
- Matlab/Python tutorial
- Paul Rhodes, Evolved Machines: Multi-compartment models; dendritic integration
Sept. 16,18: Supervised learning
- Perceptron learning rule (sketch after this list)
- Adaptation in linear neurons, Widrow-Hoff rule
- Objective functions and gradient descent
- Multilayer networks and backpropagation
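
As a concrete illustration of the perceptron learning rule, a minimal Python sketch (the toy data and all names here are illustrative, not course material): each misclassified example nudges the weights toward the correct side of the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                 # toy 2-D inputs
y = np.sign(X @ np.array([1.5, -0.7]) + 0.3)  # labels in {-1, +1}, separable by construction

w = np.zeros(2)                               # weights
b = 0.0                                       # bias

for epoch in range(50):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (w @ xi + b) <= 0:            # misclassified (or on the boundary)
            w += yi * xi                      # w <- w + y x
            b += yi
            errors += 1
    if errors == 0:                           # perfect separation reached
        break
```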
Sept. 23,25: Unsupervised learning
- Linear Hebbian learning and PCA, decorrelation (sketch after this list)
- Winner-take-all networks and clustering
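
A sketch of linear Hebbian learning with Oja's normalizing decay, which drives a single linear neuron's weight vector toward the first principal component of its input (toy data; the setup is assumed, not course-specified):

```python
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])                    # toy covariance with a dominant direction
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                                 # linear neuron output
    w += eta * y * (x - y * w)                # Hebbian term y*x plus Oja decay -y^2*w

vals, vecs = np.linalg.eigh(C)
print(w / np.linalg.norm(w))                  # should match the leading eigenvector...
print(vecs[:, -1])                            # ...up to sign
```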
Sept. 30, Oct. 2: Guest lecture
- Fritz Sommer: Associative memories and attractor neural networks
Oct. 7,9: Guest lectures
- Jerry Feldman: Ecological utility and the mythical neural code
- Pentti Kanerva: Computing with 10,000 bits
Oct. 14: Unsupervised learning (continued)
Oct. 16: Guest lecture
- Tom Dean, Google: Connectomics
Oct. 21,23,28: Sparse, distributed coding
- Autoencoders
- Natural image statistics (sparse coding objective after this list)
- Projection pursuit
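
A standard formulation tying these topics together is the sparse coding energy function in the style of Olshausen and Field: represent an input x as a linear combination Φa of dictionary elements while penalizing non-sparse coefficient vectors, with λ trading off the two terms:

```latex
E(\mathbf{a}, \Phi) = \left\| \mathbf{x} - \Phi \mathbf{a} \right\|_2^2 + \lambda \sum_i |a_i|
```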
Oct. 30, Nov. 4: Plasticity and cortical maps
- Cortical maps
- Self-organizing maps, Kohonen nets (sketch after this list)
- Models of experience-dependent learning and cortical reorganization
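
A minimal Kohonen self-organizing map sketch (a 1-D map of units on toy 2-D inputs; all parameter choices are illustrative): each input pulls its best-matching unit and that unit's neighbors on the map toward it, with learning rate and neighborhood width decaying over time.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 2))                    # toy 2-D inputs
n = 10                                             # units on a 1-D map
W = rng.uniform(size=(n, 2))                       # one weight vector per unit
pos = np.arange(n)                                 # unit positions on the map

for t, x in enumerate(X):
    frac = 1 - t / len(X)
    eta = 0.5 * frac                               # decaying learning rate
    sigma = max(0.5, (n / 2) * frac)               # shrinking neighborhood width
    bmu = np.argmin(np.linalg.norm(W - x, axis=1)) # best-matching unit
    h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))  # neighborhood kernel
    W += eta * h[:, None] * (x - W)                # pull BMU and neighbors toward x
```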
Nov. 6: Manifold learning
- Locally linear embedding (LLE), Isomap
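
A usage sketch with scikit-learn (assumed available; these are library calls, not from-scratch implementations of either algorithm):

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding, Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # 3-D points on a 2-D manifold

X_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
X_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)
print(X_lle.shape, X_iso.shape)                          # both unroll to (1000, 2)
```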
Nov. 13: Recurrent networks
- Hopfield networks, memories as 'basins of attraction' (sketch after this list)
- Line attractors and 'bump circuits'
- Dynamical models
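
A Hopfield network sketch (sizes and corruption level are illustrative): memories are stored with a Hebbian outer-product rule, and recall by asynchronous threshold updates descends the energy into the nearest basin of attraction.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # three random +/-1 memories

W = (patterns.T @ patterns) / N               # Hebbian storage: sum of outer products
np.fill_diagonal(W, 0)                        # no self-connections

s = patterns[0].copy()                        # start from a corrupted memory
flip = rng.choice(N, size=15, replace=False)
s[flip] *= -1

for _ in range(5):                            # a few asynchronous sweeps
    for i in rng.permutation(N):
        s[i] = 1 if W[i] @ s >= 0 else -1

print((s == patterns[0]).mean())              # fraction of bits recovered
```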
Nov. 18,20,25, Dec. 2: Probabilistic models and inference
- Probability theory and Bayes’ rule
- Learning and inference in generative models
- The mixture of Gaussians model (EM sketch after this list)
- Boltzmann machines
- Kalman filter model
- Energy-based models
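
A compact EM sketch for a 1-D, two-component mixture of Gaussians (toy data; initialization and iteration count are arbitrary choices): the E-step computes each component's posterior responsibility for every point, and the M-step re-estimates the parameters from those responsibilities.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, 300),      # toy data from two Gaussians
                    rng.normal(1, 1.0, 700)])

pi = np.array([0.5, 0.5])                          # mixing weights
mu = np.array([-1.0, 0.5])                         # means
var = np.array([1.0, 1.0])                         # variances

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for _ in range(100):
    r = pi * gauss(x[:, None], mu, var)            # E-step: responsibilities, shape (n, 2)
    r /= r.sum(axis=1, keepdims=True)
    Nk = r.sum(axis=0)                             # M-step: effective counts per component
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print(pi, mu, var)   # should recover ~(0.3, 0.7), means ~(-2, 1), variances ~(0.25, 1)
```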
Dec. 4: Guest lecture (Tony Bell)
- Sparse coding and ICA (independent component analysis)
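
For reference, a commonly cited form of the infomax ICA weight update (Bell and Sejnowski, with the natural-gradient correction; the notation here is assumed: rows of W are unmixing filters and g is the logistic nonlinearity):

```latex
\Delta W \propto \left( I + (\mathbf{1} - 2\mathbf{y})\,\mathbf{u}^{\mathsf{T}} \right) W,
\qquad \mathbf{u} = W\mathbf{x}, \quad y_i = g(u_i) = \frac{1}{1 + e^{-u_i}}
```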
Dec. 9: Neural implementations
- Integrate-and-fire model (sketch after this list)
- Neural encoding and decoding
- Limits of precision in neurons
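
A leaky integrate-and-fire sketch (parameter values are illustrative): the passive membrane equation is integrated with the Euler method, and a spike plus reset fires whenever the voltage crosses threshold.

```python
dt, T = 1e-4, 0.5                      # time step and duration (s)
C, g_L, E_L = 1e-9, 5e-8, -0.065       # capacitance (F), leak conductance (S), rest (V)
V_th, V_reset = -0.050, -0.065         # spike threshold and reset (V)
I = 1e-9                               # constant injected current (A)

V = E_L
spikes = []
for step in range(int(T / dt)):
    V += (-g_L * (V - E_L) + I) * dt / C   # Euler step of C dV/dt = -g_L (V - E_L) + I
    if V >= V_th:                          # threshold crossing -> spike and reset
        spikes.append(step * dt)
        V = V_reset
print(len(spikes), "spikes in", T, "s")    # roughly 18 spikes with these parameters
```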
Dec. 11: Guest lecture (Tony Bell)
- Levels and loops