VS265: Syllabus
Syllabus
Aug. 28: Introduction
- Theory and modeling in neuroscience
- Goals of AI/machine learning vs. theoretical neuroscience
- Turing vs. neural computation
Sept. 2,4: Neuron models
- Membrane equation, compartmental model of a neuron (sketched below)
- Linear systems: vectors, matrices, linear neuron models
- Perceptron model and linear separability
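
To make the membrane equation concrete, here is a minimal sketch of a single passive compartment integrated with forward Euler; the function name and parameter values are illustrative, not from the course materials, and NumPy is assumed:

```python
import numpy as np

def passive_membrane(I, dt=1e-4, C=1e-10, g_leak=5e-9, E_leak=-0.07):
    """Membrane equation for one compartment:
        C dV/dt = -g_leak * (V - E_leak) + I(t)
    (capacitance in F, conductance in S, potentials in V, current in A)."""
    V = np.empty(len(I))
    v = E_leak                            # start at the resting potential
    for n, i_t in enumerate(I):
        v += (dt / C) * (-g_leak * (v - E_leak) + i_t)
        V[n] = v
    return V
```

A constant current charges the membrane toward E_leak + I/g_leak with time constant C/g_leak (20 ms here); a compartmental model couples many such compartments through axial conductances.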
Sept. 9,11: Guest lectures
- TBD
- Paul Rhodes, Evolved Machines: Multi-compartment models; dendritic integration
Sept. 16,18: Supervised learning
- Perceptron learning rule (sketched below)
- Adaptation in linear neurons, Widrow-Hoff rule
- Objective functions and gradient descent
- Multilayer networks and backpropagation
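
A minimal sketch of the perceptron learning rule named above, assuming NumPy and labels in {-1, +1}; the function name and defaults are illustrative:

```python
import numpy as np

def perceptron_train(X, y, epochs=100, eta=1.0):
    """Perceptron learning rule for labels y in {-1, +1}.

    X: (n_samples, n_features) inputs; a bias term is appended internally.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # absorb the bias into w
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, t in zip(Xb, y):
            if t * np.dot(w, x) <= 0:   # misclassified (or on the boundary)
                w += eta * t * x        # nudge w toward the correct side
                errors += 1
        if errors == 0:                 # converged: the data were separable
            break
    return w
```

If the data are linearly separable, the perceptron convergence theorem guarantees the inner loop eventually makes no updates; otherwise training stops at the epoch limit.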
Sept. 23,25: Unsupervised learning
- Linear Hebbian learning and PCA, decorrelation (sketched below)
- Winner-take-all networks and clustering
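
A minimal sketch of linear Hebbian learning for PCA, here via Oja's stabilized rule (one standard choice for this topic; the function name and learning rate are illustrative):

```python
import numpy as np

def oja_first_pc(X, eta=0.01, epochs=50):
    """Oja's rule: a Hebbian update with a decay term that keeps ||w|| ~ 1.
    For small eta, w converges (up to sign) to the top principal component."""
    X = X - X.mean(axis=0)              # Hebbian PCA assumes zero-mean data
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                   # linear neuron output
            w += eta * y * (x - y * w)  # Hebbian term y*x minus decay y^2*w
    return w
```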
Sept. 30: Guest lecture
- TBD
Oct. 2: Sparse, distributed coding
- Autoencoders
- Natural image statistics
- Projection pursuit
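
One concrete reading of this block (an assumption here, not necessarily the lecture's formulation) is sparse coding with an L1 penalty: infer coefficients a minimizing ||x - Phi a||^2 + lam ||a||_1 for a fixed dictionary Phi. A minimal sketch using proximal gradient descent (ISTA), with an illustrative step size:

```python
import numpy as np

def sparse_code(Phi, x, lam=0.1, eta=0.01, steps=200):
    """ISTA: gradient step on the reconstruction error, then soft-threshold.
    eta must be smaller than 1 / ||Phi||^2 for the iteration to converge."""
    a = np.zeros(Phi.shape[1])
    for _ in range(steps):
        a = a + eta * Phi.T @ (x - Phi @ a)   # descend the squared error
        a = np.sign(a) * np.maximum(np.abs(a) - eta * lam, 0.0)  # shrink
    return a
```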
Oct. 7: Plasticity and cortical maps
- Cortical maps
- Self-organizing maps, Kohonen nets (sketched below)
- Models of experience-dependent learning and cortical reorganization
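
A minimal sketch of a one-dimensional Kohonen self-organizing map, assuming NumPy; the neighborhood annealing schedule and parameters are illustrative:

```python
import numpy as np

def kohonen_1d(X, n_units=20, epochs=30, eta=0.5, sigma=3.0):
    """1-D SOM: move the best-matching unit and its map neighbors toward
    each input, shrinking the neighborhood width over training."""
    rng = np.random.default_rng(0)
    W = rng.uniform(X.min(), X.max(), size=(n_units, X.shape[1]))
    units = np.arange(n_units)
    for epoch in range(epochs):
        s = sigma * (0.1 / sigma) ** (epoch / max(epochs - 1, 1))  # anneal
        for x in X[rng.permutation(len(X))]:
            win = np.argmin(((W - x) ** 2).sum(axis=1))  # best-matching unit
            h = np.exp(-(units - win) ** 2 / (2 * s ** 2))
            W += eta * h[:, None] * (x - W)  # neighborhood-weighted update
    return W
```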
Oct. 9: Guest lecture
- TBD
Oct. 14: Manifold learning
- Locally linear embedding (LLE) and Isomap
Oct. 16: Guest lecture
- Tom Dean, Google: Connectomics
Oct. 21,23,28,30: Recurrent networks
- Hopfield networks (sketched below)
- Models of associative memory, pattern completion
- Line attractors and 'bump circuits'
- Dynamical models
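
A minimal sketch of a Hopfield network: Hebbian (outer-product) storage of +/-1 patterns and asynchronous recall. Function names and the sweep count are illustrative:

```python
import numpy as np

def hopfield_store(patterns):
    """Outer-product (Hebbian) weights for a list of +/-1 patterns."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)            # no self-connections
    return W

def hopfield_recall(W, x, sweeps=10):
    """Asynchronous updates never increase the network energy, so the
    state settles into an attractor (a stored or spurious pattern)."""
    x = np.array(x, dtype=float)
    rng = np.random.default_rng(0)
    for _ in range(sweeps):
        for i in rng.permutation(len(x)):
            x[i] = 1.0 if W[i] @ x >= 0 else -1.0
    return x
```

Pattern completion then amounts to calling hopfield_recall on a corrupted version of a stored pattern.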
Nov. 4,6,13,18,20,25: Probabilistic models and inference
- Probability theory and Bayes’ rule
- Learning and inference in generative models
- The mixture of Gaussians model (sketched below)
- Boltzmann machines
- Sparse coding and ‘ICA’
- Kalman filter model
- Energy-based models
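
A minimal sketch of learning and inference in the mixture of Gaussians model via EM, for scalar data; the initialization and fixed iteration count are illustrative:

```python
import numpy as np

def mog_em(x, k=2, iters=100):
    """EM for a 1-D mixture of Gaussians: the E-step infers posterior
    responsibilities, the M-step reestimates the parameters."""
    rng = np.random.default_rng(0)
    mu = rng.choice(x, size=k, replace=False)  # init means at data points
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: r[n, j] = p(component j | x_n) under current parameters
        d = x[:, None] - mu[None, :]
        log_p = -0.5 * d**2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(pi)
        r = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: maximum-likelihood updates given the responsibilities
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```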
Dec. 2,4: Neural implementations
- Integrate-and-fire model (sketched below)
- Neural encoding and decoding
- Limits of precision in neurons
- Neural synchrony and phase-based coding
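
A minimal sketch of the leaky integrate-and-fire model with forward-Euler integration; the parameter values (20 ms time constant, 20 mV threshold) are illustrative:

```python
import numpy as np

def lif_spike_times(I, dt=1e-4, tau=0.02, R=1e7, v_th=0.02, v_reset=0.0):
    """Leaky integrate-and-fire: tau dV/dt = -V + R*I(t); emit a spike
    and reset whenever V crosses threshold."""
    v, spikes = v_reset, []
    for n, i_t in enumerate(I):
        v += (dt / tau) * (-v + R * i_t)  # forward-Euler membrane update
        if v >= v_th:
            spikes.append(n * dt)         # record the spike time (s)
            v = v_reset
    return spikes
```

For a constant suprathreshold current (R*I above v_th) the model fires regularly; encoding and decoding questions then concern how such spike trains represent the input I(t).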
Dec. 9,11: Guest lectures
- TBD
- TBD