VS265: Syllabus
Syllabus
Introduction
- Theory and modeling in neuroscience
- Goals of AI/machine learning vs. theoretical neuroscience
- Turing vs. neural computation
Neuron models
- Membrane equation, compartmental model of a neuron
- Linear systems: vectors, matrices, linear neuron models
- Perceptron model and linear separability
Supervised learning
- Perceptron learning rule
- Adaptation in linear neurons, Widrow-Hoff rule
- Objective functions and gradient descent
- Multilayer networks and backpropagation
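
A minimal sketch of the perceptron learning rule listed above, assuming a toy linearly separable dataset; the data, learning rate, and variable names are illustrative only:

 import numpy as np

 # Toy linearly separable data (AND function), labels in {-1, +1}.
 X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
 y = np.array([-1, -1, -1, 1])

 w = np.zeros(2)       # weights
 b = 0.0               # bias
 eta = 0.1             # learning rate

 for epoch in range(100):
     errors = 0
     for x_i, t in zip(X, y):
         out = 1 if w @ x_i + b >= 0 else -1   # threshold unit
         if out != t:
             # Perceptron rule: nudge weights toward the misclassified target.
             w += eta * t * x_i
             b += eta * t
             errors += 1
     if errors == 0:   # all training points classified correctly
         break

 print(w, b)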
Unsupervised learning
- Linear Hebbian learning and PCA, decorrelation
- Winner-take-all networks and clustering
- Sparse, distributed coding
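
A minimal sketch of linear Hebbian learning with Oja's normalization, which drives a single linear neuron's weight vector toward the first principal component of its input; the toy data and learning rate are illustrative assumptions:

 import numpy as np

 rng = np.random.default_rng(0)

 # Zero-mean toy data with one dominant direction of variance.
 n = 5000
 source = rng.normal(size=n)
 X = np.column_stack([source, 0.3 * rng.normal(size=n)])

 w = rng.normal(size=2)          # weight vector of one linear neuron
 eta = 0.01

 for x in X:
     y = w @ x                   # linear neuron output
     # Oja's rule: Hebbian term y*x minus a decay that keeps ||w|| near 1.
     w += eta * y * (x - y * w)

 # w should now point (up to sign) along the leading principal component.
 print(w / np.linalg.norm(w))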
Plasticity and cortical maps
- Cortical maps
- Self-organizing maps, Kohonen nets
- Models of experience-dependent learning and cortical reorganization
- Manifold learning
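
A minimal sketch of a one-dimensional Kohonen self-organizing map, in which a chain of units organizes to cover a two-dimensional input space; the map size, learning schedules, and input distribution are illustrative assumptions:

 import numpy as np

 rng = np.random.default_rng(1)

 # 1-D Kohonen map: a chain of 20 units learns to cover the unit square.
 n_units = 20
 W = rng.uniform(size=(n_units, 2))       # each unit's preferred input
 positions = np.arange(n_units)           # unit coordinates on the map

 n_steps = 5000
 for t in range(n_steps):
     x = rng.uniform(size=2)              # random input sample
     winner = np.argmin(np.sum((W - x) ** 2, axis=1))   # best-matching unit
     eta = 0.5 * (1 - t / n_steps)            # decaying learning rate
     sigma = 0.5 + 3.0 * (1 - t / n_steps)    # shrinking neighborhood width
     # Units near the winner on the map are pulled toward the input too,
     # which is what produces topographic (map-like) organization.
     h = np.exp(-((positions - winner) ** 2) / (2 * sigma ** 2))
     W += eta * h[:, None] * (x - W)

 print(W)   # neighboring units should now prefer neighboring inputs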
Recurrent networks
- Hopfield networks
- Models of associative memory, pattern completion
- Line attractors and 'bump circuits'
- Dynamical models
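
A minimal sketch of a Hopfield network storing a few random patterns with the Hebbian outer-product rule and recovering one of them from a corrupted cue (pattern completion); the network size and number of patterns are illustrative:

 import numpy as np

 rng = np.random.default_rng(2)

 # Store three random +/-1 patterns with the Hebbian outer-product rule.
 N = 100
 patterns = rng.choice([-1, 1], size=(3, N))
 W = sum(np.outer(p, p) for p in patterns) / N
 np.fill_diagonal(W, 0)                   # no self-connections

 # Pattern completion: start from a corrupted copy of the first pattern.
 s = patterns[0].copy()
 flip = rng.choice(N, size=20, replace=False)
 s[flip] *= -1

 for _ in range(10):                      # asynchronous threshold updates
     for i in rng.permutation(N):
         s[i] = 1 if W[i] @ s >= 0 else -1

 print(np.mean(s == patterns[0]))         # fraction of bits recovered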
Probabilistic models and inference
- Probability theory and Bayes’ rule
- Learning and inference in generative models
- The mixture of Gaussians model
- Boltzmann machines
- Sparse coding and ‘ICA’
- Kalman filter model
- Energy-based models
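
A minimal sketch of Bayes' rule applied to a two-component mixture of Gaussians: given an observation, compute the posterior probability of the component that generated it (the inference step in learning such a generative model); all parameter values are illustrative:

 import numpy as np

 def gauss(x, mu, sigma):
     # Gaussian likelihood p(x | component)
     return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

 priors = np.array([0.7, 0.3])    # p(component)
 means = np.array([0.0, 3.0])
 stds = np.array([1.0, 0.5])

 x = 2.0                          # observed data point

 # Bayes' rule: posterior = prior * likelihood, normalized over components.
 joint = priors * gauss(x, means, stds)
 posterior = joint / joint.sum()

 print(posterior)                 # p(component | x)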
Neural implementations
- Integrate-and-fire model
- Neural encoding and decoding
- Limits of precision in neurons
- Neural synchrony and phase-based coding
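
A minimal sketch of the leaky integrate-and-fire model: the membrane equation is integrated with forward Euler, and a spike is recorded whenever the voltage crosses threshold; the parameter values are illustrative, not calibrated to any particular neuron:

 # Leaky integrate-and-fire neuron, integrated with forward Euler.
 dt = 1e-4            # time step (s)
 tau = 20e-3          # membrane time constant (s)
 V_rest = -70e-3      # resting potential (V)
 V_thresh = -54e-3    # spike threshold (V)
 V_reset = -80e-3     # reset potential (V)
 R = 1e7              # membrane resistance (ohms)
 I = 2e-9             # constant injected current (A)

 V = V_rest
 spike_times = []
 for step in range(int(0.5 / dt)):            # simulate 0.5 s
     # Membrane equation: tau * dV/dt = -(V - V_rest) + R * I
     V += (dt / tau) * (-(V - V_rest) + R * I)
     if V >= V_thresh:                        # spike-and-reset
         spike_times.append(step * dt)
         V = V_reset

 print(len(spike_times), "spikes in 0.5 s")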