VS265: Syllabus Fall2012
== Syllabus ==
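Reading abbreviations: '''HKP''' = Hertz, Krogh & Palmer, ''Introduction to the Theory of Neural Computation''; '''DJCM''' = MacKay, ''Information Theory, Inference, and Learning Algorithms''; '''DA''' = Dayan & Abbott, ''Theoretical Neuroscience''.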
==== Introduction ====
# Theory and modeling in neuroscience
# Descriptive vs. functional models
# Turing vs. neural computation
* '''Reading''': '''HKP''' chapter 1
==== Neuron models ====
# Membrane equation, compartmental model of a neuron (sketch below)
# Linear systems: vectors, matrices, linear neuron models
# Perceptron model and linear separability
* '''Reading''': '''HKP''' chapter 5, '''DJCM''' chapters 38-40
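A minimal sketch of the membrane equation in code: forward-Euler integration of a passive membrane, tau dV/dt = -(V - E_L) + R I(t). All constants below are illustrative choices, not values from the readings.

<syntaxhighlight lang="python">
import numpy as np

# Passive membrane equation: tau * dV/dt = -(V - E_L) + R * I(t)
# Illustrative constants (not course values):
tau, E_L, R = 20.0, -65.0, 10.0   # ms, mV, MOhm
dt, T = 0.1, 200.0                # ms
t = np.arange(0.0, T, dt)
I = np.where((t > 50) & (t < 150), 2.0, 0.0)   # 2 nA current step

V = np.empty_like(t)
V[0] = E_L
for k in range(1, len(t)):
    dV = (-(V[k - 1] - E_L) + R * I[k - 1]) / tau   # forward Euler
    V[k] = V[k - 1] + dt * dV

print(f"peak depolarization: {V.max():.1f} mV")   # approaches E_L + R*I
</syntaxhighlight>

The voltage relaxes toward E_L + RI with time constant tau; compartmental models chain such equations together with coupling terms between compartments.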
==== Supervised learning ====
# Perceptron learning rule (sketch below)
# Adaptation in linear neurons, Widrow-Hoff rule
# Objective functions and gradient descent
# Multilayer networks and backpropagation
* '''Reading''': '''HKP''' chapters 6-7, '''DJCM''' chapters 38-40, 44, '''DA''' chapter 8 (sec. 4-6)
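To make the perceptron learning rule concrete, here is a minimal NumPy sketch on made-up, linearly separable toy data (the data, learning rate, and seed are illustrative assumptions):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable data with labels in {-1, +1} (illustrative only)
X = rng.normal(size=(100, 2))
y = np.sign(X @ np.array([1.0, -2.0]) + 0.5)

w, b, eta = np.zeros(2), 0.0, 0.1
for epoch in range(100):
    errors = 0
    for x_i, y_i in zip(X, y):
        if y_i * (w @ x_i + b) <= 0:      # misclassified example
            w += eta * y_i * x_i          # perceptron update
            b += eta * y_i
            errors += 1
    if errors == 0:                       # separable data: rule converges
        break

print(epoch, np.mean(np.sign(X @ w + b) == y))
</syntaxhighlight>

Since updates fire only on misclassified examples, the rule halts once the data are separated, which is the classic perceptron convergence behavior.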
==== Unsupervised learning ====
# Linear Hebbian learning and PCA, decorrelation (sketch below)
# Winner-take-all networks and clustering
# Sparse, distributed coding
* '''Reading''': '''HKP''' chapter 8, '''DJCM''' chapter 36, '''DA''' chapters 8, 10
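A minimal sketch of linear Hebbian learning with Oja's normalizing decay term, which extracts the first principal component of the input; the covariance matrix and learning rate below are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D inputs (illustrative covariance)
C = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=5000)

w, eta = rng.normal(size=2), 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term plus weight decay

# Compare with the top eigenvector of the true covariance
pc1 = np.linalg.eigh(C)[1][:, -1]
print(abs(w @ pc1) / np.linalg.norm(w))   # ~1.0 when aligned with PC1
</syntaxhighlight>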
==== Plasticity and cortical maps ====
# Cortical maps
# Self-organizing maps, Kohonen nets (sketch below)
# Models of experience-dependent learning and cortical reorganization
# Manifold learning
* '''Reading''': '''HKP''' chapter 9, '''DA''' chapter 8
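The Kohonen net reduces to a few lines: a winner-take-all competition followed by a neighborhood-weighted update that drags the winner and its neighbors toward the input. All sizes, rates, and schedules below are illustrative:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# 1-D Kohonen map: a chain of units learns to tile the unit square
n_units = 20
W = rng.uniform(size=(n_units, 2))              # one weight vector per unit

n_steps = 5000
for t in range(n_steps):
    x = rng.uniform(size=2)
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    eta = 0.5 * (1 - t / n_steps)               # decaying learning rate
    sigma = 3.0 * (1 - t / n_steps) + 0.5       # shrinking neighborhood
    d = np.abs(np.arange(n_units) - winner)     # distance along the chain
    h = np.exp(-d**2 / (2 * sigma**2))          # neighborhood function
    W += eta * h[:, None] * (x - W)

print(np.round(W[:5], 2))   # neighboring units acquire neighboring weights
</syntaxhighlight>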
==== Recurrent networks ====
# Hopfield networks (sketch below)
# Pattern completion
# Line attractors and 'bump circuits'
# Models of associative memory
* '''Reading''': '''HKP''' chapters 2-3, '''DJCM''' chapter 42, '''DA''' chapter 7
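A minimal Hopfield-network sketch of pattern completion: patterns are stored with the Hebbian outer-product rule, then a corrupted cue is cleaned up by asynchronous threshold updates (pattern count and noise level are arbitrary illustrative choices):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

N, P = 100, 5                                   # units, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N                 # Hebbian outer-product rule
np.fill_diagonal(W, 0)                          # no self-connections

cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
cue[flip] *= -1                                 # corrupt 20% of the bits

s = cue.copy()
for _ in range(5):                              # a few sweeps usually settle
    for i in rng.permutation(N):                # asynchronous updates
        s[i] = 1 if W[i] @ s >= 0 else -1

print(np.mean(s == patterns[0]))                # ~1.0: pattern completed
</syntaxhighlight>

With symmetric weights and zero diagonal, each asynchronous update can only lower the network energy, so the dynamics settle into the stored pattern nearest the cue; this is the associative-memory behavior listed above.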
==== Probabilistic models and inference ====
# Probability theory and Bayes' rule
# Learning and inference in generative models
# The mixture of Gaussians model (sketch below)
# Boltzmann machines
# Sparse coding and 'ICA'
# Kalman filter model
# Energy-based models
* '''Reading''': '''DJCM''' chapters 1-3, 20-24, 41, 43, '''DA''' chapter 10
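For the mixture of Gaussians, here is a compact EM sketch in one dimension (the synthetic data and initial values are made up): the E-step computes each component's posterior responsibility for each point, and the M-step re-estimates the parameters from responsibility-weighted data.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 1-D data from two Gaussians (illustrative parameters)
x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 0.5, 200)])

pi = np.array([0.5, 0.5])       # mixing proportions
mu = np.array([-1.0, 1.0])      # means
var = np.array([1.0, 1.0])      # variances

for _ in range(50):
    # E-step: posterior responsibilities r[n, k]
    lik = pi * np.exp(-(x[:, None] - mu)**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted parameter estimates
    Nk = r.sum(axis=0)
    pi = Nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / Nk
    var = (r * (x[:, None] - mu)**2).sum(axis=0) / Nk

print(pi.round(2), mu.round(2), var.round(2))   # recovers the two clusters
</syntaxhighlight>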
==== Neural implementations ====
# Integrate-and-fire model (sketch below)
# Neural encoding and decoding
# Limits of precision in neurons
# Neural synchrony and phase-based coding
* '''Reading''': '''DA''' chapters 1-4, sec. 5.4
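Finally, a minimal leaky integrate-and-fire sketch: the passive membrane equation plus a threshold-and-reset rule turns graded voltage into spikes. Parameter values are illustrative, not taken from DA:

<syntaxhighlight lang="python">
import numpy as np

# Leaky integrate-and-fire: integrate the membrane equation and emit a
# spike (resetting V) whenever V crosses threshold. Illustrative values:
tau, E_L, R = 10.0, -70.0, 10.0     # ms, mV, MOhm
V_th, V_reset = -54.0, -80.0        # mV
dt, T, I = 0.1, 500.0, 1.8          # ms, ms, nA (constant drive)

V, spikes = E_L, []
for ti in np.arange(0.0, T, dt):
    V += dt * (-(V - E_L) + R * I) / tau
    if V >= V_th:
        spikes.append(ti)
        V = V_reset

print(f"{len(spikes)} spikes -> mean rate {1000 * len(spikes) / T:.1f} Hz")
</syntaxhighlight>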