VS265: Syllabus
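Readings refer to the course texts: HKP = Hertz, Krogh & Palmer, Introduction to the Theory of Neural Computation; DJCM = MacKay, Information Theory, Inference, and Learning Algorithms; DA = Dayan & Abbott, Theoretical Neuroscience.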
Introduction
- Theory and modeling in neuroscience
- Goals of AI/machine learning vs. theoretical neuroscience
- Turing vs. neural computation
- Reading: HKP chapter 1
Neuron models
- Membrane equation, compartmental model of a neuron (see the sketch at the end of this section)
- Linear systems: vectors, matrices, linear neuron models
- Perceptron model and linear separability
- Reading: HKP chapter 5, DJCM chapters 38-40
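A minimal numpy sketch of the membrane equation (a toy example of ours, not from the readings; all constants are arbitrary illustrative values):

  import numpy as np

  # Forward-Euler integration of the passive membrane equation:
  #   C dV/dt = -g_L (V - E_L) + I(t)
  C, g_L, E_L = 1.0, 0.1, -65.0     # capacitance, leak conductance, resting potential
  dt, T = 0.1, 200.0                # time step and duration (ms)
  steps = int(T / dt)
  I = np.where(np.arange(steps) * dt >= 50.0, 2.0, 0.0)  # current step at t = 50 ms

  V = np.empty(steps)
  V[0] = E_L
  for t in range(1, steps):
      V[t] = V[t-1] + dt * (-g_L * (V[t-1] - E_L) + I[t-1]) / C

  print(round(V[-1], 1))  # relaxes toward E_L + I/g_L = -45.0 with time constant C/g_L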
Supervised learning
- Perceptron learning rule (sketched in code below)
- Adaptation in linear neurons, Widrow-Hoff rule
- Objective functions and gradient descent
- Multilayer networks and backpropagation
- Reading: HKP chapter 6, DJCM chapters 38-40, 44, DA chapter 8 (sec. 8.4-8.6)
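A minimal demo of the perceptron learning rule (our own toy problem, logical AND; the data, targets, and learning rate are arbitrary choices):

  import numpy as np

  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
  y = np.array([-1, -1, -1, 1])          # targets in {-1, +1}
  w, b, eta = np.zeros(2), 0.0, 0.5      # weights, bias, learning rate

  for epoch in range(20):                # converges quickly on separable data
      for x, t in zip(X, y):
          out = 1.0 if w @ x + b > 0 else -1.0   # threshold unit
          if out != t:                   # update only on misclassified inputs
              w += eta * t * x
              b += eta * t

  print(np.where(X @ w + b > 0, 1, -1))  # -> [-1 -1 -1  1]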
Unsupervised learning
- Linear Hebbian learning and PCA, decorrelation (example below)
- Winner-take-all networks and clustering
- Sparse, distributed coding
- Reading: HKP chapter 8, DJCM chapter 36, DA chapters 8, 10
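A sketch of linear Hebbian learning with Oja's normalizing term, which drives a single linear unit's weight vector toward the first principal component of its input (the covariance and learning rate are arbitrary toy choices of ours):

  import numpy as np

  rng = np.random.default_rng(0)
  C = np.array([[3.0, 1.5], [1.5, 1.0]])            # data covariance
  X = rng.multivariate_normal([0, 0], C, size=5000)

  w, eta = rng.normal(size=2), 0.01
  for x in X:
      y = w @ x                          # linear unit output
      w += eta * y * (x - y * w)         # Hebbian term plus decay keeps |w| near 1

  evals, evecs = np.linalg.eigh(C)
  print(w / np.linalg.norm(w), evecs[:, -1])   # agree up to sign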
Plasticity and cortical maps
- Cortical maps
- Self-organizing maps, Kohonen nets (see the toy example below)
- Models of experience-dependent learning and cortical reorganization
- Manifold learning
- Reading: HKP chapter 9, DA chapter 8
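A toy 1-D Kohonen self-organizing map, in which units arranged along a line learn a topographic tiling of a 2-D input space (the schedule parameters below are arbitrary choices of ours):

  import numpy as np

  rng = np.random.default_rng(1)
  n_units, n_steps = 20, 5000
  W = rng.random((n_units, 2))                # each row is one unit's weight vector

  for t in range(n_steps):
      x = rng.random(2)                       # input drawn uniformly from [0,1]^2
      winner = np.argmin(np.sum((W - x) ** 2, axis=1))
      eta = 0.5 * (1 - t / n_steps)           # learning rate decays over time
      sigma = 0.5 + 3.0 * (1 - t / n_steps)   # neighborhood width shrinks
      d = np.arange(n_units) - winner         # distance from winner along the map
      h = np.exp(-d ** 2 / (2 * sigma ** 2))  # neighborhood function
      W += eta * h[:, None] * (x - W)         # pull winner and neighbors toward x

  # after training, neighboring units have nearby weights (a topographic map)
  print(np.round(np.linalg.norm(np.diff(W, axis=0), axis=1), 2))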
Recurrent networks
- Hopfield networks (demo below)
- Models of associative memory, pattern completion
- Line attractors and ‘bump circuits’
- Dynamical models
- Reading: HKP chapters 2, 3 (sec. 3.3-3.5), 7 (sec. 7.2-7.3), DJCM chapter 42, DA chapter 7
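A small Hopfield network demo: patterns are stored with the Hebbian outer-product rule and one is recalled from a corrupted cue (network size, pattern count, and noise level are arbitrary; synchronous updates are used for brevity and typically reach a fixed point in this regime):

  import numpy as np

  rng = np.random.default_rng(2)
  N, P = 100, 3
  patterns = rng.choice([-1, 1], size=(P, N))

  W = (patterns.T @ patterns) / N                # Hebbian outer-product weights
  np.fill_diagonal(W, 0)                         # no self-connections

  s = patterns[0].copy()
  flip = rng.choice(N, size=20, replace=False)   # corrupt 20% of the bits
  s[flip] *= -1

  for _ in range(10):                            # synchronous update sweeps
      s = np.where(W @ s >= 0, 1, -1)

  print(np.mean(s == patterns[0]))               # fraction recovered; 1.0 on success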
Probabilistic models and inference
- Probability theory and Bayes’ rule
- Learning and inference in generative models
- The mixture of Gaussians model (worked example below)
- Boltzmann machines
- Sparse coding and ‘ICA’
- Kalman filter model
- Energy-based models
- Reading: DJCM chapters 1-3, 20-24, 41, 43, DA chapter 10
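A sketch of learning and inference in the simplest generative model on this list, a two-component 1-D mixture of Gaussians fit by EM (the data and initialization are toy choices of ours):

  import numpy as np

  rng = np.random.default_rng(3)
  x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 0.5, 200)])

  pi, mu, sig = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
  for _ in range(50):
      # E-step: posterior responsibility of each component for each point
      like = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
      r = like / like.sum(axis=1, keepdims=True)
      # M-step: re-estimate parameters from responsibility-weighted statistics
      Nk = r.sum(axis=0)
      pi = Nk / len(x)
      mu = (r * x[:, None]).sum(axis=0) / Nk
      sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)

  print(np.round(pi, 2), np.round(mu, 2), np.round(sig, 2))  # near [0.6 0.4], [-2 3], [1 0.5]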
Neural implementations
- Integrate-and-fire model (see the sketch below)
- Neural encoding and decoding
- Limits of precision in neurons
- Neural synchrony and phase-based coding
- Reading: DA chapters 1-4, 5 (sec. 5.4)
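A leaky integrate-and-fire sketch: the membrane equation from earlier in the course plus a spike threshold and reset (the constants are arbitrary but plausible values of ours):

  dt, T = 0.1, 500.0                  # time step and duration (ms)
  tau, E_L = 20.0, -65.0              # membrane time constant (ms), resting potential (mV)
  V_th, V_reset = -50.0, -70.0        # spike threshold and reset (mV)
  drive = 20.0                        # R * I, the steady-state depolarization (mV)

  V, spikes = E_L, []
  for step in range(int(T / dt)):
      V += dt / tau * (-(V - E_L) + drive)
      if V >= V_th:                   # threshold crossing: spike and reset
          spikes.append(step * dt)
          V = V_reset

  print(len(spikes), "spikes, about", 1000 * len(spikes) / T, "Hz")  # roughly 30 Hz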