VS265: Reading

From RedwoodCenter


====Sept 30, Oct 2: Attractor Networks and Associative Memories ====
* [http://redwood.berkeley.edu/vs265/attractor-networks.pdf Handout] on attractor networks - their learning, dynamics, and how they differ from feed-forward networks
* "HKP" Chapters 2 and 3
* [http://redwood.berkeley.edu/vs265/hopfield82.pdf Hopfield82]
* [http://redwood.berkeley.edu/vs265/hopfield84.pdf Hopfield84]
* [https://www.dropbox.com/s/mh40nnj5d53o183/222960a0.pdf?dl=0 Willshaw69]
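The model in Hopfield82 can be summarized in a few lines: patterns are stored with a Hebbian outer-product rule, and recall runs the network dynamics until they settle into an attractor. A minimal sketch (the network size, seed, and number of flipped bits are illustrative choices, not from the readings):

```python
import numpy as np

# Minimal binary Hopfield network in the spirit of Hopfield82:
# store +/-1 patterns with the Hebbian outer-product rule, then
# recall by iterating the sign dynamics to a fixed point (attractor).

def train(patterns):
    """Hebbian weights W = (1/N) * sum_p x_p x_p^T, with zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, x, steps=20):
    """Synchronous update x <- sign(W x) until the state stops changing."""
    x = x.copy()
    for _ in range(steps):
        x_new = np.where(W @ x >= 0, 1, -1)
        if np.array_equal(x_new, x):
            break
        x = x_new
    return x

# Store one random pattern and recover it from a corrupted cue.
rng = np.random.default_rng(0)
p = rng.choice([-1, 1], size=32)
W = train(p[None, :])
cue = p.copy()
cue[:5] *= -1                 # flip 5 of the 32 bits
print(np.array_equal(recall(W, cue), p))
```

Unlike a feed-forward pass, the output here is defined by the dynamics reaching a stable state, which is the distinction the handout emphasizes.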

Revision as of 09:03, 3 October 2014

Aug 28: Introduction

Optional:

Sept 2: Neuron models

Background reading on dynamics, linear time-invariant systems and convolution, and differential equations:

Sept 4: Linear neuron, Perceptron

Background on linear algebra:

Sept 11: Multicompartment models, dendritic integration

Sept. 16, 18: Supervised learning

  • HKP Chapters 5 and 6
  • Handout on supervised learning in single-stage feedforward networks
  • Handout on supervised learning in multi-layer feedforward networks ("back propagation")
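The back-propagation algorithm named above fits in a short sketch: a forward pass through a two-layer network, deltas computed at the output, and errors propagated back through the chain rule. This is a hedged illustration on XOR; the architecture, seed, and learning rate are arbitrary choices, not taken from the handouts.

```python
import numpy as np

# Sketch of error back-propagation in a two-layer feed-forward network,
# trained on XOR with batch gradient descent on squared error.

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])   # XOR inputs
T = np.array([[0.], [1.], [1.], [0.]])                   # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)      # hidden-layer activations
    Y = sigmoid(H @ W2 + b2)      # output activations
    return H, Y

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
eta = 1.0

loss_init = np.mean((forward(X)[1] - T) ** 2)
for _ in range(10000):
    H, Y = forward(X)
    dY = (Y - T) * Y * (1 - Y)        # delta rule at the output layer
    dH = (dY @ W2.T) * H * (1 - H)    # error propagated back to hidden layer
    W2 -= eta * H.T @ dY; b2 -= eta * dY.sum(0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(0)
loss_final = np.mean((forward(X)[1] - T) ** 2)
print(loss_init, "->", loss_final)
```

The single-stage case in the first handout is the same update with the hidden layer removed.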

Further reading:

Sept. 23, 24: Unsupervised learning

  • HKP Chapters 8 and 9; DJCM Chapter 36; DA Chapters 8 and 10
  • Handout: Hebbian learning and PCA
  • PDP Chapter 9 (full text of Michael Jordan's tutorial on linear algebra, including section on eigenvectors)
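The connection between Hebbian learning and PCA named in the handout title can be illustrated with Oja's rule: a single linear neuron trained with a normalized Hebbian update converges to the top principal component of its input. A sketch under assumed, illustrative data (the covariance matrix, learning rate, and seed are not from the readings):

```python
import numpy as np

# Oja's rule (a stabilized Hebbian rule): for a linear neuron y = w.x
# trained on zero-mean data, w converges to the leading eigenvector of
# the data covariance, i.e. the first principal component.

rng = np.random.default_rng(1)
C = np.array([[3.0, 2.0], [2.0, 3.0]])          # top eigenvector is (1,1)/sqrt(2)
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)                  # Hebbian term plus decay

w_hat = w / np.linalg.norm(w)
top = np.array([1.0, 1.0]) / np.sqrt(2)
print(abs(w_hat @ top))                         # alignment with the top PC
```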

Optional:



