For each lecture, we also provide a list of optional readings corresponding to ideas discussed in lecture. You may read these if you are interested in a particular topic.

Optional Reading
- Bell, A.J. (1999) Levels and loops: the future of artificial intelligence and neuroscience. Phil. Trans. R. Soc. Lond. B 354: 2013–2020.
- Dreyfus, H.L. and Dreyfus, S.E. Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint. Daedalus, Winter 1988.
- Mead, C. Chapter 1: Introduction and Chapter 4: Neurons from Analog VLSI and Neural Systems, Addison-Wesley, 1989.
- Jordan, M.I. (1986) An Introduction to Linear Algebra in Parallel Distributed Processing. Chapter 9 in Rumelhart, D.E. and McClelland, J.L. (eds.), Parallel Distributed Processing, Vol. 1, MIT Press.
- Zhang K, Sejnowski TJ (2000) A universal scaling law between gray matter and white matter of cerebral cortex. PNAS, 97: 5621–5626.
- Linear neuron models
- Linear time-invariant systems and convolution
- Simulating differential equations (a minimal sketch tying these three topics together follows this list)
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex. Science, 264: 1333-1336.
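As a companion to the three handout topics above, here is a minimal sketch of forward-Euler simulation of a leaky-integrator (linear) neuron, together with a check that, because the system is linear and time-invariant, the same response is obtained by convolving the input with the neuron's impulse response. All parameter values and variable names are illustrative assumptions, not taken from the handouts.

```python
import numpy as np

# Leaky-integrator (linear) neuron:  tau * dv/dt = -v + I(t)
# Parameters below are illustrative, not from the handouts.
tau = 10.0   # membrane time constant (ms)
dt = 0.1     # Euler step (ms)
T = 200.0    # total simulated time (ms)
steps = int(T / dt)

t = np.arange(steps) * dt
I = np.where((t > 50) & (t < 150), 1.0, 0.0)  # step current input

# Forward-Euler integration of the membrane equation
v = np.zeros(steps)
for n in range(steps - 1):
    dv = (-v[n] + I[n]) / tau
    v[n + 1] = v[n] + dt * dv

# Because the neuron is linear and time-invariant, the same response is
# the input convolved with the impulse response h(t) = (1/tau) exp(-t/tau).
h = (dt / tau) * np.exp(-t / tau)
v_conv = np.convolve(I, h)[:steps]

print(np.max(np.abs(v - v_conv)))  # small; the two agree up to discretization error
```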
Optional reading for more background:
- Handout on supervised learning in single-stage feedforward networks (a delta-rule sketch follows this list)
- Handout on supervised learning in multi-layer feedforward networks - "backpropagation"
- Y. LeCun, L. Bottou, G. Orr, and K.-R. Müller (1998) "Efficient BackProp," in Neural Networks: Tricks of the Trade (G. Orr and K.-R. Müller, eds.), Springer.
- NetTalk demo
- Handout: Hebbian learning and PCA (an Oja's-rule sketch follows this list)
- HKP Chapter 8 (Hertz, Krogh, and Palmer, Introduction to the Theory of Neural Computation)
- PDP Chapter 9 (full text of Michael Jordan's tutorial on linear algebra, including a section on eigenvectors)
- Földiák, P. (1990) Forming sparse representations by local anti-Hebbian learning. Biol. Cybern. 64: 165–170.
- HKP Chapter 9
- Olshausen BA, Field DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381: 607–609.
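For the supervised-learning handouts above, here is a minimal sketch of the delta rule for a single linear unit, i.e. stochastic gradient descent on squared error. The toy data, learning rate, and variable names are illustrative assumptions, not taken from the handouts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: targets generated by a hidden linear map plus noise.
X = rng.normal(size=(200, 3))           # 200 samples, 3 input features
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

# Delta rule:  w <- w + eta * (target - output) * input,
# i.e. stochastic gradient descent on the squared error of a linear unit.
w = np.zeros(3)
eta = 0.01
for epoch in range(50):
    for x_i, y_i in zip(X, y):
        err = y_i - w @ x_i
        w += eta * err * x_i

print(w)  # close to w_true
```

The same update, applied layer by layer via the chain rule, is what the backpropagation handout generalizes to multi-layer networks.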
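For the Hebbian learning and PCA handout, here is a minimal sketch of Oja's rule: a Hebbian update with a decay term that keeps the weight vector bounded and converges to the first principal component of the input (cf. HKP Chapter 8). The covariance matrix and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D data; the leading eigenvector of C points along [1, 1]/sqrt(2).
C = np.array([[3.0, 2.0],
              [2.0, 3.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

# Oja's rule:  w <- w + eta * y * (x - y * w),  with  y = w . x
# The decay term -y^2 * w keeps |w| bounded; the stable fixed point is the
# unit-norm leading eigenvector of the input covariance (the first PC).
w = rng.normal(size=2)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

w_unit = w / np.linalg.norm(w)
print(w_unit)  # approximately +/-[0.707, 0.707], the leading eigenvector
```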