For each lecture, we also provide a list of optional readings corresponding to ideas discussed in lecture. You may read these if you are interested in a particular topic.

Optional Reading
- Bell, A.J. Levels and loops: the future of artificial intelligence and neuroscience. Phil. Trans.: Bio. Sci. 354: 2013–2020 (1999).
- Dreyfus, H.L. and Dreyfus, S.E. Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint. Daedalus, Winter 1988.
- Mead, C. Chapter 1: Introduction and Chapter 4: Neurons from Analog VLSI and Neural Systems, Addison-Wesley, 1989.
- Jordan, M.I. An Introduction to Linear Algebra in Parallel Distributed Processing in McClelland and Rumelhart, Parallel Distributed Processing, MIT Press, 1985.
- Zhang K, Sejnowski TJ (2000) A universal scaling law between gray matter and white matter of cerebral cortex. PNAS, 97: 5621–5626.
- Linear neuron models
- Linear time-invariant systems and convolution
- Simulating differential equations
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex. Science, 264: 1333–1336.
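The topics above (linear neuron models, linear time-invariant systems, and simulating differential equations) come together in a small numerical example. Below is a minimal sketch, not taken from the handouts: a leaky-integrator neuron, tau dV/dt = -V + I(t), simulated with the forward-Euler method. All parameter values are illustrative.

```python
import numpy as np

# Forward-Euler simulation of a linear (leaky integrator) neuron model:
#   tau * dV/dt = -V + I(t)
# Parameter values are illustrative, not from the course handouts.
tau = 0.02      # membrane time constant (s)
dt = 0.001      # integration time step (s); keep dt << tau for accuracy
T = 0.2         # total simulated time (s)
steps = int(T / dt)

I = np.ones(steps)          # constant input drive (arbitrary units)
V = np.zeros(steps)         # membrane potential trace, V(0) = 0

for t in range(steps - 1):
    # Euler update: V[t+1] = V[t] + dt * dV/dt
    V[t + 1] = V[t] + dt * (-V[t] + I[t]) / tau

# Because the system is linear and time-invariant, V is (up to
# discretization error) the convolution of I with the impulse
# response (1/tau) * exp(-t/tau); for a constant input it relaxes
# exponentially toward the steady state V = I.
```

For the constant input used here, the trace approaches V = 1 once the simulation has run for several multiples of tau, which is a quick sanity check on the integration step.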
Optional reading for more background:
- Handout on supervised learning in single-stage feedforward networks
- Handout on supervised learning in multi-layer feedforward networks - "backpropagation"
- Y. LeCun, L. Bottou, G. Orr, and K. Muller (1998) "Efficient BackProp," in Neural Networks: Tricks of the Trade (G. Orr and K. Muller, eds.).
- Pouget, A., and Sejnowski, T.J. (1997) Spatial transformations in the parietal cortex using basis functions. Journal of Cognitive Neuroscience. 9(2): 222–237.
- NetTalk demo
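The handouts above cover supervised learning by gradient descent, including backpropagation in multi-layer feedforward networks. As a companion to them, here is a minimal numpy sketch of backpropagation in a one-hidden-layer sigmoid network trained on XOR; the task, network size, learning rate, and epoch count are all illustrative choices, not taken from the handouts.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised task: learn y = x1 XOR x2 (an illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of sigmoid units (sizes are arbitrary).
W1 = rng.normal(0.0, 1.0, (2, 4))
b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: squared-error loss, chain rule applied layer by layer.
    # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)), written in terms of the
    # forward-pass activations.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent updates (batch mode over all four patterns)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

mse = float(((out - y) ** 2).mean())
```

Training on the full four-pattern batch with a fixed learning rate is the simplest variant; the "Efficient BackProp" reading above discusses why choices like input normalization and per-weight learning rates matter on less forgiving problems.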