# VS298: Reading

From RedwoodCenter


## Revision as of 17:57, 2 October 2008

For each lecture, we also have a list of optional readings corresponding to ideas discussed in lecture. You may read these if you are interested in a particular topic: see Optional Reading.

#### 2 Sep

- Bell, A.J. *Levels and loops: the future of artificial intelligence and neuroscience*. Phil. Trans.: Bio. Sci. **354**: 2013-2020 (1999).
- Dreyfus, H.L. and Dreyfus, S.E. *Making a Mind vs. Modeling the Brain: Artificial Intelligence Back at a Branchpoint*. Daedalus, Winter 1988.
- Mead, C. Chapter 1: Introduction and Chapter 4: Neurons, from *Analog VLSI and Neural Systems*, Addison-Wesley, 1989.
- Jordan, M.I. An Introduction to Linear Algebra in Parallel Distributed Processing, in McClelland and Rumelhart, *Parallel Distributed Processing*, MIT Press, 1985.
- Zhang K, Sejnowski TJ (2000) A universal scaling law between gray matter and white matter of cerebral cortex. PNAS, 97: 5621-5626.

#### 4 Sep

- Linear neuron models
- Linear time-invariant systems and convolution
- Simulating differential equations
- Carandini M, Heeger D (1994) Summation and division by neurons in primate visual cortex. Science, 264: 1333-1336.
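The first three topics above fit together: a linear (leaky-integrator) neuron is a linear time-invariant system, so its response can be computed either by numerically simulating its differential equation or by convolving the input with the neuron's impulse response. A minimal sketch, with illustrative parameter values not taken from the course handouts:

```python
import numpy as np

# Hedged sketch: a leaky-integrator neuron, tau * dV/dt = -V + I(t),
# simulated with forward Euler and checked against convolution with the
# system's impulse response h(t) = (1/tau) * exp(-t/tau).
dt, tau, T = 0.001, 0.02, 0.5          # step (s), time constant (s), duration (s)
t = np.arange(0.0, T, dt)
I = (t > 0.1).astype(float)            # step input current switched on at t = 0.1 s

# Forward Euler integration of the differential equation
V = np.zeros_like(t)
for k in range(1, len(t)):
    V[k] = V[k - 1] + dt * (-V[k - 1] + I[k - 1]) / tau

# The same system described as an LTI convolution with its impulse response
h = (1.0 / tau) * np.exp(-t / tau)
V_conv = np.convolve(I, h)[:len(t)] * dt

print(np.max(np.abs(V - V_conv)))      # small: the two descriptions agree up to O(dt)
```

The agreement up to discretization error is the point: "simulating the differential equation" and "convolving with the impulse response" are two views of the same linear system.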

Optional reading for more background:

#### 16 Sep

- Handout on supervised learning in single-stage feedforward networks
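As a concrete instance of supervised learning in a single-stage network, here is a minimal sketch of an error-correcting (delta/LMS) update for one linear unit; the toy data and learning rate are illustrative, not from the handout:

```python
import numpy as np

# Hedged sketch: the delta (LMS) rule for a single linear unit,
# w += eta * (target - w.x) * x, applied to noise-free toy regression data.
rng = np.random.default_rng(3)
w_true = np.array([2.0, -1.0])         # illustrative "teacher" weights
X = rng.normal(size=(1000, 2))
T = X @ w_true                         # targets generated by the teacher

w = np.zeros(2)
eta = 0.05
for x, t in zip(X, T):
    y = w @ x                          # linear unit's output
    w += eta * (t - y) * x             # error-correcting (delta) update

print(w)                               # converges toward w_true
```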

#### 18 Sep

- Handout on supervised learning in multi-layer feedforward networks ("backpropagation")
- Y. LeCun, L. Bottou, G. Orr, and K. Muller (1998) ["Efficient BackProp"](http://connes.berkeley.edu/~amir/vs298/lecun-98b.pdf), in Neural Networks: Tricks of the Trade (G. Orr and K. Muller, eds.).
- [NetTalk demo](http://www.cnl.salk.edu/ParallelNetsPronounce/index.php)
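The backpropagation idea can be sketched in a few lines: run a forward pass through the layers, then push the output error back through the chain rule to get gradients for each weight matrix. A minimal one-hidden-layer version on illustrative toy data (architecture and learning rate are assumptions, not the handout's):

```python
import numpy as np

# Hedged sketch of backpropagation in a two-layer (one hidden layer) network
# trained by gradient descent on squared error with sigmoid units.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 2))                       # toy inputs
Y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)   # toy binary target

W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)          # input -> hidden
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)          # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

eta, losses = 2.0, []
for _ in range(2000):
    # forward pass
    H = sigmoid(X @ W1 + b1)
    O = sigmoid(H @ W2 + b2)
    losses.append(np.mean((O - Y) ** 2))
    # backward pass: chain rule through the squared error and each sigmoid
    dO = (O - Y) * O * (1 - O) / len(X)
    dH = (dO @ W2.T) * H * (1 - H)
    # gradient-descent weight updates
    W2 -= eta * H.T @ dO; b2 -= eta * dO.sum(0)
    W1 -= eta * X.T @ dH; b1 -= eta * dH.sum(0)

print(losses[0], losses[-1])   # the training error should drop substantially
```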

#### 23 Sep

- Handout: [Hebbian learning and PCA](http://connes.berkeley.edu/~amir/vs298/hebb-pca.pdf)
- **HKP** Chapter 8
- **PDP** [Chapter 9](http://connes.berkeley.edu/~amir/vs298/chap9.pdf) (full text of Michael Jordan's tutorial on linear algebra, including section on eigenvectors)
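The link between Hebbian learning and PCA can be illustrated with Oja's stabilized Hebbian rule, under which a single linear unit's weight vector converges to the leading eigenvector of the input covariance (cf. HKP Ch. 8). The covariance and learning rate below are illustrative:

```python
import numpy as np

# Hedged sketch: Oja's rule, w += eta * y * (x - y * w), drives the weight
# vector of one linear neuron toward the first principal component of its input.
rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 1.0]])                # illustrative input covariance
X = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2); w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x                                         # linear neuron output
    w += eta * y * (x - y * w)                        # Hebbian growth + normalizing decay

# Compare with the top eigenvector of C from ordinary linear algebra
evals, evecs = np.linalg.eigh(C)
pc1 = evecs[:, -1]
print(abs(w @ pc1) / np.linalg.norm(w))               # near 1: aligned with PC1
```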

#### 30 Sep

- Foldiak, P. [Forming sparse representations by local anti-Hebbian learning](http://connes.berkeley.edu/~amir/vs298/foldiak90.pdf). Biol. Cybern. 64: 165-170 (1990).
- **HKP** Chapter 9
- Olshausen BA, Field DJ (1996) Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381: 607-609.
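The core mechanism in Foldiak (1990) — lateral weights updated anti-Hebbianly, in proportion to minus the output correlation — can be sketched in a minimal two-unit linear version. This is only the decorrelation ingredient, not Foldiak's full network (no thresholds, no Hebbian feedforward learning), and the input statistics are illustrative:

```python
import numpy as np

# Hedged sketch: an anti-Hebbian lateral weight v from unit 1 to unit 2,
# updated as v -= eta * y1 * y2, learns to cancel the correlation between
# the two units' inputs, leaving decorrelated outputs.
rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=20000)

v = 0.0                             # lateral (anti-Hebbian) weight
eta = 0.01
for x in X:
    y1 = x[0]
    y2 = x[1] + v * y1              # unit 2 sees unit 1 through the lateral weight
    v -= eta * y1 * y2              # anti-Hebbian: weaken when outputs co-fire

# After learning, the two outputs should be nearly uncorrelated
Y1 = X[:, 0]
Y2 = X[:, 1] + v * Y1
print(v, np.corrcoef(Y1, Y2)[0, 1])
```

At the fixed point E[y1 y2] = 0, so with an input correlation of 0.8 and unit input variances, v settles near -0.8 and the output correlation near zero.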