VS298 (Fall 06): Suggested projects
* '''NETtalk.''' Train a multi-layer perceptron to convert text to speech. You can get Sejnowski & Rosenberg's original paper and the data they used here. (You will need a DECtalk speech synthesizer to play the phonemes - you can probably pick up a used one online.)
* '''Recognition of handwritten digits.''' Train an MLP to classify handwritten digits 0-9. You can get some training data here. You may wish to follow the convolutional network methodology of Yann LeCun (try the simpler, earlier model), or invent your own method. (A minimal MLP training sketch appears after this list.)
* '''Sparse coding and decorrelation.''' Implement Peter Foldiak's network and train it on the handwritten digits above to learn the features of this data. You may wish to then try supervised learning on the learned features to see if it has simplified the classification problem. (A rough sketch of Foldiak's update rules appears after this list.)
* '''Cortical maps.''' The elastic net model of [http://redwood.berkeley.edu/~amir/vs298/durbin-mitchison.pdf Durbin and Mitchison] is typical of many cortical map models in that they learn directly on a parameterized feature space. But the cortex simply gets a bunch of inputs from the LGN, and so it needs to learn features such as orientation at the same time as it organizes them into a feature map. How would you go about learning a feature map for orientation and position directly from simulated LGN inputs? (You may wish to consult the book of [http://nn.cs.utexas.edu/computationalmaps/ Risto Miikkulainen] for recent efforts in this area. A sketch of the basic elastic net update appears after this list.)
* '''Feedforward vs. recurrent weights.''' As we discussed in class, one can implement a given input-output mapping in a neural network using just feedforward weights: <math>y = W\, x</math>, or using just recurrent weights: <math>\tau\, dy/dt + y = x + M y</math>, or both: <math>\tau\, dy/dt + y = W x + M y</math>. (The steady-state algebra relating the two implementations is sketched after this list.)
* '''Restricted Boltzmann machines.''' (A minimal contrastive divergence training sketch appears after this list.)
* '''Integrate-and-fire model neuron.''' (A simple simulation sketch appears after this list.)
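
For the handwritten digits project, here is a minimal sketch of a two-layer perceptron trained with backpropagation. It assumes the digit data has already been loaded as a numpy array X of pixel values in [0, 1] and an integer label array labels; the hidden layer size, learning rate, and epoch count are placeholder choices, not recommendations.

<pre>
# Minimal two-layer perceptron for 10-class digit classification (numpy only).
# X: (n_samples, n_pixels) array of pixel values in [0, 1]
# labels: (n_samples,) array of integer class labels 0-9
import numpy as np

rng = np.random.default_rng(0)

def one_hot(labels, n_classes=10):
    T = np.zeros((labels.size, n_classes))
    T[np.arange(labels.size), labels] = 1.0
    return T

def train_mlp(X, labels, n_hidden=50, lr=0.1, epochs=20):
    n, d = X.shape
    T = one_hot(labels)
    W1 = rng.normal(0, 0.1, (d, n_hidden))    # input -> hidden weights
    W2 = rng.normal(0, 0.1, (n_hidden, 10))   # hidden -> output weights
    for _ in range(epochs):
        # forward pass
        H = np.tanh(X @ W1)
        Z = H @ W2
        Z -= Z.max(axis=1, keepdims=True)
        Y = np.exp(Z)
        Y /= Y.sum(axis=1, keepdims=True)      # softmax class probabilities
        # backward pass for the cross-entropy loss
        dZ = (Y - T) / n
        dW2 = H.T @ dZ
        dH = (dZ @ W2.T) * (1.0 - H ** 2)      # tanh derivative
        dW1 = X.T @ dH
        W1 -= lr * dW1
        W2 -= lr * dW2
    return W1, W2

def predict(X, W1, W2):
    return np.argmax(np.tanh(X @ W1) @ W2, axis=1)
</pre>

A convolutional variant in the spirit of LeCun's early models would replace the first fully connected layer with shared local weights, but the training loop would look much the same.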
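For the sparse coding and decorrelation project, here is a rough sketch of Foldiak-style learning with Hebbian feedforward weights, anti-Hebbian lateral weights, and adaptive thresholds. The crude settling scheme, learning rates, target activity p, and unit count below are illustrative assumptions rather than the paper's values, so check them against Foldiak's original description.

<pre>
# Sketch of a Foldiak-style sparse coding network.
# X: (n_samples, n_inputs) array, e.g. the digit images above.
import numpy as np

rng = np.random.default_rng(0)

def train_foldiak(X, n_units=16, p=0.1, alpha=0.1, beta=0.02,
                  gamma=0.02, epochs=10, settle_iters=20):
    n, d = X.shape
    Q = rng.normal(0, 0.1, (n_units, d))   # feedforward weights
    W = np.zeros((n_units, n_units))       # lateral (anti-Hebbian) weights
    t = np.ones(n_units)                   # adaptive thresholds
    for _ in range(epochs):
        for x in X:
            drive = Q @ x
            y = np.zeros(n_units)
            # a few synchronous updates as a crude approximation of settling
            for _ in range(settle_iters):
                y = (drive + W @ y - t > 0).astype(float)
            # anti-Hebbian lateral update decorrelates the units
            W -= alpha * (np.outer(y, y) - p ** 2)
            np.fill_diagonal(W, 0.0)
            W = np.minimum(W, 0.0)          # keep lateral weights inhibitory
            # Hebbian feedforward update moves weights toward the input
            Q += beta * y[:, None] * (x[None, :] - Q)
            # threshold adaptation keeps each unit near the target activity p
            t += gamma * (y - p)
    return Q, W, t
</pre>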
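For the cortical maps project, a natural starting point is the basic elastic net update itself, before replacing the parameterized feature space with simulated LGN inputs. The sketch below treats the cortex as a 1-D chain of units whose feature vectors are pulled toward stimulus feature points and toward their cortical neighbours; the feature dimensions, constants, and annealing schedule are illustrative assumptions, not the values used by Durbin and Mitchison.

<pre>
# Sketch of an elastic-net-style cortical map update on a 1-D cortical sheet.
import numpy as np

rng = np.random.default_rng(0)

def elastic_net_step(Y, stimuli, K, alpha=0.2, beta=2.0):
    """One update of cortical feature points Y (n_units, n_features)."""
    # attraction of each cortical point toward each stimulus,
    # weighted by a normalized Gaussian in feature space
    d2 = ((stimuli[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-d2 / (2 * K ** 2))
    phi /= phi.sum(axis=1, keepdims=True)
    pull = alpha * (phi[:, :, None] * (stimuli[:, None, :] - Y[None, :, :])).sum(0)
    # elasticity term pulls each point toward its cortical neighbours
    smooth = np.zeros_like(Y)
    smooth[1:-1] = Y[:-2] - 2 * Y[1:-1] + Y[2:]
    return Y + pull + beta * K * smooth

# usage: anneal K from large to small over many passes through the stimuli
Y = rng.random((100, 3))          # 100 cortical units, 3 feature dimensions
stimuli = rng.random((500, 3))    # training ensemble of feature points
for K in np.linspace(0.2, 0.02, 50):
    Y = elastic_net_step(Y, stimuli, K)
</pre>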
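For the feedforward vs. recurrent weights project, the two implementations can be related through the fixed point of the recurrent dynamics (assuming the dynamics are stable and the relevant matrices are invertible):

<math>\tau \frac{dy}{dt} = 0 \quad\Rightarrow\quad y = W x + M y \quad\Rightarrow\quad y = (I - M)^{-1} W\, x .</math>

So the purely recurrent network <math>\tau\, dy/dt + y = x + M y</math> reproduces a given feedforward mapping <math>y = A x</math> at steady state whenever <math>(I - M)^{-1} = A</math>, i.e. <math>M = I - A^{-1}</math>.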
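For the restricted Boltzmann machine project, here is a minimal sketch of CD-1 (one-step contrastive divergence) training for a binary RBM. It assumes the data X is an array of values in [0, 1], e.g. the digit images above; the hidden layer size, learning rate, and epoch count are placeholder choices.

<pre>
# Minimal CD-1 training sketch for a binary restricted Boltzmann machine.
# X: (n_samples, n_visible) array of values in [0, 1]
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(X, n_hidden=64, lr=0.05, epochs=10):
    n, n_visible = X.shape
    W = rng.normal(0, 0.01, (n_visible, n_hidden))
    b = np.zeros(n_visible)    # visible biases
    c = np.zeros(n_hidden)     # hidden biases
    for _ in range(epochs):
        for v0 in X:
            # positive phase: hidden probabilities and a sample given the data
            ph0 = sigmoid(v0 @ W + c)
            h0 = (rng.random(ph0.shape) < ph0).astype(float)
            # negative phase: one step of Gibbs sampling (CD-1)
            pv1 = sigmoid(h0 @ W.T + b)
            v1 = (rng.random(pv1.shape) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + c)
            # contrastive divergence updates
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
            b += lr * (v0 - v1)
            c += lr * (ph0 - ph1)
    return W, b, c
</pre>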
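For the integrate-and-fire project, here is a simple simulation of a leaky integrate-and-fire neuron driven by a constant current. The membrane parameters and input current are generic textbook-style choices, not values specified in the course.

<pre>
# Leaky integrate-and-fire neuron driven by a constant input current I.
# Integrates tau_m dV/dt = -(V - V_rest) + R*I with a threshold-and-reset rule.
import numpy as np

def simulate_lif(I=1.5, tau_m=20.0, R=1.0, V_rest=0.0, V_thresh=1.0,
                 V_reset=0.0, dt=0.1, T=200.0):
    steps = int(T / dt)
    V = np.full(steps, V_rest)
    spikes = []
    for i in range(1, steps):
        dV = (-(V[i - 1] - V_rest) + R * I) * dt / tau_m
        V[i] = V[i - 1] + dV
        if V[i] >= V_thresh:        # threshold crossing: record a spike
            spikes.append(i * dt)
            V[i] = V_reset          # reset the membrane potential
    return V, spikes

V, spikes = simulate_lif()
print(f"{len(spikes)} spikes in 200 ms")
</pre>

Plotting the membrane potential trace and the firing rate as a function of input current is a good first exercise; adding a refractory period or noisy input current are natural extensions.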