TCN Paper Ideas
Post ideas about interesting papers to read below.
Spring 2016
Ideas from the Nando de Freitas AMA:
- Teaching Machines to Read and Comprehend, http://arxiv.org/abs/1506.03340
- Pointer Networks, http://arxiv.org/abs/1506.03134
- Neural GPUs Learn Algorithms, http://arxiv.org/abs/1511.08228
- Learning to See by Moving, http://arxiv.org/abs/1505.01596
- Unitary Evolution Recurrent Neural Networks, http://arxiv.org/abs/1511.06464
- Action-Conditional Video Prediction using Deep Networks in Atari Games, http://arxiv.org/abs/1507.08750
- Deep Reinforcement Learning with Double Q-learning, http://arxiv.org/abs/1509.06461
- Towards Trainable Media: Using Waves for Neural Network-Style Training, http://arxiv.org/abs/1510.03776
- Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis, http://www-personal.umich.edu/~reedscot/nips15_rotator_final.pdf
- Hippocampal place cells construct reward related sequences through unexplored space
- Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images, http://arxiv.org/abs/1506.07365
Vijay Mohan, a post-doc from UNC, generously put together this reading list for me on computational models of neuromodulators. I haven't read them all yet, but it looks like some good stuff and might be a good way to add some neuroscience to the mix to counterbalance all the deep learning.
- Learning reward timing in cortex through reward dependent expression of synaptic plasticity, http://www.ncbi.nlm.nih.gov/pubmed/19346478
- Central Cholinergic Neurons Are Rapidly Recruited by Reinforcement Feedback, http://www.cell.com/cell/abstract/S0092-8674%2815%2900973-3
- Selective Activation of a Putative Reinforcement Signal Conditions Cued Interval Timing in Primary Visual Cortex, http://www.sciencedirect.com/science/article/pii/S0960982215004790
- Uncertainty, Neuromodulation, and Attention, http://www.sciencedirect.com/science/article/pii/S0896627305003624
- Twenty-Five Lessons from Computational Neuromodulation, http://www.gatsby.ucl.ac.uk/~dayan/papers/25lessons.pdf