TCN Paper Ideas
Post ideas about interesting papers to read below.
Spring 2016
Ideas from the Nando de Freitas AMA:
- Teaching Machines to Read and Comprehend, http://arxiv.org/abs/1506.03340
- Pointer Networks, http://arxiv.org/abs/1506.03134
- Neural GPUs Learn Algorithms, http://arxiv.org/abs/1511.08228
- Learning to See by Moving, http://arxiv.org/abs/1505.01596
- Unitary Evolution Recurrent Neural Networks, http://arxiv.org/abs/1511.06464
- Action-Conditional Video Prediction using Deep Networks in Atari Games, http://arxiv.org/abs/1507.08750
- Deep Reinforcement Learning with Double Q-learning, http://arxiv.org/abs/1509.06461
- Towards Trainable Media: Using Waves for Neural Network-Style Training, http://arxiv.org/abs/1510.03776
- Weakly-supervised Disentangling with Recurrent Transformations for 3D View Synthesis, http://www-personal.umich.edu/~reedscot/nips15_rotator_final.pdf
- Hippocampal place cells construct reward related sequences through unexplored space
- Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images, http://arxiv.org/abs/1506.07365
Vijay Mohan, a post-doc from UNC, generously put together this reading list for me on computational models of neuromodulators. I haven't read them all yet, but it looks like some good stuff and might be a good way to add some neuroscience to the mix to counterbalance all the deep learning.
- Learning reward timing in cortex through reward dependent expression of synaptic plasticity
- Central Cholinergic Neurons Are Rapidly Recruited by Reinforcement Feedback
- Selective Activation of a Putative Reinforcement Signal Conditions Cued Interval Timing in Primary Visual Cortex
- Uncertainty, Neuromodulation, and Attention
- Twenty-Five Lessons from Computational Neuromodulation