Difference between revisions of "VS265: Reading Fall2012"

From RedwoodCenter
 
* [http://redwood.berkeley.edu/vs265/bayes-prob.pdf Bayesian probability theory and generative models]

* [http://redwood.berkeley.edu/vs265/mog.pdf Mixture of Gaussians model]
* T.J. Loredo, [http://redwood.berkeley.edu/vs265/loredo-laplace-supernova.pdf From Laplace to supernova SN1987A: Bayesian inference in astrophysics]
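As a concrete illustration of the mixture-of-Gaussians model linked above, here is a minimal 1-D EM sketch. The quantile-based initialization and variable names are illustrative choices, not taken from the handout:

```python
import numpy as np

def em_mog(X, K, n_iter=50):
    """Fit a K-component 1-D mixture of Gaussians by EM (illustrative sketch)."""
    N = X.shape[0]
    # Deterministic init: spread the means over the data quantiles.
    mu = np.quantile(X, np.linspace(0.25, 0.75, K))
    var = np.ones(K)
    w = np.ones(K) / K          # mixture weights
    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] = p(component k | x_n).
        sq = (X[:, None] - mu[None, :]) ** 2
        log_p = np.log(w) - 0.5 * (np.log(2 * np.pi * var) + sq / var)
        log_p -= log_p.max(axis=1, keepdims=True)   # stabilize before exp
        r = np.exp(log_p)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from responsibility-weighted data.
        Nk = r.sum(axis=0)
        mu = (r * X[:, None]).sum(axis=0) / Nk
        var = (r * (X[:, None] - mu) ** 2).sum(axis=0) / Nk
        w = Nk / N
    return mu, var, w

# Two well-separated clusters; the fitted means should land near 0 and 5.
rng = np.random.default_rng(0)
X = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(5.0, 1.0, 200)])
mu, var, w = em_mog(X, K=2)
```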
  
 
==== 14 Nov ====

Revision as of 06:36, 16 November 2012

==== 27 Aug ====

==== 29 Aug ====

Optional:

==== 05 Sep ====

==== 17 Sep ====

* Handout on supervised learning in multi-layer feedforward networks - "backpropagation"
* Y. LeCun, L. Bottou, G. Orr, and K. Muller (1998) "Efficient BackProp," in Neural Networks: Tricks of the Trade (G. Orr and K. Muller, eds.).
* NetTalk demo
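The backpropagation handout and "Efficient BackProp" above concern gradient computation in multi-layer feedforward networks. As a quick illustration, here is a minimal sketch for a two-layer tanh network with squared-error loss; the layer sizes and names are illustrative, not from the handout:

```python
import numpy as np

def forward(x, W1, W2):
    h = np.tanh(W1 @ x)    # hidden-layer activations
    y = W2 @ h             # linear output units
    return h, y

def backprop(x, t, W1, W2):
    """Gradients of E = 0.5 * ||y - t||^2 w.r.t. both weight matrices."""
    h, y = forward(x, W1, W2)
    delta_out = y - t                                # dE/dy at the output
    dW2 = np.outer(delta_out, h)
    delta_hid = (W2.T @ delta_out) * (1.0 - h ** 2)  # chain rule through tanh
    dW1 = np.outer(delta_hid, x)
    return dW1, dW2
```

A standard sanity test is to compare the returned gradients against finite differences of the error function.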

==== 24 Sep ====

Optional:

==== 8 Oct ====

Optional readings:

==== 15 Oct ====

Here are some additional links to papers mentioned in lecture. Optional reading:

* Gary Blasdel, Orientation selectivity, preference, and continuity in monkey striate cortex, J Neurosci, 1992. Another source of many nice images is the galleries on Amiram Grinvald's site: [1]

* From Clay Reid's lab, Functional imaging with cellular resolution reveals precise micro-architecture in visual cortex. Make sure you look at the supplementary material and videos on their web site (which seems partly broken) [2].

==== 22 Oct ====

Additional reading:

==== 24 Oct ====

==== 29 Oct ====

Chris Hillar guest lecture:

==== 5 Nov ====

==== 14 Nov ====

* HKP Chapter 7, section 7.1 (Boltzmann machines)
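To make the Boltzmann machine reading concrete, here is a minimal Gibbs-sampling sketch for binary stochastic units at unit temperature; the function and variable names are illustrative, not taken from HKP:

```python
import numpy as np

def gibbs_sweep(s, W, b, rng):
    """One Gibbs-sampling sweep over all units of a binary Boltzmann machine.

    Energy E(s) = -0.5 * s @ W @ s - b @ s, with symmetric W, zero diagonal,
    and s_i in {0, 1}; each unit is resampled from p(s_i = 1 | all others),
    which is a logistic function of its total input.
    """
    for i in range(len(s)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
        s[i] = 1.0 if rng.random() < p_on else 0.0
    return s
```

Repeated sweeps produce a Markov chain whose stationary distribution is the Boltzmann distribution over states, which is the sampling step underlying Boltzmann machine learning.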

Application to neural data analysis: