Jascha Sohl-Dickstein

From RedwoodCenter

Revision as of 01:47, 9 November 2011

[Image: jascha_picture.jpg]

I am a graduate student in the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley. I am a member of Bruno Olshausen's lab and the Biophysics Graduate Group. My email address is jascha@berkeley.edu.

I am interested in how we learn to perceive the world. There is evidence that much of our representation of the world is learned during development rather than being genetically hardwired: everything from the way light intensity is correlated across adjacent patches of the retina, all the way up to rules for social interaction. How this unsupervised learning problem is solved - how we learn the structure inherent in the world by experiencing examples of it - is not well understood. This is the problem I am interested in tackling.

Practically, I mostly develop techniques for estimating the parameters of highly flexible but intractable probabilistic models, using ideas from statistical mechanics and dynamical systems.

Code

  • MPF - This repository contains Matlab code implementing Minimum Probability Flow learning (MPF) for several cases, specifically:
    • MPF_ising/ - parameter estimation in the Ising model
    • MPF_RBM_compare_log_likelihood/ - parameter estimation in Restricted Boltzmann Machines. This directory also includes code comparing the log likelihood of small RBMs trained via pseudolikelihood and Contrastive Divergence to ones trained via MPF.
  • HAIS - This repository contains Matlab code to perform partition function estimation, log likelihood estimation, and importance weight estimation in models with intractable partition functions and continuous state spaces, using Hamiltonian Annealed Importance Sampling (HAIS). It can also be used for standard Hamiltonian Monte Carlo sampling (single step, with partial momentum refreshment).
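To make the Ising case concrete, here is a minimal sketch in Python/NumPy of MPF parameter estimation for a small Ising model with single-spin-flip connectivity (the released code above is Matlab; the variable names and model size here are purely illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4  # number of spins; small enough to enumerate all 2^n states exactly

# Random symmetric couplings with zero diagonal; energy E(x) = -1/2 x^T J x
J_true = np.triu(rng.normal(scale=0.5, size=(n, n)), 1)
J_true = J_true + J_true.T

# Exact samples by enumerating all states, with spins in {-1, +1}
states = np.array([[1.0 if (s >> i) & 1 else -1.0 for i in range(n)]
                   for s in range(2 ** n)])
energy = -0.5 * np.einsum('si,ij,sj->s', states, J_true, states)
p = np.exp(-energy)
p /= p.sum()
X = states[rng.choice(2 ** n, size=50_000, p=p)]

def unpack(theta):
    J = np.zeros((n, n))
    J[np.triu_indices(n, 1)] = theta
    return J + J.T

def mpf_objective(theta):
    # MPF with single-spin-flip connectivity:
    #   K = mean over data x of sum_i exp((E(x) - E(x with spin i flipped)) / 2)
    # For this Ising energy, (E(x) - E(x')) / 2 = -x_i (J x)_i.
    J = unpack(theta)
    return np.exp(-X * (X @ J)).sum(axis=1).mean()

theta0 = np.zeros(n * (n - 1) // 2)
J_hat = unpack(minimize(mpf_objective, theta0, method='BFGS').x)
print(np.abs(J_hat - J_true).max())  # estimation error in the couplings
```

Note that no partition function is ever computed: the objective only involves energy differences between observed states and their single-flip neighbors.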

Projects

  • Minimum Probability Flow (MPF) - A collaboration with Peter Battaglino and Michael R. DeWeese. MPF is a technique for parameter estimation in unnormalized probabilistic models. It proves to be an order of magnitude faster than competing techniques for the Ising model, and an effective tool for learning parameters in any model with an intractable normalization constant. See the ICML paper and released code. If you are interested in using MPF in a continuous state space, you should use the method described in the Persistent MPF note.
  • Hamiltonian Annealed Importance Sampling (HAIS) - A collaboration with Jack Culpepper. Allows the estimation of importance weights - and thus partition functions and log likelihoods - for intractable probabilistic models. See the tech report, and the released code.
  • Extensions to Hamiltonian Monte Carlo - See the note below on modifying the rejection rules so that the momentum is negated less frequently, increasing mixing speed. Additionally, ongoing work maintains an online low-rank approximation to the inverse Hessian by introducing auxiliary Gaussian distributed variables with the Hessian as their coupling matrix.
  • Lie group models for transformations in natural video - A collaboration with Jimmy Wang and Bruno Olshausen. We train first order differential operators on inter-frame differences in natural video, in order to learn a set of natural transformations. We further explore the use of these transformations in video compression. See the tech report, and the DCC paper.
  • A comparison of the log likelihoods of popular image models - A collaboration with Jack Culpepper and Charles Cadieu. We use Hamiltonian Annealed Importance Sampling (HAIS - above) to compare the log likelihoods of popular image models trained via several parameter estimation techniques.
  • Bilinear generative models for natural images - A collaboration with Jack Culpepper and Bruno Olshausen. See the forthcoming ICCV paper.
  • A device for human echolocation - A collaboration with Nicol Harper and Chris Rodgers. (see stylish picture to right) [Image: Nicol_bat.jpg]
  • Statistical analysis of medical images of cancer patients - A collaboration with Joel Zylberberg and Michael DeWeese. (See also an earlier project training statistical models on MRI and CT breast images - SPIE publication.)
  • Hessian-aware online optimization - By rewriting the inverse Hessian in terms of its Taylor expansion, and then accumulating the terms of this expansion in an online fashion, second-order curvature information can be folded into online learning updates.
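To illustrate the idea behind HAIS above, here is a minimal annealed importance sampling sketch in Python/NumPy. For simplicity it uses single Metropolis steps rather than Hamiltonian dynamics, and estimates the analytically known normalizer of a one-dimensional Gaussian; all settings are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: unnormalized f(x) = exp(-x^2 / (2 sigma^2)), true Z = sigma sqrt(2 pi)
sigma = 2.0
log_f_target = lambda x: -x ** 2 / (2 * sigma ** 2)
log_f_init = lambda x: -x ** 2 / 2          # standard normal, Z_0 = sqrt(2 pi)

def log_f(x, beta):
    # Geometric path between the initial and target distributions
    return (1 - beta) * log_f_init(x) + beta * log_f_target(x)

n_chains = 5_000
betas = np.linspace(0.0, 1.0, 200)
x = rng.normal(size=n_chains)               # exact samples at beta = 0
log_w = np.zeros(n_chains)                  # accumulated log importance weights
for b_prev, b in zip(betas[:-1], betas[1:]):
    log_w += log_f(x, b) - log_f(x, b_prev)
    # One Metropolis step targeting the intermediate distribution at beta = b
    prop = x + rng.normal(size=n_chains)
    accept = np.log(rng.uniform(size=n_chains)) < log_f(prop, b) - log_f(x, b)
    x = np.where(accept, prop, x)

log_Z0 = 0.5 * np.log(2 * np.pi)
Z_est = np.exp(log_Z0 + log_w).mean()       # Z_target = Z_0 * E[w]
Z_true = sigma * np.sqrt(2 * np.pi)
print(Z_est, Z_true)
```

Swapping the Metropolis transition for a Hamiltonian leapfrog step (with partial momentum refreshment) gives the HAIS scheme proper; the weight bookkeeping is unchanged.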

Notes

  • Hamiltonian Monte Carlo with Fewer Momentum Reversals - Reduces the number of momentum reversals required in Hamiltonian Monte Carlo. This is accomplished by maintaining the net exchange of probability between states with opposite momenta, but reducing the rate of exchange in both directions such that it is 0 in one direction.
  • Persistent Minimum Probability Flow - Develops MPF in the case that non-data states are captured by persistent samples from the current estimate of the model distribution. Analogous to Persistent CD. This technique should be used for MPF in continuous state spaces.
  • Entropy of Generic Distributions - Calculates the entropy that can be expected for a distribution drawn at random from the simplex of all possible distributions. (John Schulman points out that E. T. Jaynes deals with similar questions in chapter 11 of "Probability Theory: The Logic of Science".)
  • On the independence of linear contributions to an energy function - Even in the overcomplete case where there are more experts than data dimensions, product-of-experts style models tend to learn decorrelated features. This note provides motivation for this by Taylor expanding the KL divergence, and observing that there are terms in the expansion which specifically penalize similarity between the experts.
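The quantity studied in the Entropy of Generic Distributions note can be checked numerically: a distribution drawn uniformly from the simplex is a Dirichlet draw with all concentration parameters equal to 1, and its expected entropy has the closed form psi(n + 1) - psi(2) nats. A small Monte Carlo sketch in Python (illustrative, not the note's own code):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
n = 10                  # dimension of the simplex
n_samples = 100_000

# Distributions drawn uniformly from the simplex = Dirichlet with all alphas 1
p = rng.dirichlet(np.ones(n), size=n_samples)
entropy = -(p * np.log(p)).sum(axis=1)      # entropy of each sample, in nats

# Closed form for the uniform Dirichlet: E[H] = psi(n + 1) - psi(2)
expected = digamma(n + 1) - digamma(2)
print(entropy.mean(), expected)
```

For n = 10 this is about 1.93 nats, noticeably below the log(10) ≈ 2.30 maximum: a "generic" distribution is far from uniform.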

The following are titles of informal notes I intend to write but haven't yet gotten to or finished. If any of them sound interesting to you, pester me and they will appear more quickly.

  • Natural gradients explained via an analogy to signal whitening
  • A log bound on the growth of intelligence with system size
  • The field of experts model learns Gabor-like receptive fields when trained via minimum probability flow or score matching
  • For small time bins, generalized linear models and causal Boltzmann machines become equivalent
  • How to construct phase space volume preserving recurrent networks
  • Maximum likelihood learning as constraint satisfaction
  • A spatial derivation of score matching
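As a teaser for the first of these notes: when fitting the mean of a Gaussian, the Fisher information is the inverse covariance, so preconditioning the gradient by the inverse Fisher acts exactly like a whitening transform on parameter space. A minimal, hypothetical demo in Python of why this helps when the covariance is badly conditioned:

```python
import numpy as np

# Badly conditioned covariance: plain gradient descent on the Gaussian
# negative log likelihood crawls along the low-curvature direction.
Sigma = np.diag([100.0, 1.0])
Sigma_inv = np.linalg.inv(Sigma)
mu_star = np.array([3.0, -2.0])

def grad(mu):
    # Gradient of 0.5 (mu - mu*)^T Sigma^{-1} (mu - mu*)
    return Sigma_inv @ (mu - mu_star)

def steps_to_converge(precondition, lr=0.9, tol=1e-6, max_iter=100_000):
    mu = np.zeros(2)
    for t in range(max_iter):
        if np.abs(mu - mu_star).max() < tol:
            return t
        mu = mu - lr * precondition @ grad(mu)
    return max_iter

plain = steps_to_converge(np.eye(2))        # ordinary gradient descent
# Natural gradient: precondition by the inverse Fisher information, which for
# a Gaussian mean is Sigma itself -- the analogue of whitening parameter space.
natural = steps_to_converge(Sigma)
print(plain, natural)
```

The natural gradient run converges in a handful of steps regardless of the conditioning of Sigma, while plain gradient descent needs on the order of the condition number of Sigma.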

Publications

J Sohl-Dickstein, P Battaglino, M DeWeese. New method for parameter estimation in probabilistic models: Minimum probability flow. Accepted, Physical Review Letters (2011).

J Sohl-Dickstein, P Battaglino, M DeWeese. Minimum probability flow learning. ICML (2011), "Distinguished Paper" award. http://redwood.berkeley.edu/jascha/pdfs/icml.pdf with supplementary material http://redwood.berkeley.edu/jascha/pdfs/supplementary_material_icml.pdf (also see the Persistent MPF note for more on learning in continuous state spaces)

A Hayes, J Grotzinger, L Edgar, SW Squyres, W Watters, J Sohl-Dickstein. Reconstruction of Eolian Bed Forms and Paleocurrents from Cross-Bedded Strata at Victoria Crater, Meridiani Planum, Mars. Journal of Geophysical Research (2011) http://www.agu.org/pubs/crossref/2011/2010JE003688.shtml

CM Wang, J Sohl-Dickstein, I Tosik. Lie Group Transformation Models for Predictive Video Coding. Proceedings of the Data Compression Conference (2011) http://redwood.berkeley.edu/jascha/pdfs/PID1615931.pdf

BJ Culpepper, J Sohl-Dickstein, B Olshausen. Building a better probabilistic model of images by factorization. Accepted, ICCV (2011).

J Sohl-Dickstein, BJ Culpepper. Hamiltonian annealed importance sampling for partition function estimation. Redwood Technical Report. (2011) http://redwood.berkeley.edu/jascha/pdfs/HAIS.pdf

J Sohl-Dickstein, CM Wang, BA Olshausen. An Unsupervised Algorithm For Learning Lie Group Transformations. Redwood Technical Report (2009) http://arxiv.org/abs/1001.1027

C Abbey, J Sohl-Dickstein, BA Olshausen. Higher-order scene statistics of breast images. Proceedings of SPIE (2009) http://link.aip.org/link/?PSISDG/7263/726317/1

K Kinch, J Sohl-Dickstein, J Bell III, JR Johnson, W Goetz, GA Landis. Dust deposition on the Mars Exploration Rover Panoramic Camera (Pancam) calibration targets. Journal of Geophysical Research-Planets (2007) http://www.agu.org/pubs/crossref/2007/2006JE002807.shtml

POSTER - J Sohl-Dickstein, BA Olshausen. Learning in energy based models via score matching. Cosyne (2007) - this (dense!) poster introduces a spatial derivation of score matching, applies it to learning in a Field of Experts model, and then extends Field of Experts to work with heterogeneous experts (forming a "tapestry of experts"). I'm including it here since it hasn't been written up elsewhere.

JR Johnson, J Sohl-Dickstein, WM Grundy, RE Arvidson, J Bell III, P Christensen, T Graff, EA Guinness, K Kinch, R Morris, MK Shepard. Radiative transfer modeling of dust-coated Pancam calibration target materials: Laboratory visible/near-infrared spectrogoniometry. Journal of Geophysical Research (2006) http://www.agu.org/pubs/crossref/2006/2005JE002658.shtml

J Bell III, J Joseph, J Sohl-Dickstein, H Arneson, M Johnson, M Lemmon, D Savransky. In-flight calibration and performance of the Mars Exploration Rover Panoramic Camera (Pancam) instruments. Journal of Geophysical Research (2006) http://www.agu.org/pubs/crossref/2006/2005JE002444.shtml

Parker et al. Stratigraphy and sedimentology of a dry to wet eolian depositional system, Burns formation, Meridiani Planum, Mars. Earth and Planetary Science Letters (2005)

Soderblom et al. Pancam multispectral imaging results from the Opportunity rover at Meridiani Planum. Science (2004) http://www.sciencemag.org/content/306/5702/1703

Soderblom et al. Pancam multispectral imaging results from the Spirit rover at Gusev crater. Science (2004) http://www.sciencemag.org/content/305/5685/800

Smith et al. Athena microscopic imager investigation. Journal of Geophysical Research-Planets (2003)

Bell et al. Hubble Space Telescope Imaging and Spectroscopy of Mars During 2001. American Geophysical Union (2001)