Seminars
== Instructions ==


'''DO NOT POST YOUR SEMINARS HERE!''' Use the new website: redwood.berkeley.edu/wp-admin. The seminar schedule and scheduling instructions are under Internal Resources.


# Check the internal calendar (here) for a free seminar slot. Seminars are usually Wednesdays at noon, but the time is flexible if another day works better for the speaker. It is usually best to avoid booking multiple speakers in the same week, since that leads to "seminar burnout" and reduced attendance. Use your own judgement here: if it's a good opportunity and that is the only time that works, go ahead with it.
# Once you have proposed a date to a speaker, fill in the speaker information under the appropriate date (or change it if necessary). Use the status field to indicate whether the date is tentative or confirmed. Please also include your name as ''host'' in case somebody wants to contact you.
# Once the speaker has confirmed, change the status field to 'confirmed'. Also notify the webmaster (Bruno) [mailto:baolshausen@berkeley.edu] that we have a confirmed speaker so that the public web page can be updated. Please include a title and abstract.
# Natalie (HWNI) [mailto:nrterranova@berkeley.edu] checks our web page regularly and will send out an announcement a week before the seminar and include it with the weekly neuro announcements. If the talk is not confirmed until the last minute, email Natalie directly to give her a heads-up so she can send out the announcement in time.
# If the speaker needs accommodations, contact Natalie [mailto:nrterranova@berkeley.edu] to reserve a room at the Faculty Club. Tell her it's for a Redwood speaker so she knows how to bill it.
# During the visit you will need to look after the visitor: schedule visits with other labs, make plans for lunch and dinner, and introduce the speaker at the seminar (don't ask Bruno to do this at the last moment). Save receipts for any meals you paid for.
# After the seminar and before the speaker leaves, give them Natalie's contact information and have them email her their receipts, explaining that it is for reimbursement for a Redwood seminar. Natalie will then process the reimbursement. She can also help you get reimbursed for any meal and entertainment expenses you incurred.


== Tentative / Confirmed Speakers ==


'''January 31, 2018'''
* Speaker: Joel Makin
* Time: 12:00
* Affiliation: UCSF
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
 
'''February 6, 2018'''
* Speaker: Leenoy Mesulam
* Time: 12:00
* Affiliation: Princeton University
* Host: Fritz
* Status: confirmed
* Title: The 1000+ neurons challenge: emergent simplicity in (very) large populations
* Abstract: Recent technological progress has dramatically increased our access to the neural activity underlying memory-related tasks. These complex high-dimensional data call for theories that allow us to identify signatures of collective activity in the networks that are crucial for the emergence of cognitive functions. As an example, we study the neural activity in dorsal hippocampus as a mouse runs along a virtual linear track. One of the dominant features of this data is the activity of place cells, which fire when the animal visits particular locations. During the first stage of our work we used a maximum entropy framework to characterize the probability distribution of the joint activity patterns observed across ensembles of up to 100 cells. These models, which are equivalent to Ising models with competing interactions, make surprisingly accurate predictions for the activity of individual neurons given the state of the rest of the network, and this is true both for place cells and for non-place cells. For the second stage of our work we study networks of ~1500 neurons. To address this much larger system, we use different coarse graining methods, in the spirit of the renormalization group, to uncover macroscopic features of the network. We see hints of scaling and of behavior that is controlled by a non-trivial fixed point. Perhaps, then, there is emergent simplicity even in these very complex systems of real neurons in the brain.
 
 
'''!!! NOTE: going forward for spring term 2018, please avoid Wednesdays when scheduling Redwood seminars, as the Simons brain and computation seminars are held that morning and having both makes for a packed day !!!'''
 
 
'''February 21, 2018'''
* Speaker: Tianshi Wang
* Time: 12:00
* Affiliation: Berkeley
* Host: Bruno
* Status: tentative
* Title:
* Abstract:
 
'''April 2, 2018'''
* Speaker: Pascal Fries
* Time: 12:00
* Affiliation: Berkeley
* Host: Bruno/Dana Ballard
* Status: tentative
* Title:
* Abstract:
 
'''September 12, 2018'''
* Speaker: Wujie Zhang
* Time: 12:00
* Affiliation: Yartsev lab, Berkeley
* Host: Guy
* Status: Confirmed
* Title:
* Abstract:


'''September 17, 2018'''
* Speaker: Juergen Jost
* Time: 12:00
* Affiliation: MPI for Mathematics in the Sciences, Leipzig
* Host: Fritz
* Status: confirmed
* Title:
* Abstract:
'''TBD, sometime in the Fall'''
* Speaker: Evangelos Theodorou
* Time: TBD
* Affiliation: GeorgiaTech
* Host: Mike/Dibyendu Mandal
* Status: planning
* Title: TBD
* Abstract: TBD
'''TBD, 2016'''
* Speaker: Alexander Stubbs
* Time: 12:00
* Affiliation: UC Berkeley
* Host: Bruno/Michael Levy
* Status: tentative
* Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?
* Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.
== Previous Seminars ==
=== 2017/18 academic year ===
'''July 10, 2017'''
* Speaker: David Field
* Time: 6:00pm
* Affiliation: Cornell
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
'''July 18, 2017'''
* Speaker: Jordi Puigbò
* Time: 12:30
* Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)
* Host: Vasha
* Status: Confirmed
* Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning
* Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another one for the exploration of new representations that provoked these changes in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this paper presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.
'''Aug. 14, 2017'''
* Speaker: Brent Doiron
* Time: 12:00
* Affiliation:
* Host: Bruno/Hillel
* Status: tentative
* Title:
* Abstract:
'''Aug. 15, 2017'''
* Speaker: Ken Miller
* Time: 12:00
* Affiliation: Columbia
* Host: Bruno/Hillel
* Status: confirmed
* Title:
* Abstract:
'''Aug. 16, 2017'''
* Speaker: Joshua Vogelstein
* Time: 12:00
* Affiliation: JHU
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
'''Sept. 6, 2017'''
* Speaker: Gerald Friedland
* Time: 12:00
* Affiliation:  UC Berkeley
* Host: Bruno/Jerry
* Status: confirmed
* Title: A Capacity Scaling Law for Artificial Neural Networks
* Abstract:
'''Sept. 20, 2017'''
* Speaker: Carl Pabo
* Time: 12:00
* Affiliation:
* Host: Bruno
* Status: confirmed
* Title: Human Thought and the Human Future
* Abstract:
'''Oct. 11, 2017'''
* Speaker: Deepak Pathak and Pulkit Agrawal
* Time: 12:30 PM
* Affiliation: UC Berkeley, BAIR
* Host: Mayur Mudigonda
* Status: Confirmed
* Title: Curiosity and Rewards
* Abstract:
'''October 25th 2017'''
* Speaker: Caleb Kemere
* Time: 12:00
* Affiliation: Rice
* Host: Guy Isely
* Status: Confirmed
* Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity
* Abstract: TBD-- HMM-based hippocampal replay
'''Nov. 8, 2017'''
* Speaker: John Harte
* Time: 12:00
* Affiliation:  UC Berkeley
* Host: Bruno
* Status: confirmed
* Title: Maximum Entropy and the Inference of Patterns in Nature
* Abstract:
'''Nov. 16, 2017'''
* Speaker: Jeff Hawkins
* Time: 12:00
* Affiliation:  Numenta
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
'''November 29th 2017'''
* Speaker: Joel Kaardal
* Time: 12:00
* Affiliation: Salk
* Host: Bruno/Frederic Theunissen
* Status: Confirmed
* Title:
* Abstract:
'''December 13, 2017'''
* Speaker: Zhaoping Li
* Time: 12:00
* Affiliation: UCL
* Host: Bruno/Frederic Theunissen
* Status: confirmed
* Title:
* Abstract:


'''December 19, 2017'''
* Speaker: Shaowei Lin
* Time: 12:00
* Affiliation:
* Host: Chris Hillar
* Status: confirmed
* Title: Biologically plausible deep learning for recurrent spiking neural networks.
* Abstract: Despite widespread success in deep learning, backpropagation has been criticized for its biological implausibility. To address this issue, Hinton and Bengio have suggested that our brains are performing approximations of backpropagation, and some of their proposed models seem promising. In the same vein, we propose a different model for learning in recurrent neural networks (RNNs), known as McCulloch-Pitts processes. As opposed to traditional models for RNNs (such as LSTMs) which are based on continuous-valued neurons operating in discrete time, our model consists of discrete-valued (spiking) neurons operating in continuous time. Through our model, we are able to derive extremely simple and local learning rules, which directly explain experimental results in Spike-Timing-Dependent Plasticity (STDP).


'''Jan. 24, 2018'''
* Speaker: Miguel Gredilla
* Time: 12:00
* Affiliation:  Vicarious
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


=== 2016/17 academic year ===

'''Sept. 7, 2016'''
* Speaker: Dan Stowell
* Time: 12:00
* Affiliation: Queen Mary, University of London
* Host: Frederic Theunissen
* Status: confirmed
* Title:
* Abstract:
 
'''Sept. 8, 2016'''
* Speaker: Barb Finlay
* Time: 12:00
* Affiliation: Cornell Univ
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
 
'''Sept. 27, 2016'''
* Speaker: Yoshua Bengio
* Time: 11:00
* Affiliation: Univ Montreal
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
 
'''Oct. 12, 2016'''
* Speaker: Paul Rhodes
* Time: 4:00
* Affiliation: Specific Technologies
* Host: Dylan/Bruno
* Status: confirmed
* Title: A novel and important problem in spatiotemporal pattern classification
* Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth.  The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance).  We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains.  So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.
 
'''Oct. 25, 2016'''
* Speaker: Douglas L. Jones
* Time: 2:00
* Affiliation: ECE Department, University of Illinois at Urbana-Champaign
* Host: Bruno
* Status: confirmed
* Title: Optimal energy-efficient coding in sensory neurons
* Abstract:  Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e.,  maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.


'''October 26, 2016'''
* Speaker: Eric Jonas
* Time: 12:00
* Affiliation: UC Berkeley
* Host: Charles Frye
* Status: confirmed
* Title: Could a neuroscientist understand a microprocessor?
* Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors.  Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information.  We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor.  This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally.
* Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).


'''Nov. 9, 2016'''
* Speaker: Pulkit Agrawal
* Time: 12:00
* Affiliation: EECS, UC Berkeley
* Host: Bruno
* Status: confirmed
* Title:
* Abstract: 


'''Nov. 16, 2016'''
* Speaker: Sebastian Musslick
* Time: 12:00
* Affiliation: Princeton Neuroscience Institute (Princeton University)
* Host: Brian Cheung
* Status: confirmed
* Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures
* Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. These findings will be contrasted with an ongoing behavioral study by assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature-overlap.


'''Nov 30, 2016'''
* Speaker: Marcus Rohrbach
* Time: 12:00
* Affiliation: EECS, UC Berkeley
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


'''March 1st, 2017'''
* Speaker: Sahar Akram
* Time: 12:00
* Affiliation: Starkey Hearing Research Center
* Host: Shariq
* Status: Confirmed
* Title: Real-Time & Adaptive Auditory Neural Processing
* Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, thereby resulting in functional changes in the system over time. In order to quantify humans' conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record the neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts at real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.


'''Mar 2, 2017'''
* Speaker: Joszef Fiser
* Time: 12:00
* Affiliation:
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


'''Mar 22, 2017'''
* Speaker: Michael Frank
* Time: 12:00
* Affiliation: Magicore Systems
* Host: Dylan
* Status:  Confirmed
* Title: The Future of the Multi-core Platform: Task-Superscalar Extensions to Von-Neumann Architecture and Optimization for Neural Networks
* Abstract: Technology scaling had been carrying computer science through the second half of the 20th century until single CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high performance computing. Mobile market implementations followed this trend and today you might be carrying a phone with more than 16 different processors. For power efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general purpose processors. As Steve Jobs insightfully commented almost a decade ago, “The way the processor industry is going is to add more and more cores, but nobody knows how to program those things.” Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn for mass market applications. Through the years, CPUs based on the von-Neumann architecture have fended off attacks from many directions; today complex super-scalar implementations execute multiple instructions each clock cycle, parallel and out-of-order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the paradigm of the Von-Neumann architecture with a few established concepts from data-flow and task-parallel programming will create both a credible and intuitive parallel architecture enabling notable compute efficiency improvement while retaining compatibility with the current mainstream. This talk will thus review the current state of the processor industry and, after highlighting why we are running out of steam in ILP, will outline the task-superscalar programming model as the “ring to rule them all” and provide insights as to how this architecture can take advantage of special HW acceleration for data-flow management and provide support for efficient neuromorphic computing.


'''April 12, 2017'''
* Speaker: Aapo Hyvarinen
* Time: 12:00
* Affiliation: Gatsby/UCL
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


'''May 24, 2017'''
* Speaker: Pierre Sermanet
* Time: 12:00
* Affiliation: Google Brain
* Host: Brian
* Status: confirmed
* Title:
* Abstract:


'''May 30, 2017'''
* Speaker: Heiko Schutt
* Time: 12:00
* Affiliation: Univ Tubingen
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


'''June 7, 2017'''
* Speaker: Saurabh Gupta
* Time: 12:00
* Affiliation: UC Berkeley
* Host: Spencer
* Status: confirmed
* Title: Cognitive Mapping and Planning for Visual Navigation
* Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.


'''June 14, 2017'''
* Speaker: Madhow
* Time: 12:00
* Affiliation: UCSB
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
 
'''June 19, 2017'''
* Speaker: Tali Tishby
* Time: 12:00
* Affiliation: Hebrew Univ.
* Host: Bruno/Daniel Reichman
* Status: confirmed
* Title:
* Abstract:


'''June 21, 2017'''
* Speaker: Jasmine Collins
* Time: 12:00
* Affiliation: Google
* Host: Brian
* Status: confirmed
* Title: Capacity and Trainability in Recurrent Neural Networks
* Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.
=== 2015/16 academic year ===
'''July 21, 2015'''
* Speaker: Felix Effenberger
* Affiliation:
* Host: Chris H.
* Status: confirmed
* Title:
* Abstract
'''July 22, 2015'''
* Speaker: Lav Varshney
* Affiliation: Urbana-Champaign
* Host: Bruno
* Status: Confirmed
* Title:
* Abstract
'''July 23, 2015'''
* Speaker: Xuemin Wei
* Affiliation: Univ Penn
* Host: Bruno
* Status: Confirmed
* Title:
* Abstract
'''July 29, 2015'''
* Speaker: Gonzalo Otazu
* Affiliation: Cold Spring Harbor Laboratory, Long Island, NY
* Host: Mike D
* Status: Confirmed
* Title: The Role of Cortical Feedback in Olfactory Processing
* Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback and how the model predictions match our experimental data.


'''Aug 19, 2015'''
* Speaker: Wujie Zhang
* Affiliation: Columbia
* Host: Bruno/Michael Yartsev
* Status: Confirmed
* Title:
* Abstract:


'''Sept 2, 2015'''
* Speaker: Jeremy Maitin-Shepard
* Affiliation: Computer Science, UC Berkeley
* Host: Bruno
* Status: confirmed
* Title: Combinatorial Energy Learning for Image Segmentation
* Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of these processes, densely packed in these petavoxel-scale volumes, is the key bottleneck in reconstructing large-scale neural circuits.


'''Sept 8, 2015'''
* Speaker: Jennifer Hasler
* Affiliation: Georgia Tech
* Host: Bruno/Mika
* Status: confirmed
* Title:
* Abstract:


'''October 29, 2015'''
* Speaker: Garrett Kenyon
* Affiliation: Los Alamos National Laboratory
* Host: Dylan
* Status: confirmed
* Title: A Deconvolutional Competitive Algorithm (DCA)
* Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons.  LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning.  Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error,  we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA).  All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.
 
'''Nov 18, 2015'''
* Speaker: Hillel Adesnik
* Affiliation: Berkeley
* Host: Bruno
* Status: confirmed
* Title:
 
'''Nov 17, 2015'''
* Speaker: Manuel Lopez
* Affiliation:
* Host: Fritz
* Status: confirmed
* Title:
* Abstract
 
'''Dec 2, 2015'''
* Speaker: Steven Brumby
* Affiliation: [http://www.descarteslabs.com/ Descartes Labs]
* Host: Dylan
* Status: confirmed
* Title: Seeing the Earth in the Cloud
* Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning.
 
'''Dec 14, 2015'''
* Speaker: Bill Softky
* Affiliation:
* Host: Bruno
* Status: confirmed
* Title: Screen addiction - informal Redwood group seminar
 
'''Dec 16, 2015'''
* Speaker: Mike Landy
* Affiliation: Berkeley
* Host: Bruno
* Status: confirmed
* Title:
 
'''Feb 3, 2016'''
* Speaker: Ping-Chen Huang
* Affiliation: Berkeley
* Host: Bruno
* Status: confirmed
* Title:
 
'''Feb 17, 2016'''
* Speaker: Andrew Saxe
* Affiliation: Harvard
* Host: Jesse
* Status: confirmed
* Title: Hallmarks of Deep Learning in the Brain


'''Feb 24, 2016'''
* Speaker: Miguel Perpinan
* Affiliation: UC Merced
* Host: Bruno
* Status: confirmed
* Title: TBA


'''Mar 1, 2016'''
* Speaker: Leon Gatys
* Affiliation: Univ Tubingen
* Host: Bruno
* Status: confirmed
* Title:

'''Mar 7-9, 2016'''
* NICE workshop

'''Mar 9, 2016'''
* Tatiana Engel - HWNI job talk at 12:00


'''Mar 16, 2016'''
* Talia Lerner - HWNI job talk at 12:00
 
'''Mar 23, 2016'''
* Speaker: Kwabena Boahen
* Affiliation: Stanford
* Host: Max Kanwal/Bruno
* Status: confirmed
* Title:
 
'''April 11, 2016'''
* Speaker: Hao Su
* Time: at 12:00
* Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University
* Host: Yubei
* Status: confirmed
* Title: [Tentative] Joint Analysis for 2D Images and 3D shapes
* Abstract: Coming
 
'''May 04, 2016'''
* Speaker: Zhengya Zhang
* Time: 12:00
* Affiliation: Electrical Engineering and Computer Science, University of Michigan
* Host: Dylan, Bruno
* Status: Confirmed
* Title: Sparse Coding ASIC Chips for Feature Extraction and Classification
* Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet the low power and real-time processing requirement. To realize a high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.
 
'''May 18, 2016'''
* Speaker: Melanie Mitchell
* Affiliation: Portland State University and Santa Fe Institute
* Host: Dylan
* Time: 12:00
* Status: confirmed
* Title: Using Analogy to Recognize Visual Situations
* Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making.
* Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan.  Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies.  She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.
 
'''June 8, 2016'''
* Speaker: Kris Bouchard
* Time: 12:00
* Affiliation: LBNL
* Host: Fritz
* Status: Confirmed
* Title: The union of intersections method
* Abstract:


'''June 15, 2016'''
* Speaker: James Blackmon
* Time: 12:00
* Affiliation: San Francisco State University
* Host: Bruno
* Status: Confirmed
* Title:
* Abstract:
 
=== 2014/15 academic year ===
 
'''2 July 2014'''
* Speaker: Kelly Clancy
* Affiliation: Feldman lab
* Host: Guy
* Status: confirmed
* Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices
* Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.
 
'''23 July 2014'''
* Speaker: Gautam Agarwal
* Affiliation: UC Berkeley/Champalimaud
* Host: Friedrich Sommer
* Status: confirmed
* Title: Unsolved Mysteries of Hippocampal Dynamics
* Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.
 
'''6 Aug 2014'''
* Speaker: Georg Martius
* Affiliation: Max Planck Institute, Leipzig
* Host: Fritz Sommer
* Status: confirmed
* Title: Information driven self-organization of robotic behavior
* Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations, it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement the exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate in robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.
 
'''15 Aug 2014'''
* Speaker: Juergen Schmidhuber
* Affiliation: IDSIA, Switzerland
* Host: James/Shariq
* Status: confirmed
* Title: TBA
* Abstract: TBA
 
'''2 Sept 2014'''
* Speaker: Oriol Vinyals
* Affiliation: Google
* Host: Guy
* Status: confirmed
* Title: Machine Translation with Long-Short Term Memory Models
* Abstract: Supervised large deep neural networks achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - but cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence to sequence problems with minimal manual engineering.  Our model uses one LSTM to map the input sequence to a vector of a fixed dimensionality and another LSTM to map the vector to the output sequence.  We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3.  When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.
 
'''19 Sept 2014'''
* Speaker: Gary Marcus
* Affiliation: NYU
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract: TBA


'''24 Sept 2014'''
* Speaker: Alyosha Efros
* Affiliation: UC Berkeley
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract:


'''30 Sep 2014'''
* Speaker: Alejandro Bujan
* Affiliation:
* Host: Fritz
* Status: confirmed
* Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations
* Abstract:

'''8 Oct 2014'''
* Speaker: Siyu Zhang
* Affiliation: UC Berkeley
* Host: Karl
* Status: confirmed
* Title: Long-range and local circuits for top-down modulation of visual cortical processing
* Abstract:

'''15 Oct 2014'''
* Speaker: Tamara Broderick
* Affiliation: UC Berkeley
* Host: Yvonne/James
* Status: confirmed
* Title: Feature allocations, probability functions, and paintboxes
* Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.

'''29 Oct 2014'''
* Speaker: Ken Nakayama
* Affiliation: Harvard
* Host: Bruno
* Status: Confirmed
* Title: Topics in higher level visuo-motor control
* Abstract: TBA

'''5 Nov 2014''' - BVLC retreat

'''20 Nov 2014'''
* Speaker: Haruo Hasoya
* Affiliation: ATR Institute, Japan
* Host: Bruno
* Status: tentative
* Title: TBA
* Abstract:


'''9 Dec 2014'''
* Speaker: Dirk DeRidder
* Affiliation: Dunedin School of Medicine, University of Otago, New Zealand
* Host: Bruno/Walter Freeman
* Status: confirmed
* Title: The Bayesian brain, phantom percepts and brain implants
* Abstract: TBA
 
'''January 14, 2015'''
* Speaker: Kevin O'regan
* Affiliation: CNRS - Université Paris Descartes
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract: TBA
 
'''January 21, 2015'''
* Speaker: Adrienne Fairhall
* Affiliation: University of Washington
* Host: Mike Schachter
* Status: confirmed
* Title: TBA
* Abstract: TBA
 
'''January 26, 2015'''
* Speaker: Abraham Peled
* Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology
* Host: Bruno
* Status: confirmed
* Title: Clinical Brain Profiling: A Neuro-Computational psychiatry
* Abstract: TBA


'''January 28, 2015'''
* Speaker: Rich Ivry
* Affiliation: UC Berkeley
* Host: Bruno
* Status: confirmed
* Title: Embodied Decision Making:  System interactions in sensorimotor adaptation and reinforcement learning
* Abstract:


'''February 11, 2015'''
* Speaker: Mark Lescroart
* Affiliation: UC Berkeley
* Host: Karl
* Status: tentative
* Title:
* Abstract:


'''February 25, 2015'''
* Speaker: Steve Chase
* Affiliation: CMU
* Host: Bruno
* Status: confirmed
* Title: Joint Redwood/CNEP seminar
* Abstract:
 
'''March 3, 2015'''
* Speaker: Andreas Herz
* Affiliation: Bernstein Center, Munich
* Host: Bruno/Fritz
* Status: confirmed
* Title:  
* Abstract:
 
'''March 3, 2015 - 4:00'''
* Speaker: James Cooke
* Affiliation: Oxford
* Host: Mike Deweese
* Status: confirmed
* Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex
* Abstract:


'''April 24'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=59 Jeff Johnson]
* Affiliation: UC Davis
* Host: Bruno
* Title: What does EEG tell us about the timecourse of object recognition?

'''March 4, 2015'''
* Speaker: Bill Sprague
* Affiliation: UC Berkeley
* Host: Bruno
* Status: confirmed
* Title: V1 disparity tuning and the statistics of disparity in natural viewing
* Abstract:


'''April 17, 2007'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=56 Steve Waydo]
* Affiliation: Control & Dynamical Systems, California Institute of Technology
* Host: Bruno
* Title: Explicit Object Representation by Sparse Neural Codes

'''March 11, 2015'''
* Speaker: Jozsef Fiser
* Affiliation: Central European University
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


'''April 10'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=60 Andrew Ng]
* Affiliation: Stanford University
* Host: Bruno
* Title: Unsupervised discovery of structure for transfer learning

'''April 1, 2015'''
* Speaker: Saeed Saremi
* Affiliation: Salk Inst
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:


'''April 3'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=61 Robert Miller]
* Affiliation: Department of Anatomy and Structural Biology, Otago University
* Host: Fritz
* Title: Axonal conduction time and human cerebral laterality

'''April 15, 2015'''
* Speaker: Zahra M. Aghajan
* Affiliation: UCLA
* Host: Fritz
* Status: confirmed
* Title: Hippocampal Activity in Real and Virtual Environments
* Abstract:


'''March 20, 2007'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=55 Jeff Hawkins]
* Affiliation: Numenta
* Host: Bruno
* Title: Hierarchical Temporal Memory

'''May 7, 2015'''
* Speaker: Santani Teng
* Affiliation: MIT
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract:


'''March 13, 2007'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=54 Chris Wiggins]
* Affiliation: Columbia University, NY

'''May 13, 2015'''
* Speaker: Harri Valpola
* Affiliation: ZenRobotics
* Host: Brian
* Status: Tentative
* Title: TBA
* Abstract:
 
'''June 24, 2015'''
* Speaker: Kendrick Kay
* Affiliation: Department of Psychology, Washington University in St. Louis
* Host: Karl
* Status: Confirmed
* Title: Using functional neuroimaging to reveal the computations performed by the human visual system
* Abstract
Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.
 
=== 2013/14 academic year ===
 
'''9 Oct 2013'''
* Speaker: Ekaterina Brocke
* Affiliation: KTH University, Stockholm, Sweden
* Host: Tony
* Status: confirmed
* Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.
* Abstract: Multiscale modeling/simulations attracts an increasing number of neuroscientists to study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales, space and time, to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms thus making it non trivial to perform the integration. In this talk, I will discuss key phenomena in Neuroscience that can be addressed using subcellular/cellular models, possible approaches to perform multiscale simulations in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim to understand numerical and technical problems which might appear during the co-simulation. And finally, the first steps made towards multiscale co-simulation tool development will be presented during the talk.
 
'''29 Oct 2013 - note: 4:00'''
* Speaker: Mitya Chklovskii
* Affiliation: HHMI/Janelia Farm
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract: TBA
 
'''30 Oct 2013'''
* Speaker: Ilya Nemenman
* Affiliation: Emory University, Departments of Physics and Biology
* Host: Mike DeWeese
* Status: confirmed
* Title: Large N in neural data -- expecting the unexpected.
* Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.
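As a concrete illustration of the pairwise ("Ising") models the abstract refers to, here is a minimal sketch with made-up fields and couplings (not data or code from the talk): for a handful of binary units one can enumerate every state, normalize exactly, and read off the means and pairwise correlations such a model predicts.
<pre>
# Minimal pairwise ("Ising") model over N binary units, with made-up parameters.
# P(s) is proportional to exp( sum_i h_i s_i + sum_{i<j} J_ij s_i s_j ).
# For small N we can enumerate all 2^N states and normalize exactly.
import itertools
import numpy as np

rng = np.random.default_rng(0)
N = 5
h = rng.normal(0, 0.5, N)                      # local fields (arbitrary example values)
J = np.triu(rng.normal(0, 0.3, (N, N)), 1)     # couplings, only i < j used

states = np.array(list(itertools.product([-1, 1], repeat=N)))   # all 2^N patterns
log_w = states @ h + np.einsum('ki,ij,kj->k', states, J, states)
P = np.exp(log_w)
P /= P.sum()                                   # normalize by the partition function

# Moments the pairwise model predicts: mean activity and pairwise correlations.
mean_s = P @ states
corr = np.einsum('k,ki,kj->ij', P, states, states)
print("<s_i>     =", np.round(mean_s, 3))
print("<s_1 s_2> =", round(float(corr[0, 1]), 3))
</pre>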
 
'''31 Oct 2013'''
* Speaker: Oriol Vinyals
* Affiliation: UC Berkeley
* Host: Bruno/Brian
* Status: confirmed
* Title: Beyond Deep Learning: Scalable Methods and Models for Learning
* Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks in deep architectures (which has led to the terminology "deep learning"), and that has shown state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one hidden layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.
 
'''6 Nov 2013'''
* Speaker: Garrett T. Kenyon
* Affiliation: Los Alamos National Laboratory, The New Mexico Consortium
* Host: Dylan Paiton
* Status: Confirmed
* Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions
* Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models.  I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.
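For readers who have not seen the Locally Competitive Algorithm (LCA) of Rozell et al. cited above, a minimal single-layer sketch is below; the dictionary, input, and parameters are random placeholders, not the speaker's PetaVision models.
<pre>
# Minimal sketch of LCA sparse coding dynamics (Rozell et al., 2008).
# Membrane potentials u are driven by the feedforward input and inhibited by
# the thresholded activity of competing units; a = soft_threshold(u) is the sparse code.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_neurons = 64, 128
Phi = rng.normal(size=(n_pixels, n_neurons))
Phi /= np.linalg.norm(Phi, axis=0)              # unit-norm dictionary elements

# Synthetic input: a sparse combination of dictionary elements plus noise.
coeff_true = np.zeros(n_neurons)
coeff_true[rng.choice(n_neurons, 5, replace=False)] = rng.normal(0, 1, 5)
x = Phi @ coeff_true + 0.01 * rng.normal(size=n_pixels)

lam, tau, dt, n_steps = 0.1, 10.0, 1.0, 200
b = Phi.T @ x                                   # feedforward drive
G = Phi.T @ Phi - np.eye(n_neurons)             # lateral competition

u = np.zeros(n_neurons)
for _ in range(n_steps):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)   # soft threshold
    u += (dt / tau) * (b - u - G @ a)

a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)
print("active units:", int(np.count_nonzero(a)), "of", n_neurons)
print("reconstruction error:", float(np.linalg.norm(x - Phi @ a)))
</pre>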
 
'''14 Nov 2013 (note: Thursday), ***12:30pm*** '''
* Speaker: Geoffrey J Goodhill
* Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia
* Host: Mike DeWeese
* Status: Confirmed
* Title: Computational principles of neural wiring development
* Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.
 
'''4 Dec 2013'''
* Speaker: Zhenwen Dai
* Affiliation: FIAS, Goethe University Frankfurt, Germany.
* Host: Georgios Exarchakis
* Status: Confirmed
* Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach
* Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex.
 
'''11 Dec 2013'''
* Speaker: Kai Siedenburg
* Affiliation: UC Davis, Petr Janata's Lab.
* Host: Jesse Engel
* Status: Confirmed
* Title: Characterizing Short-Term Memory for Musical Timbre
* Abstract: Short-term memory is a cognitive faculty central for the apprehension of music and speech. Only little is known, however, about memory for musical timbre despite its“sisterhood”with speech; after all, speech can be regarded as sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, primacy/recency for the domain of non-vocal timbre, and suggest that memory for speech and non- vocal timbre sequences might to a large extent share underlying mechanisms.
 
'''12 Dec 2013'''
* Speaker: Matthias Bethge
* Affiliation: University of Tubingen
* Host: Bruno
* Status: tentative
* Title: TBA
* Abstract: TBA
 
'''22 Jan 2014'''
* Speaker: Thomas Martinetz
* Affiliation: Univ Luebeck
* Host: Bruno/Fritz
* Status: confirmed
* Title: Orthogonal Sparse Coding and Sensing
* Abstract: Sparse Coding has been a very successful concept since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images. They are sparse in a respective wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard.
Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a
generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in an hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.
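To make the orthogonality point above concrete, here is a small illustration (a random orthogonal basis and a synthetic signal, not the OSC algorithm itself): under an orthogonal dictionary the best k-term code is obtained by a single projection followed by keeping the k largest coefficients, with no combinatorial search.
<pre>
# With an orthogonal basis Q, the optimal k-term approximation of x is:
# project (c = Q^T x), keep the k largest-magnitude coefficients, zero the rest.
import numpy as np

rng = np.random.default_rng(0)
D = 32
Q, _ = np.linalg.qr(rng.normal(size=(D, D)))    # a random orthogonal basis (placeholder)

# A signal that is exactly 4-sparse in that basis, plus a little noise.
c_true = np.zeros(D)
c_true[rng.choice(D, 4, replace=False)] = rng.normal(0, 3, 4)
x = Q @ c_true + 0.01 * rng.normal(size=D)

k = 4
c = Q.T @ x                                     # coding = a single projection
drop = np.argsort(np.abs(c))[:-k]               # indices of the D-k smallest coefficients
c_sparse = c.copy()
c_sparse[drop] = 0.0                            # keep only the k largest

print("relative error of k-term approximation:",
      float(np.linalg.norm(x - Q @ c_sparse) / np.linalg.norm(x)))
</pre>
For a general (non-orthogonal) dictionary the same k-term problem requires a search over supports, which is what makes the orthogonal case attractive.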
 
'''29 Jan 2014'''
* Speaker: David Klein
* Affiliation: Audience
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract: TBA
 
'''5 Feb 2014''' (leave open for Barth/Martinetz seminar)
 
'''12 Feb 2014'''
* Speaker: Ilya Sutskever
* Affiliation: Google
* Host: Zayd
* Status: confirmed
* Title: Continuous vector representations for machine translation
* Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.
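A minimal sketch of the core idea (random stand-in vectors rather than real word2vec embeddings, and not the authors' code): fit a linear map between the two embedding spaces on a small seed dictionary by least squares, then translate a new word by mapping its vector and taking the nearest neighbour in the target space.
<pre>
# Learn a linear map W from source-language vectors to target-language vectors
# using a small seed dictionary; translate by nearest neighbour in target space.
import numpy as np

rng = np.random.default_rng(0)
d_src, d_tgt, n_pairs, vocab_tgt = 50, 40, 300, 1000

X = rng.normal(size=(n_pairs, d_src))                       # seed-dictionary source vectors
W_true = rng.normal(size=(d_src, d_tgt))
Z = X @ W_true + 0.1 * rng.normal(size=(n_pairs, d_tgt))    # their "translations"

# Fit W minimizing ||X W - Z||^2 (ordinary least squares).
W, *_ = np.linalg.lstsq(X, Z, rcond=None)

# Translate a held-out source vector: map it, then look up the nearest target vector.
target_vocab = rng.normal(size=(vocab_tgt, d_tgt))
x_new = rng.normal(size=d_src)
query = x_new @ W
nearest = np.argmin(np.linalg.norm(target_vocab - query, axis=1))
print("predicted translation = target word index", int(nearest))
</pre>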
 
'''25 Feb 2014'''
* Speaker: Alexander Terekhov
* Affiliation: CNRS - Université Paris Descartes
* Host: Bruno
* Status: confirmed
* Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies
* Abstract:
 
'''12 March 2014'''
* Speaker: Carlos Portera-Cailliau
* Affiliation: UCLA
* Host: Mike
* Status: confirmed
* Title: Circuit defects in the neocortex of Fmr1 knockout mice
* Abstract: TBA
 
'''19 March 2014'''
* Speaker: Dean Buonomano
* Affiliation: UCLA
* Host: Mike
* Status: confirmed
* Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity
* Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerges from the interaction between incoming stimuli and the internal state of neural networks. The internal state, is defined not only by ongoing activity (the active state) but by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.
 
'''26 March 2014'''
* Speaker: Robert G. Smith
* Affiliation: University of Pennsylvania
* Host: Mike S
* Status: confirmed
* Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina
* Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred-direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sum excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.
 
'''16 April 2014'''
* Speaker: David Pfau
* Affiliation: Columbia
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
 
'''22 April 2014 *Tuesday*'''
* Speaker: Jochen Braun
* Affiliation: Otto-von-Guericke University, Magdeburg
* Host: Bruno
* Status: confirmed
* Title: Dynamics of visual perception and collective neural activity
* Abstract:
 
'''29 April 2014'''
* Speaker: Giuseppe Vitiello
* Affiliation: University of Salerno
* Host: Fritz/Walter Freeman
* Status: confirmed
* Title: TBA
* Abstract: TBA
 
'''30 April 2014'''
* Speaker: Masataka Watanabe
* Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics
* Host: Gautam Agarwal
* Status: confirmed
* Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis
* Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])
 
If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the law of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.
 
Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis” that passes the proposed test for visual qualia and also explains how physics that we know of today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that, every local perturbation caused by change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels through out the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that supports the existence of such large-scale causal fluctuation.
 
Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because initial states of all neurons are required with infinite precision to do so. Hence what is transmitted over the two hemispheres are not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption, "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.
 
Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. It deals with two biological hemispheres, which we know already that it contains consciousness. We dissect interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically discount the technological challenge. If the subject is capable of conducting a bilateral stimuli matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, it can be considered a strong supporting evidence of the hypothesis.
 
1.Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.
 
2.Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.
 
'''11 June 2014'''
* Speaker: Stuart Hameroff
* Affiliation: University of Arizona, Tucson
* Host: Gautam
* Status: confirmed
* Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations
* Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, TDcS), transcranial ultrasound (TUS) is megahertz mechanical vibrations. Applied at the scalp, low intensity, sub-thermal ultrasound (TUS) safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes growth of neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders.
 
References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329
 
'''25 June 2014'''
* Speaker: Peter Loxley
* Affiliation:
* Host: Bruno
* Status: confirmed
* Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system
* Abstract: TBA
 
=== 2012/13 academic year ===
 
'''26 Sept 2012'''
* Speaker: Jason Yeatman
* Affiliation: Department of Psychology, Stanford University
* Host: Bruno/Susana Chung
* Status: confirmed
* Title: The Development of White Matter and Reading Skills
* Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development.  We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.


'''March 6'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=58 Pietro Perona]
* Affiliation: Caltech
* Host: Bruno
* Title: An exploration of visual recognition

'''8 Oct 2012'''
* Speaker: Sophie Deneve
* Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM
* Host: Bruno
* Status: confirmed
* Title: Balanced spiking networks can implement dynamical systems with predictive coding
* Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.
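A toy sketch in the spirit of the "spikes as prediction errors" idea described above (one-dimensional signal, invented parameters; not the speaker's model): each neuron fires only when its spike would reduce the decoding error, and the decoded estimate then tracks the signal.
<pre>
# Toy "prediction error" spiking network tracking a 1-D leaky integrator.
# A neuron spikes when doing so reduces the squared readout error:
# (x - xhat - Gamma_i)^2 < (x - xhat)^2  <=>  Gamma_i*(x - xhat) > Gamma_i^2 / 2.
import numpy as np

rng = np.random.default_rng(0)
dt, T, lam = 1e-3, 2.0, 10.0
steps = int(T / dt)
N = 20
Gamma = rng.choice([-1.0, 1.0], N) * 0.1        # readout weights of the N neurons

x, xhat = 0.0, 0.0                              # true signal and decoded estimate
n_spikes = 0
xs, xhats = [], []

for t in range(steps):
    c = 2.0 * np.sin(2 * np.pi * 1.5 * t * dt)  # input drive to the signal
    x += dt * (-lam * x + c)
    V = Gamma * (x - xhat)                      # "membrane potentials" = projected error
    fire = V > (Gamma ** 2) / 2.0
    if np.any(fire):
        i = int(np.argmax(V - (Gamma ** 2) / 2.0))   # one spike per time step
        xhat += Gamma[i]
        n_spikes += 1
    xhat += dt * (-lam * xhat)                  # estimate decays like the signal
    xs.append(x); xhats.append(xhat)

err = np.sqrt(np.mean((np.array(xs) - np.array(xhats)) ** 2))
print("RMS tracking error:", float(err), "| total spikes:", n_spikes)
</pre>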


'''March 1'''
* Speaker: Hiroki Asari
* Affiliation: CSL
* Host: Fritz
* Title: Sparse Representations for the Cocktail Party Problem
* Abstract: A striking feature of many sensory processing problems is that there appear to be many more neurons engaged in the internal representations of the signal than in its transduction.  For example, humans have about 30,000 cochlear neurons, but at least a thousand times as many neurons in the auditory cortex. Such apparently redundant internal representations have sometimes been proposed as necessary to overcome neuronal noise.  We instead posit that they directly subserve computations of interest.  Here we provide an example of how sparse overcomplete linear representations can directly solve difficult acoustic signal processing problems, using as an example monaural source separation using solely the cues provided by the differential filtering imposed on a source by its path from its origin to the cochlea (the head-related transfer function, or HRTF).  In contrast to much previous work, the HRTF is used here to separate auditory streams rather than to localize them in space. The experimentally testable predictions that arise from this model--- including a novel method for estimating a neuron's optimal stimulus using data from a multi-neuron recording experiment---are generic, and apply to a wide range of sensory computations.


'''February 20, 2007'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=53 Yair Weiss]
* Affiliation: Hebrew University, Jerusalem
* Host: Tony
* Title: What makes a good model of natural images?

'''19 Oct 2012'''
* Speaker: Gert Van Dijck
* Affiliation: Cambridge
* Host: Urs
* Status: confirmed
* Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach
* Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and / or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling which is much harder to achieve in awake animals where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories.In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval-entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies with high accuracy (92.7 %) molecular layer interneurones and Purkinje cells, while the latter identifies with high accuracy (99.2 %) Golgi cells, granule cells, mossy fibers and Purkinje cells. Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %).Recently, opto-genetics allow to obtain a ground-truth about cell classes. Using opto-genetically identified GABA-ergic and dopaminergic cells we build similar statistical models to identify these neuron types from the ventral midbrain.Hence, this illustrates that our approach will be of general use to a broad variety of laboratories.
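As a rough illustration of the two spike-train statistics mentioned above (mean firing rate and the entropy of the log inter-spike-interval distribution), here is a sketch that computes them for two synthetic spike trains; the "cell types" are invented and no classifier from the study is implemented.
<pre>
# Compute mean rate and a log-ISI entropy estimate for two synthetic spike trains.
import numpy as np

rng = np.random.default_rng(0)

def rate_and_log_isi_entropy(spike_times, bins=20):
    isis = np.diff(spike_times)
    log_isi = np.log(isis[isis > 0])
    rate = len(spike_times) / spike_times[-1]       # spikes per second
    p, edges = np.histogram(log_isi, bins=bins, density=True)
    width = edges[1] - edges[0]
    p = p[p > 0]
    entropy = -np.sum(p * np.log(p)) * width        # differential entropy, in nats
    return rate, entropy

# Two synthetic cells: a clock-like regular cell and an irregular (Poisson-like) cell.
regular = np.cumsum(rng.normal(0.05, 0.005, 2000).clip(1e-3))
irregular = np.cumsum(rng.exponential(0.05, 2000))

print("regular   (rate, log-ISI entropy):", np.round(rate_and_log_isi_entropy(regular), 3))
print("irregular (rate, log-ISI entropy):", np.round(rate_and_log_isi_entropy(irregular), 3))
</pre>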
 
'''Tuesday, 23 Oct 2012'''
* Speaker: Jaimie Sleigh
* Affiliation: University of Auckland
* Host: Fritz/Andrew Szeri
* Status: confirmed
* Title: Is General Anesthesia a failure of cortical information integration?
* Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs.  There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action; whereas a second group is characterized by minimal GABAergic effects, but significant NMDA blockade.  It is less clear which and how these various effects result in failure of the patient to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long distance information flow in the cortex.
 
'''31 Oct 2012''' (Halloween)
* Speaker: Jonathan Landy
* Affiliation: UCSB
* Host: Mike DeWeese
* Status: Confirmed
* Title: Mean-field replica theory: review of basics and a new approach
* Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc.  Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation,  2) the intuited ``Parisi-ansatz" solution,  3) continued controversies, and 4) a survey of applications (including to neural networks).  Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method.  As an example, I will work out the phase diagram for a simple spin-glass model.  This talk is intended primarily as a tutorial.
 
'''7 Nov 2012'''
* Speaker: Tom Griffiths
* Affiliation: UC Berkeley
* Host:Daniel Little
* Status: Confirmed
* Title: Identifying human inductive biases
* Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.
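For readers unfamiliar with the MCMC machinery the abstract builds on, a generic Metropolis-Hastings sketch over a one-dimensional hypothesis space is below; the target distribution is a made-up stand-in for a learner's inductive bias, not an experimental result.
<pre>
# Generic Metropolis-Hastings sampler: the chain's stationary distribution
# matches the (here, made-up) target playing the role of an inductive bias.
import numpy as np

rng = np.random.default_rng(0)

def target_logp(x):
    # Stand-in for an unnormalized "prior over hypotheses" (a 1-D Gaussian here).
    return -0.5 * ((x - 2.0) / 0.7) ** 2

x, samples = 0.0, []
for _ in range(20000):
    prop = x + rng.normal(0, 0.5)                    # propose a nearby hypothesis
    if np.log(rng.uniform()) < target_logp(prop) - target_logp(x):
        x = prop                                     # accept with MH probability
    samples.append(x)

samples = np.array(samples[2000:])                   # discard burn-in
print("sample mean, std:", float(samples.mean()), float(samples.std()))
</pre>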
 
'''19 Nov 2012''' (Monday) (Thanksgiving week)
* Speaker: Bin Yu
* Affiliation: Dept. of Statistics and EECS, UC Berkeley
* Host: Bruno
* Status: confirmed
* Title: Representation of Natural Images in V4
* Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.
In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field.  Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features.
(This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver
and J. Gallant.)
 
'''30 Nov 2012'''
* Speaker:  Yan Karklin
* Affiliation:  NYU
* Host: Tyler
* Status: confirmed
* Title:
* Abstract:
 
'''10 Dec 2012 (note this would be the Monday after NIPS)'''
* Speaker: Marius Pachitariu
* Affiliation: Gatsby / UCL
* Host: Urs
* Status: confirmed
* Title:  NIPS paper "Learning visual motion in recurrent neural networks"
* Abstract: We present a dynamic nonlinear generative model for visual motion based on a
latent representation of binary-gated Gaussian variables connected in a network.
Trained on sequences of images by an STDP-like rule the model learns
to represent different movement directions in different variables. We use an online
approximate inference scheme that can be mapped to the dynamics of networks
of neurons. Probed with drifting grating stimuli and moving bars of light, neurons
in the model show patterns of responses analogous to those of direction-selective
simple cells in primary visual cortex. We show how the computations of the model
are enabled by a specific pattern of learnt asymmetric recurrent connections.
I will also briefly discuss our application of recurrent neural networks as statistical
models of simultaneously recorded spiking neurons.


'''February 13, 2007'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=52 Tobi Delbruck]
* Affiliation: Inst of Neuroinformatics, UNI-ETH Zurich
* Host: Bruno
* Title: Building a high-performance event-based silicon retina leads to new ways to compute vision
* URL: http://siliconretina.ini.uzh.ch

'''12 Dec 2012'''
* Speaker: Ian Goodfellow
* Affiliation: U Montreal
* Host: Bruno
* Status: confirmed
* Title:
* Abstract:
 
'''7 Jan 2013'''
* Speaker: Stuart Hameroff
* Affiliation: University of Arizona
* Host: Gautam Agarwal
* Status: confirmed
* Title: Quantum cognition and brain microtubules
* Abstract: Cognitive decision processes are generally seen as classical Bayesian probabilities, but better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.
 
'''Monday 14 Jan 2013, 1:00pm'''
* Speaker: Dibyendu Mandal
* Affiliation: Physics Dept., University of Maryland (Jarzynski group)
* Host: Mike DeWeese
* Status: confirmed
* Title: An exactly solvable model of Maxwell’s demon
* Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.
 
'''23 Jan 2013'''
* Speaker: Carlos Brody
* Affiliation: Princeton
* Host: Mike DeWeese
* Status: confirmed
* Title: Neural substrates of decision-making in the rat
* Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas . However, the circuit mechanisms underlying these neural correlates remains unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, it is that each brain region's activity is necessary for performance of the task.
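A minimal sketch of a Poisson-Clicks-style accumulation model (illustrative click rates and noise level, not fitted parameters from the talk): clicks from the two sides increment or decrement an accumulator, optionally with per-click noise, and the choice is the sign of the final total.
<pre>
# Toy evidence accumulation for a Poisson-Clicks-style trial.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(rate_right=30.0, rate_left=20.0, duration=0.5, noise_sd=0.0):
    n_r = rng.poisson(rate_right * duration)     # number of right clicks
    n_l = rng.poisson(rate_left * duration)      # number of left clicks
    a = 0.0
    for _ in range(n_r):
        a += 1.0 + rng.normal(0, noise_sd)       # per-click sensory noise
    for _ in range(n_l):
        a -= 1.0 + rng.normal(0, noise_sd)
    return "right" if a > 0 else "left"

choices = [simulate_trial() for _ in range(1000)]
print("fraction 'right' choices:", choices.count("right") / 1000)
</pre>
Setting noise_sd to zero corresponds to a noiseless (zero-diffusion) accumulator of the kind the abstract describes as optimal.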


'''Jan 23, 2007'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=53 Giuseppe Vitiello]
* Affiliation: Department of Physics “E.R.Caianiello”, Salerno University
* Host: Fritz
* Title: Relations between many-body physics and nonlinear brain dynamics

'''28 Jan 2013'''
* Speaker: Eugene M. Izhikevich
* Affiliation: Brain Corporation
* Host: Fritz
* Status: confirmed
* Title: Spikes
* Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.
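For reference, the "simple model" of spiking alluded to above (Izhikevich, 2003) fits in a few lines; the parameters below give a regular-spiking cell and the constant input current is arbitrary.
<pre>
# Izhikevich "simple model": two coupled equations plus a reset.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking parameters
v, u = -65.0, b * -65.0              # membrane potential (mV) and recovery variable
dt, T, I = 0.25, 1000.0, 10.0        # time step (ms), duration (ms), input current

spike_times = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                    # spike: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d

print("number of spikes in 1 s:", len(spike_times))
print("first few spike times (ms):", [round(t, 1) for t in spike_times[:5]])
</pre>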


'''Jan 9, 2007'''
* Speaker: Boris Gutkin
* Affiliation: University of Paris
* Host: Fritz

'''29 Jan 2013'''
* Speaker: Goren Gordon
* Affiliation: Weizmann Institute
* Host: Fritz
* Status: confirmed
* Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics
* Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal’s pup and every human infant accomplish this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent’s exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.
'''29 Jan 2013'''
* Speaker: Jenny Read
* Affiliation: Institute of Neuroscience, Newcastle University
* Host: Sarah
* Status: confirmed
* Title: Stereoscopic vision
* Abstract: [To be written]
'''7 Feb 2013'''
* Speaker: Valero Laparra
* Affiliation:  University of Valencia
* Host: Bruno
* Status: confirmed
* Title: Empirical statistical analysis of phases in Gabor filtered natural images
* Abstract:
'''20 Feb 2013'''
* Speaker: Dolores Bozovic
* Affiliation: UCLA
* Host: Mike DeWeese
* Status: confirmed
* Title: Bifurcations and phase-locking dynamics in the auditory system
* Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells are operant in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms  that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamic systems framework will be discussed, that captures the main features of the experimentally observed behavior in the form of an Arnold Tongue.
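A minimal sketch of the dynamical-systems picture in the abstract: the normal form of an oscillator near a Hopf bifurcation, driven by a periodic force, phase-locks when the drive is strong enough and close enough in frequency (inside the Arnold tongue) and drifts otherwise. All values are illustrative, not hair-bundle measurements.
<pre>
# Forced Hopf normal form: dz/dt = (mu + i*w0) z - |z|^2 z + F exp(i*w_drive*t).
# Phase-locking is detected by a small spread of the unwrapped phase difference.
import numpy as np

def entrained(mu, w0, F, w_drive, T=60.0, dt=1e-3):
    z = 0.1 + 0.0j
    phase_diff = []
    for n in range(int(T / dt)):
        t = n * dt
        dz = (mu + 1j * w0) * z - (abs(z) ** 2) * z + F * np.exp(1j * w_drive * t)
        z += dt * dz
        if t > T / 2:                               # ignore the transient
            phase_diff.append(np.angle(z) - w_drive * t)
    spread = np.std(np.unwrap(np.array(phase_diff)))
    return spread < 0.5                             # small spread -> phase-locked

print("strong drive near resonance, locked?  ", entrained(mu=1.0, w0=10.0, F=0.5, w_drive=10.5))
print("weak drive far from resonance, locked?", entrained(mu=1.0, w0=10.0, F=0.05, w_drive=14.0))
</pre>
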
'''27 March 2013'''
* Speaker: Dale Purves
* Affiliation: Duke
* Host: Sarah
* Status: confirmed
* Title: How Visual Evolution Determines What We See
* Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.
'''9 April 2013'''
* Speaker: Mounya Elhilali
* Affiliation: Johns Hopkins
* Host: Tyler
* Status: confirmed
* Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis
* Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.
'''17th of April 2013'''
* Speaker: Wiktor Młynarski
* Affiliation: Max Planck Institute for Mathematics in the Sciences
* Host: Urs
* Status: confirmed
* Title: Statistical Models of Binaural Sounds
* Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however,  possess a rich structure and contain multiple frequency components.  This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.
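To make the two cues concrete, here is a toy computation of ITD (via cross-correlation between the two ears) and ILD (as an RMS level ratio in dB) from a synthetic stereo signal; the delay and attenuation are imposed by hand rather than by HRTF filtering.
<pre>
# Toy interaural cue computation: ITD from the peak of the cross-correlation,
# ILD from the RMS level ratio between the two ears.
import numpy as np

rng = np.random.default_rng(0)
fs = 44100
true_itd_samples = 20                    # ~0.45 ms delay imposed on the left ear
attenuation = 0.6                        # level difference imposed on the left ear

source = rng.normal(size=fs // 10)       # 100 ms noise burst as the "source"
right = source
left = attenuation * np.concatenate([np.zeros(true_itd_samples), source[:-true_itd_samples]])

# ITD: lag of the peak of the cross-correlation between the two ears.
lags = np.arange(-50, 51)
xcorr = [np.dot(right[50:-50], left[50 + l:len(left) - 50 + l]) for l in lags]
itd_est = lags[int(np.argmax(xcorr))]

# ILD: ratio of RMS levels, in dB.
ild_db = 20 * np.log10(np.sqrt(np.mean(left ** 2)) / np.sqrt(np.mean(right ** 2)))

print("estimated ITD (samples):", int(itd_est), "  true:", true_itd_samples)
print("estimated ILD (dB):", round(float(ild_db), 2))
</pre>
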
'''15 May 2013'''
* Speaker: Byron Yu
* Affiliation: CMU
* Host: Bruno/Jose (jointly sponsored with CNEP)
* Status: confirmed
* Title: TBA
* Abstract: TBA
'''22 May 2013'''
* Speaker: Bijan Pesaran
* Affiliation: NYU
* Host: Bruno/Jose (jointly sponsored with CNEP)
* Status: confirmed
* Title: TBA
* Abstract: TBA
=== 2011/12 academic year ===
'''15 Sep 2011 (Thursday, at noon)'''
* Speaker: Kathrin Berkner
* Affiliation: Ricoh Innovations Inc.
* Host: Ivana Tosic
* Status: Confirmed
* Title: TBD
* Abstract: TBD
'''21 Sep 2011'''
* Speaker: Mike Kilgard
* Affiliation: UT Dallas
* Host: Michael Silver
* Status: Confirmed
* Title:
* Abstract:
'''27 Sep 2011'''
* Speaker: Moshe Gur
* Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology
* Host: Bruno/Stan
* Status: Confirmed
* Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?
* Abstract: Any physical device we know, including computers, when comparing A to B must send the information to point C. I have done experiments in three modalities, somato-sensory, auditory, and visual, where 2 different loci at the primary cortex are stimulated and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the  brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take for example the question of how and where does multi-sensory integration take place;  we perceive a synchronized talking face yet detailed  visual and auditory information are represented at very different brain loci.
'''5 Oct 2011'''
* Speaker: Susanne Still
* Affiliation: University of Hawaii at Manoa
* Host: Jascha
* Status: confirmed
* Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium
* Abstract: Understanding the physical processes that underly the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energetically efficiently. We could speculate that perhaps biological systems may have evolved to reflect this kind of adaptation. One interesting insight here is that purely physical notions require what is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.
'''19 Oct 2011'''
* Speaker: Graham Cummins
* Affiliation: WSU
* Host: Jeff Teeters
* Status: Confirmed
* Title:
* Abstract:
'''26 Oct 2011'''
* Speaker: Shinji Nishimoto
* Affiliation: Gallant lab, UC Berkeley
* Host: Bruno
* Status: Confirmed
* Title:
* Abstract:
'''14 Dec 2011'''
* Speaker: Austin Roorda
* Affiliation: UC Berkeley
* Host: Bruno
* Status: Confirmed
* Title: How the unstable eye sees a stable and moving world
* Abstract:


'''Dec 5, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=47 Tanya Baker]
* Affiliation: U Chicago
* Host: Kilian
* Title: What Forest Fires Tell Us About the Brain

'''11 Jan 2012'''
* Speaker: Ken Nakayama
* Affiliation: Harvard University
* Host: Bruno
* Status: confirmed
* Title: Subjective Contours
* Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th C,  providing the dominant understanding of how the visual system works at its early stages.  Its reign has been extended to the field of object recognition where in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).
Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously as the 2.5 D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework to understand amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater, ignored, leapt over.
Subjective contours, however, remain as vivid as ever, even more so.
Every day, our visual system makes countless visual inferences as to the layout of the world's surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.


'''December 1, 2006 1.30pm'''
* Informal visit: Nancy Kopell
* Affiliation: Boston University
* Host: Fritz
* Title: No talk: Informal visit in the afternoon

'''Tuesday, 24 Jan 2012'''
* Speaker: Aniruddha Das
* Affiliation: Columbia University
* Host: Fritz
* Status: confirmed
* Title:
* Abstract:


'''Nov 28, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=46 Thomas Dean]
* Affiliation: Brown University/Google
* Host: Bruno

'''22 Feb 2012'''
* Speaker: Elad Schneidman
* Affiliation: Department of Neurobiology, Weizmann Institute of Science
* Host: Bruno
* Status: confirmed
* Title: Sparse high order interaction networks underlie learnable neural population codes
* Abstract:
 
'''29 Feb 2012 (at noon as usual)'''
* Speaker: Heather Read
* Affiliation: U. Connecticut
* Host: Mike DeWeese
* Status: confirmed
* Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"
* Abstract: TBD
 
'''1 Mar 2012 (note: Thurs)'''
* Speaker: Daniel Zoran
* Affiliation: Hebrew University, Jerusalem
* Host: Bruno
* Status: confirmed
* Title: TBA
* Abstract:


'''Nov 21, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=45 Urs Koster]
* Affiliation: University of Helsinki
* Host: Bruno
* Title: Towards Multi-Layer Processing of Natural Images

'''7 Mar 2012'''
* Speaker: David Sivak
* Affiliation: UCB
* Host: Mike DeWeese
* Status: Confirmed
* Title: TBA
* Abstract:

'''8 Mar 2012'''
* Speaker: Ivan Schwab
* Affiliation: UC Davis
* Host: Bruno
* Status: Confirmed
* Title: Evolution's Witness: How Eyes Evolved
* Abstract:


'''Nov 14, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=42 Andrew D. Straw]
* Affiliation: Bioengineering, California Institute of Technology
* Host: Kilian
* Title: Closed-Loop, Visually-Based Flight Regulation in a Model Fruit Fly

'''14 Mar 2012'''
* Speaker: David Sussillo
* Affiliation:
* Host: Jascha
* Status: confirmed
* Title:
* Abstract:


'''Nov 7, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=43 Mitya Chklovskii]
* Host: Bruno
* Title: What determines the shape of neuronal arbors?

'''18 April 2012'''
* Speaker: Kristofer Bouchard
* Affiliation: UCSF
* Host: Bruno
* Status: confirmed
* Title: Cortical Foundations of Human Speech Production
* Abstract:


'''Oct 31, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=44 Matthias Kaschube]
* Host: Kilian
* Title: A mathematical constant in the design of the visual cortex

'''23 May 2012''' (rescheduled from April 11)
* Speaker: Logan Grosenick
* Affiliation: Stanford, Deisseroth & Suppes Labs
* Host: Jascha
* Status: confirmed
* Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics
* Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics.


[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006.
[2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.
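As a concrete illustration of the kind of analysis the abstract above alludes to for large, sparse, nonnegative imaging data, here is a minimal sketch of factorizing a (voxels x time) matrix with nonnegative matrix factorization. This is our own toy example on synthetic data (assuming numpy and scikit-learn), not the speaker's pipeline.

<pre>
# Toy sketch: NMF on a synthetic (voxels x time) calcium-imaging-like matrix.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_voxels, n_frames, n_sources = 5000, 200, 10

# Synthetic nonnegative data: spatial footprints times temporal traces, plus noise
footprints = rng.exponential(1.0, size=(n_voxels, n_sources))
traces = rng.exponential(1.0, size=(n_sources, n_frames))
movie = footprints @ traces + 0.1 * rng.random((n_voxels, n_frames))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=300)
W = model.fit_transform(movie)   # estimated spatial components
H = model.components_            # estimated temporal components
print(W.shape, H.shape, round(model.reconstruction_err_, 2))
</pre>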


BIO: Logan received bachelors degrees with honors in Biology and Psychology from Stanford, and a Masters in Statistics from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.

'''Oct 3, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=41 Jay McClelland]
* Affiliation: Mind, Brain & Computation/MBC, Psychology Department, Stanford
* Host: Evan
* Title: Graded Constraints in English Word Forms ([http://www.archive.org/details/Redwood_Center_2006_10_03_McClelland video])


'''Sept 25, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=38 Peter Latham]
* Affiliation: Gatsby Unit, UCL
* Host: Bruno
* Title: Requiem for the spike ([http://www.archive.org/details/Redwood_Center_2006_09_25_Latham video])

'''7 June 2012''' (Thursday)
* Speaker: Mitya Chklovskii
* Affiliation: Janelia
* Host: Bruno
* Status:
* Title:
* Abstract:


'''Sept 19, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=40 Jerry Feldman]
* Affiliation: ICSI/UC Berkeley
* Host: Bruno
* Title: From Molecule to Metaphor: Towards a Unified Cognitive Science ([http://www.archive.org/details/redwood_center_2006_09_19_feldman video])

'''27 June 2012'''
* Speaker: Jerry Feldman
* Affiliation:
* Host: Bruno
* Status:
* Title:
* Abstract:
 
'''30 July 2012'''
* Speaker: Lucas Theis
* Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen
* Host: Jascha
* Status: Confirmed
* Title: Hierarchical models of natural images
* Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks as well as a means to better understand the computations performed by the visual system in the brain. A lot of theoretical considerations and biological observations point to the fact that natural image models should be hierarchically organized, yet to date, the best known models are still based on what is better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.
 
(joint work with Reshad Hosseini and Matthias Bethge)
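For readers unfamiliar with the building block mentioned above, here is a minimal sketch (ours, not the speaker's code) of sampling from a single Gaussian scale mixture; the gamma prior on the scale variable is an arbitrary choice made only for illustration.

<pre>
# Toy sketch: draw samples from a Gaussian scale mixture (GSM) and check heavy tails.
import numpy as np

rng = np.random.default_rng(1)
dim, n_samples = 16, 10000           # e.g. 4x4 image patches, flattened

cov = np.eye(dim)                    # Gaussian covariance (identity for simplicity)
g = rng.multivariate_normal(np.zeros(dim), cov, size=n_samples)
z = rng.gamma(2.0, 1.0, size=(n_samples, 1))   # scale variable, one per sample
x = np.sqrt(z) * g                   # GSM sample: heavier-tailed than a Gaussian

# Excess kurtosis of one coordinate; a positive value indicates heavy tails
k = ((x[:, 0] - x[:, 0].mean()) ** 4).mean() / x[:, 0].var() ** 2 - 3
print(round(k, 2))
</pre>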
 
=== 2010/11 academic year ===
 
'''02 Sep 2010'''
* Speaker: Johannes Burge
* Affiliation: University of Texas at Austin
* Host: Jimmy
* Status: Confirmed
* Title:
* Abstract:
 
'''8 Sep 2010'''
* Speaker: Tobi Szuts
* Affiliation: Meister Lab/ Harvard U.
* Host: Mike DeWeese
* Status: Confirmed
* Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.
* Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer up to >60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has a greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.
 
'''29 Sep 2010'''
* Speaker: Vikash Gilja
* Affiliation: Stanford University
* Host: Charles
* Status: Confirmed
* Title: Towards Clinically Viable Neural Prosthetic Systems.
* Abstract:
 
'''20 Oct 2010'''
* Speaker: Alexandre Francois
* Affiliation: USC
* Host:
* Status: Confirmed
* Title:
* Abstract:
 
'''3 Nov 2010'''
* Speaker: Eric Jonas and Vikash Mansinghka
* Affiliation:  Navia Systems
* Host: Jascha
* Status: Confirmed
* Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications
* Abstract: Complex probabilistic models and Bayesian inference are becoming
increasingly critical across science and industry, especially in
large-scale data analysis. They are also central to our best
computational accounts of human cognition, perception and action.
However, all these efforts struggle with the infamous curse of
dimensionality. Rich probabilistic models can seem hard to write and
even harder to solve, as specifying and calculating probabilities
often appears to require the manipulation of exponentially (and
sometimes infinitely) large tables of numbers.
 
We argue that these difficulties reflect a basic mismatch between the
needs of probabilistic reasoning and the deterministic, functional
orientation of our current hardware, programming languages and CS
theory. To mitigate these issues, we have been developing a stack of
abstractions for natively probabilistic computation, based around
stochastic simulators (or samplers) for distributions, rather than
evaluators for deterministic functions. Ultimately, our aim is to
produce a model of computation and the associated hardware and
programming tools that are as suited for uncertain inference and
decision-making as our current computers are for precise arithmetic.
 
In this talk, we will give an overview of the entire stack of
abstractions supporting natively probabilistic computation, with
technical detail on several hardware and software artifacts we have
implemented so far. We will also touch on some new theoretical results
regarding the computational complexity of probabilistic programs.
Throughout, we will motivate and connect this work to some current
applications in biomedical data analysis and computer vision, as well
as potential hypotheses regarding the implementation of probabilistic
computation in the brain.
 
This talk includes joint work with Keith Bonawitz, Beau Cronin,
Cameron Freer, Daniel Roy and Joshua Tenenbaum.
 
BRIEF BIOGRAPHY
 
Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a
venture-funded startup company building natively probabilistic
computing machines. He spent 10 years at MIT, eventually earning an
SB. in Mathematics, an SB. in Computer Science, an MEng in Computer
Science, and a PhD in Computation. He held graduate fellowships from
the NSF and MIT's Lincoln Laboratories, and his PhD dissertation won
the 2009 MIT George M. Sprowls award for best dissertation in computer
science. He currently serves on DARPA's Information Science and
Technology (ISAT) Study Group.
 
Eric Jonas is a co-founder of Navia Systems, responsible for in-house
accelerated inference research and development. He spent ten years at
MIT, where he earned SB degrees in electrical engineering and computer
science and neurobiology, an MEng in EECS, with a neurobiology PhD
expected really soon. He’s passionate about biological applications
of probabilistic reasoning and hopes to use Navia’s capabilities to
combine data from biological science, clinical histories, and patient
outcomes into seamless models.
 
'''8 Nov 2010'''
* Speaker: Patrick Ruther
* Affiliation:  Imtek, University of Freiburg
* Host: Tim
* Status: Confirmed
* Title: TBD
* Abstract: TBD


'''Sept 5, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=39 Tom Griffiths]
* Affiliation: Cogsci/UC Berkeley
* Host: Bruno
* Title: Natural Statistics and Human Cognition ([http://www.archive.org/details/Redwood_Center_2006_09_05_Griffiths video])

'''10 Nov 2010'''
* Speaker: Aurel Lazar
* Affiliation: Department of Electrical Engineering, Columbia University
* Host: Bruno
* Status: Confirmed
* Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons
* Abstract: We first present a general framework for the reconstruction of natural video
scenes encoded with a population of spiking neural circuits with random thresholds.
The visual encoding system consists of a bank of filters, modeling the visual
receptive fields, in cascade with a population of neural circuits, modeling encoding
with spikes in the early visual system.
The neuron models considered include integrate-and-fire neurons and ON-OFF
neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed
to be random. We show that for both time-varying and space-time-varying stimuli neural
spike encoding is akin to taking noisy measurements on the stimulus.
Second, we formulate the reconstruction problem as the minimization of a
suitable cost functional in a finite-dimensional vector space and provide an explicit
algorithm for stimulus recovery. We also present a general solution using the theory of
smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples of both
synthetic video as well as for natural scenes and show that the quality of the
reconstruction degrades gracefully as the threshold variability of the neurons increases.
Third, we demonstrate a number of simple operations on the original visual stimulus
including translations, rotations and zooming. All these operations are natively executed
in the spike domain. The processed spike trains are decoded for the faithful recovery
of the stimulus and its transformations.
Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley
neurons.
References:
Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou,
Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010,
Special Issue on Mathematical Models of Visual Coding,
http://dx.doi.org/10.1016/j.visres.2010.03.015
Aurel A. Lazar,
Population Encoding with Hodgkin-Huxley Neurons,
IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February, 2010,
Special Issue on Molecular Biology and Neuroscience,
http://dx.doi.org/10.1109/TIT.2009.2037040
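A schematic version of the reconstruction step described in the abstract, written in our own notation rather than the authors': the spike trains supply noisy linear measurements <math>q_k</math> of the stimulus <math>u</math>, and recovery is posed as a regularized minimization over an RKHS <math>\mathcal{H}</math>,

<math>
\hat{u} = \arg\min_{u \in \mathcal{H}} \sum_k \big( q_k - \langle u, \phi_k \rangle \big)^2 + \lambda \, \| u \|_{\mathcal{H}}^2 ,
</math>

where the <math>\phi_k</math> are measurement functionals determined by the neuron model and <math>\lambda</math> is a smoothing parameter; the papers cited above give the exact construction and the spline solution.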
 
'''11 Nov 2010''' (UCB holiday)
* Speaker: Martha Nari Havenith
* Affiliation: UCL
* Host: Fritz
* Status: Confirmed
* Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?
* Abstract:
 
'''19 Nov 2010'''  (note: on Friday because of SFN)
* Speaker: Dan Butts
* Affiliation: UMD
* Host: Tim
* Status: Confirmed
* Title: Common roles of inhibition in visual and auditory processing.
* Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas.  In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.
 
'''24 Nov 2010'''
* Speaker:  Eizaburo Doi
* Affiliation: NYU
* Host: Jimmy
* Status: Confirmed
* Title:
* Abstract:
 


'''Aug 1, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=35 Carol Whitney]
* Affiliation: U Maryland
* Host: Bruno
* Title: What can Visual Word Recognition Tell us about Visual Object Recognition? ([http://www.archive.org/details/Redwood_Center_2006_08_01_Whitney video])

'''29 Nov 2010 - informal talk'''
* Speaker: Eero Lehtonen
* Affiliation: UTU Finland
* Host: Bruno
* Status: Confirmed
* Title: Memristors
* Abstract:


'''July 18, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=37 Evan Smith]
* Affiliation: Redwood Center/Stanford
* Host: Bruno
* Title: Efficient auditory coding

'''1 Dec 2010'''
* Speaker: Gadi Geiger
* Affiliation: MIT
* Host: Fritz
* Status: Confirmed
* Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics
* Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers. This suggests wider neural tuning in dyslexics. In addition I will describe how this processing relates to difficulties in reading. Strengthening the argument and more so helping dyslexics I will describe a regimen of practice that results in improved reading in dyslexics while narrowing perception.
 
 
'''13 Dec 2010'''
* Speaker: Jorg Lueke
* Affiliation: FIAS
* Host: Bruno
* Status: Confirmed
* Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data
* Abstract:  In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions.  Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise.  In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings.  Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.
 
'''15 Dec 2010'''
* Speaker: Claudia Clopath
* Affiliation: Université Paris Descartes
* Host: Fritz
* Status: Confirmed
* Title:  
* Abstract:


=== 2005/2006 academic year ===


'''June 20, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=34 Vincent Bonin]
* Affiliation: Smith Kettlewell Institute
* Host: Thomas
* Title:

'''18 Jan 2011'''
* Speaker: Siwei Lyu
* Affiliation: Computer Science Department, University at Albany, SUNY
* Host: Bruno
* Status: confirmed
* Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation
* Abstract:


'''June 15, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=36 Philip Low]
* Affiliation: Salk Institute
* Host: Tony
* Title: A New Way To Look At Sleep

'''19 Jan 2011'''
* Speaker: David Field (informal talk)
* Affiliation:
* Host: Bruno
* Status: Tentative
* Title:
* Abstract:


'''May 2, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=32 Dileep George]
* Affiliation: Numenta
* Host: Bruno
* Title: Hierarchical, cortical memory architecture for pattern recognition

'''25 Jan 2011'''
* Speaker: Ruth Rosenholtz
* Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT
* Host: Bruno
* Status: Confirmed
* Title: What your visual system sees where you are not looking
* Abstract:
 
'''26 Jan 2011'''
* Speaker: Ernst Niebur
* Affiliation: Johns Hopkins U
* Host: Fritz
* Status: Confirmed
* Title:
* Abstract:
 
'''16 March 2011'''
* Speaker: Vladimir Itskov
* Affiliation: University of Nebraska-Lincoln
* Host: Chris
* Status: Confirmed
* Title:
* Abstract:
 
'''23 March 2011'''
* Speaker: Bruce Cumming
* Affiliation: National Institutes of Health
* Host: Ivana
* Status: Confirmed
* Title: TBD
* Abstract:
 
'''27 April 2011'''
* Speaker: Lubomir Bourdev
* Affiliation: Computer Science, UC Berkeley
* Host: Bruno
* Status: Confirmed
* Title: "Poselets and Their Applications in High-Level Computer Vision Problems"
* Abstract:


'''April 18, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=30 Risto Miikkulainen]
* Affiliation: The University of Texas at Austin
* Host: Bruno
* Title: Computational maps in the visual cortex ([http://www.archive.org/details/redwood_center_2006_04_18_miikkulainen video])

'''12 May 2011 (note: Thursday)'''
* Speaker: Jack Culpepper
* Affiliation: Redwood Center/EECS
* Host: Bruno
* Status: Confirmed
* Title: TBA
* Abstract:


'''April 11, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=29 Charles Anderson]
* Affiliation: Washington University School of Medicine
* Host: Bruno
* Title: Population Coding in V1 ([http://www.archive.org/details/redwood_center_2006_04_11_anderson video])

'''26 May 2011'''
* Speaker: Ian Stevenson
* Affiliation: Northwestern University
* Host: Bruno
* Status: Confirmed
* Title: Explaining tuning curves by estimating interactions between neurons
* Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.
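As a minimal sketch of the kind of interaction model the abstract describes, here is a Poisson GLM that predicts one neuron's spike counts from the rest of the population. This is our own illustration on synthetic data (assuming numpy and scikit-learn), not the speakers' analysis code.

<pre>
# Toy sketch: predict one neuron's counts from the other neurons with a Poisson GLM.
import numpy as np
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(2)
n_bins, n_neurons = 2000, 40

# Synthetic population counts with a shared latent drive, giving correlations
latent = rng.gamma(2.0, 1.0, size=n_bins)
rates = np.outer(latent, rng.uniform(0.1, 1.0, n_neurons))
counts = rng.poisson(rates)

target, predictors = counts[:, 0], counts[:, 1:]
model = PoissonRegressor(alpha=1e-3, max_iter=300).fit(predictors, target)
print("deviance-based pseudo-R^2:", round(model.score(predictors, target), 3))
</pre>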


'''April 10, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=33 Charles Anderson]
* Affiliation: Washington University School of Medicine
* Host: Bruno
* Title: A Comparison of Neurobiological and Digital Computation ([http://www.archive.org/details/redwood_center_2006_04_10_anderson video])

'''1 June 2011'''
* Speaker: Michael Oliver
* Affiliation: Gallant lab
* Host: Bruno
* Status: Tentative
* Title:
* Abstract:


'''April 4, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=18 Odelia Schwartz]
* Affiliation: The Salk Institute
* Host: Bruno
* Title: Natural images and cortical representation

'''8 June 2011'''
* Speaker: Alyson Fletcher
* Affiliation: UC Berkeley
* Host: Bruno
* Status: tentative
* Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity
* Abstract:  Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data.  In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli.  I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models based Gaussian approximations of loopy belief propagation.  The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data.  In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron.  A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers  improvement over previous compressed sensing methods.  The GAMP method is also validated on estimation of linear nonlinear Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.


'''March 21, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=26 Mark Schnitzer]
* Affiliation: Stanford University
* Host: Amir
* Title: In vivo microendoscopy and computational modeling studies of mammalian brain circuits

=== 2009/10 academic year ===


'''March 15, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=31 Mate Lengyel]
* Affiliation: Gatsby Unit/UCL London
* Host: Fritz
* Title: Bayesian model learning in human visual perception ([http://www.archive.org/details/redwood_center_2006_03_15_lengyel video])

'''2 September 2009'''
* Speaker: Keith Godfrey
* Affiliation: University of Cambridge
* Host: Tim
* Status: Confirmed
* Title: TBA
* Abstract:


'''March 14, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=22 Mate Lengyel]
* Affiliation: Gatsby Unit/UCL London
* Host: Fritz
* Title: Firing rates and phases in the hippocampus: what are they good for? ([http://www.archive.org/details/redwood_center_2006_03_14_lengyel video])

'''7 October 2009'''
* Speaker: Anita Schmid
* Affiliation: Cornell University
* Host: Kilian
* Status: Confirmed
* Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time
* Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as of yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.


'''March 7, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=25 Michael Wu]
* Affiliation: Gallant lab/UC Berkeley
* Host: Bruno
* Title: A Unified Framework for Receptive Field Estimation

'''28 October 2009'''
* Speaker: Andrea Benucci
* Affiliation: Institute of Ophthalmology, University College London
* Host: Bruno
* Status: Confirmed
* Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex
* Abstract:  It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and tens of microns spatial resolution. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1).  I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information.  I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation.  While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that such rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between a dynamical regime where either feed-forward or intra-cortical inputs determine the response properties of the network.


'''February 28, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=23 Dario Ringach]
* Affiliation: UCLA
* Host: Thomas
* Title: Population dynamics in primary visual cortex

'''12 November 2009 (Thursday)'''
* Speaker: Song-Chun Zhu
* Affiliation: UCLA
* Host: Jimmy
* Status: Confirmed
* Title:
* Abstract:


'''February 21, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=20 Gerard Rinkus]
* Affiliation: Brandeis University
* Host: Bruno
* Title: Hierarchical Sparse Distributed Representations of Sequence Recall and Recognition ([http://www.archive.org/details/redwood_center_2006_02_21_rinkus video])

'''18 November 2009'''
* Speaker: Dan Graham
* Affiliation: Dept. of Mathematics, Dartmouth College
* Host: Bruno
* Status: Confirmed
* Title: The Packet-Switching Brain: A Hypothesis
* Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large scale shift from the now dominant “computer metaphor,” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.


'''February 14, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=27 Jack Cowan]
* Affiliation: U Chicago
* Host: Bruno
* Title: Spontaneous pattern formation in large scale brain activity: what visual migraines and hallucinations tell us about the brain ([http://www.archive.org/details/redwood_center_2006_02_14_cowan video])

'''16 December 2009'''
* Speaker: Pietro Berkes
* Affiliation: Volen Center for Complex Systems, Brandeis University
* Host: Bruno
* Status: Confirmed
* Title: Generative models of vision: from sparse coding toward structured models
* Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes.  This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models.  In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells receptive fields can be accounted for based uniquely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding.  In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel population of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.
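For illustration only (our sketch, not the speaker's model): the basic sparse coding setup reviewed in the first part of the abstract, run here on random stand-in patches just to show the ingredients; a real experiment would use whitened natural image patches.

<pre>
# Toy sketch: learn a sparse dictionary on (stand-in) image patches.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(4)
patches = rng.normal(size=(2000, 64))            # stand-in for whitened 8x8 patches
patches -= patches.mean(axis=1, keepdims=True)   # remove the DC component

learner = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                      batch_size=50, random_state=0)
codes = learner.fit_transform(patches)           # sparse coefficients per patch
print("dictionary shape:", learner.components_.shape,
      "| mean nonzero coefficients per patch:", (codes != 0).sum(1).mean())
</pre>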


'''February 7, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=19 Christian Wehrhahn]
* Affiliation: Max Planck Institute for Biological Cybernetics, Tuebingen, Germany
* Host: Tony
* Title: Seeing blindsight: motion at isoluminance?

'''6 January 2010'''
* Speaker: Susanne Still
* Affiliation: U of Hawaii
* Host: Fritz
* Status: Confirmed
* Title:
* Abstract:


'''January 23, 2006 (Monday)'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=17 Read Montague]
* Affiliation: Baylor College of Medicine
* Host: Bruno
* Title: Abstract plans and reward signals in a multi-round trust game

'''20 January 2010'''
* Speaker: Tom Dean
* Affiliation: Google
* Host: Bruno
* Status: Confirmed
* Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors
* Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multiple-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and some more detailed experimental results on one particular problem involving video-content analysis.


'''January 17, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=21 Erhardt Barth]
* Affiliation: Institute for Neuro- and Bioinformatics, Luebeck, Germany
* Host: Bruno
* Title: Guiding eye movements for better communication ([http://www.archive.org/details/redwood_center_2006_01_17_barth video])

'''27 January 2010'''
* Speaker: David Philiponna
* Affiliation: Paris
* Host: Bruno
* Status: Confirmed
* Title:
* Abstract:
 
'''24 February 2010'''
* Speaker: Gordon Pipa
* Affiliation: U Osnabrueck/MPI Frankfurt
* Host: Fritz
* Status: Confirmed
* Title:
* Abstract:
 
'''3 March 2010'''
* Speaker: Gaute Einevoll
* Affiliation: UMB, Norway
* Host: Amir
* Status: Confirmed
* Title: TBA
* Abstract: TBA
 


'''January 3, 2006'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=16 Dan Butts]
* Affiliation: Harvard University
* Host: Thomas
* Title: "Temporal hyperacuity": visual neuron function at millisecond time resolution

'''4 March 2010'''
* Speaker: Harvey Swadlow
* Affiliation:
* Host: Fritz
* Status: Confirmed
* Title:
* Abstract:
 
'''8 April 2010'''
* Speaker: Alan Yuille
* Affiliation: UCLA
* Host: Amir
* Status: Confirmed (for 1pm)
* Title:
* Abstract:
 
'''28 April 2010'''
* Speaker: Dharmendra Modha - cancelled
* Affiliation: IBM
* Host:Fritz
* Status: Confirmed
* Title:  
* Abstract:
 
'''5 May 2010'''
* Speaker: David Zipser
* Affiliation: UCB
* Host: Daniel Little
* Status: Tentative
* Title: Brytes 2:
* Abstract: Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors. In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and possibly be applicable to robotics and BMI.


'''December 13, 2005'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=11 Paul Rhodes]
* Affiliation: Stanford University
* Title: Simulations of a thalamocortical column with compartment model cells and dynamic synapses ([http://www.archive.org/details/redwood_center_2005_12_13_rhodes video])

'''December 6, 2005'''
* Speaker: Special debate between [http://redwood.berkeley.edu/seminar-info.php?id=15 Walter J. Freeman] and [http://redwood.berkeley.edu/seminar-info.php?id=14 Robert Hecht-Nielsen]
* Affiliation: University of California at Berkeley (Walter). University of California at San Diego (Robert)
* Title: Waves or words in neocortex
* Video: [http://www.archive.org/details/RedwoodCenterforTheoreticalNeuroscienceWalterJFreemanAfieldtheoreticapproachtounderstandingneocortex Walter], [http://www.archive.org/details/RedwoodCenterforTheoreticalNeuroscienceRobertHechtNielsenConfabulationTheory Robert]


'''November 29, 2005'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=13 Stanley Klein]
* Affiliation: School of Optometry, UC Berkeley
* Title: Limits of Vision and psychophysical methods ([http://www.archive.org/details/redwood_center_2005_11_29_klein video])

'''12 May 2010'''
* Speaker: Frank Werblin (Redwood group meeting - internal only)
* Affiliation: Berkeley
* Host: Bruno
* Status: Tentative
* Title:
* Abstract:


'''November 22, 2005'''
* Speaker: [http://redwood.berkeley.edu/seminar-info.php?id=12 Scott Makeig]
* Affiliation: Swartz Center for Computational Neuroscience, Institute for Neural Computation, UCSD
* Title: Viewing event-related brain dynamics from the top down

'''19 May 2010'''
* Speaker: Anna Judith
* Affiliation: UCB
* Host: Daniel Little (Redwood Lab Meeting - internal only)
* Status: confirmed
* Title:
* Abstract:

Latest revision as of 19:08, 7 September 2018


Tentative / Confirmed Speakers

January 31 2018

  • Speaker: Joel Makin
  • Time: 12:00
  • Affiliation: UCSF
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

February 6, 2018

  • Speaker: Leenoy Meshulam
  • Time: 12:00
  • Affiliation: Princeton University
  • Host: Fritz
  • Status: confirmed
  • Title: The 1000+ neurons challenge: emergent simplicity in (very) large populations
  • Abstract: Recent technological progress has dramatically increased our access to the neural activity underlying memory-related tasks. These complex high-dimensional data call for theories that allow us to identify signatures of collective activity in the networks that are crucial for the emergence of cognitive functions. As an example, we study the neural activity in dorsal hippocampus as a mouse runs along a virtual linear track. One of the dominant features of this data is the activity of place cells, which fire when the animal visits particular locations. During the first stage of our work we used a maximum entropy framework to characterize the probability distribution of the joint activity patterns observed across ensembles of up to 100 cells. These models, which are equivalent to Ising models with competing interactions, make surprisingly accurate predictions for the activity of individual neurons given the state of the rest of the network, and this is true both for place cells and for non-place cells. For the second stage of our work we study networks of ~ 1500 neurons. To address this much larger system, we use different coarse graining methods, in the spirit of the renormalization group, to uncover macroscopic features of the network. We see hints of scaling and of behavior that is controlled by a non-trivial fixed point. Perhaps, then, there is emergent simplicity even in these very complex systems of real neurons in the brain.
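For readers who want to see the pairwise maximum-entropy (Ising) model written out, here is a brute-force toy version for a handful of neurons. The fields and couplings are arbitrary example values, and this is our illustration rather than the speaker's analysis code (assuming numpy).

<pre>
# Toy sketch: a pairwise maximum-entropy (Ising) model over binary activity patterns.
import itertools
import numpy as np

n = 5                                        # small enough to enumerate all 2**n patterns
rng = np.random.default_rng(3)
h = rng.normal(0, 0.5, n)                    # per-neuron fields
J = np.triu(rng.normal(0, 0.3, (n, n)), 1)   # pairwise couplings (upper triangle)

patterns = np.array(list(itertools.product([0, 1], repeat=n)))
energies = -(patterns @ h) - np.einsum('pi,ij,pj->p', patterns, J, patterns)
p = np.exp(-energies)
p /= p.sum()                                 # Boltzmann distribution P(x) ~ exp(h.x + x'Jx)

print("P(all silent) =", round(p[0], 3))
print("model mean firing probabilities:", np.round(patterns.T @ p, 3))
</pre>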


!!! NOTE: going forward for spring term 2018, please avoid Wednesdays when scheduling Redwood seminars: the Simons brain and computation seminars meet that morning, and it makes for a packed day to have both !!!


February 21, 2018

  • Speaker: Tianshi Wang
  • Time: 12:00
  • Affiliation: Berkeley
  • Host: Bruno
  • Status: tentative
  • Title:
  • Abstract:

April 2, 2018

  • Speaker: Pascal Fries
  • Time: 12:00
  • Affiliation: Berkeley
  • Host: Bruno/Dana Ballard
  • Status: tentative
  • Title:
  • Abstract:

September 12, 2018

  • Speaker: Wujie Zhang
  • Time: 12:00
  • Affiliation: Yartsev lab, Berkeley
  • Host: Guy
  • Status: Confirmed
  • Title:
  • Abstract:

September 17, 2018

  • Speaker: Juergen Jost
  • Time: 12:00
  • Affiliation: MPI for Mathematics in the Sciences, Leipzig
  • Host: Fritz
  • Status: confirmed
  • Title:
  • Abstract:

TBD, sometime in the Fall

  • Speaker: Evangelos Theodorou
  • Time: TBD
  • Affiliation: GeorgiaTech
  • Host: Mike/Dibyendu Mandal
  • Status: planning
  • Title: TBD
  • Abstract: TBD


TBD, 2016

  • Speaker: Alexander Stubbs
  • Time: 12:00
  • Affiliation: UC Berkeley
  • Host: Bruno/Michael Levy
  • Status: tentative
  • Title: Could chromatic aberration allow for an alternative evolutionary pathway towards color vision?
  • Abstract: We present a mechanism by which organisms with only a single photoreceptor, which have a monochromatic view of the world, can achieve color discrimination. An off-axis pupil and the principle of chromatic aberration (where different wavelengths come to focus at different distances behind a lens) can combine to provide “color-blind” animals with a way to distinguish colors. As a specific example, we constructed a computer model of the visual system of cephalopods (octopus, squid, and cuttlefish) that have a single unfiltered photoreceptor type. We compute a quantitative image quality budget for this visual system and show how chromatic blurring dominates the visual acuity in these animals in shallow water. This proposed mechanism is consistent with the extensive suite of visual/behavioral and physiological data that has been obtained from cephalopod studies and offers a possible solution to the apparent paradox of vivid chromatic behaviors in color blind animals. Moreover, this proposed mechanism has potential applicability in organisms with limited photoreceptor complements, such as spiders and dolphins.

Previous Seminars

2017/18 academic year

July 10, 2017

  • Speaker: David Field
  • Time: 6:00pm
  • Affiliation: Cornell
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

July 18, 2017

  • Speaker: Jordi Puigbò
  • Time: 12:30
  • Affiliation: Synthetic, Perceptive, Emotive and Cognitive Systems (SPECS) lab, Dept. of Information and Telecommunication Technologies, Universitat Pompeu Fabra (Barcelona - Spain)
  • Host: Vasha
  • Status: Confirmed
  • Title: State Dependent Modulation of Perception Based on a Computational Model of Conditioning
  • Abstract: The embodied mammalian brain evolved to adapt to an only partially known and knowable world. The adaptive labeling of the world is critically dependent on the neocortex, which in turn is modulated by a range of subcortical systems such as the thalamus, ventral striatum, and the amygdala. A particular case in point is the learning paradigm of classical conditioning, where acquired representations of states of the world such as sounds and visual features are associated with predefined discrete behavioral responses such as eye blinks and freezing. Learning progresses in a very specific order, where the animal first identifies the features of the task that are predictive of a motivational state and then forms the association of the current sensory state with a particular action and shapes this action to the specific contingency. This adaptive feature selection has both attentional and memory components, i.e. a behaviorally relevant state must be detected while its representation must be stabilized to allow its interfacing to output systems. Here we present a computational model of the neocortical systems that underlie this feature detection process and its state-dependent modulation mediated by the amygdala and its downstream target, the nucleus basalis of Meynert. Specifically, we analyze how amygdala-driven cholinergic modulation switches between two perceptual modes, one for exploitation of learned representations and prototypes and another one for the exploration of new representations that provoked these changes in the motivational state, presenting a framework for rapid learning of behaviorally relevant perceptual representations. Beyond reward-driven learning that is mostly based on exploitation, this paper presents a complementary mechanism for quick exploratory perception and learning grounded in the understanding of fear and surprise.

Aug. 14, 2017

  • Speaker: Brent Doiron
  • Time: 12:00
  • Affiliation:
  • Host: Bruno/Hillel
  • Status: tentative
  • Title:
  • Abstract:

Aug. 15, 2017

  • Speaker: Ken Miller
  • Time: 12:00
  • Affiliation: Columbia
  • Host: Bruno/Hillel
  • Status: confirmed
  • Title:
  • Abstract:

Aug. 16, 2017

  • Speaker: Joshua Vogelstein
  • Time: 12:00
  • Affiliation: JHU
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

Sept. 6, 2017

  • Speaker: Gerald Friedland
  • Time: 12:00
  • Affiliation: UC Berkeley
  • Host: Bruno/Jerry
  • Status: confirmed
  • Title: A Capacity Scaling Law for Artificial Neural Networks
  • Abstract:

Sept. 20, 2017

  • Speaker: Carl Pabo
  • Time: 12:00
  • Affiliation:
  • Host: Bruno
  • Status: confirmed
  • Title: Human Thought and the Human Future
  • Abstract:

Oct. 11, 2017

  • Speaker: Deepak Pathak and Pulkit Agrawal
  • Time: 12:30 PM
  • Affiliation: UC Berkeley, BAIR
  • Host: Mayur Mudigonda
  • Status: Confirmed
  • Title: Curiosity and Rewards
  • Abstract:

October 25th 2017

  • Speaker: Caleb Kemere
  • Time: 12:00
  • Affiliation: Rice
  • Host: Guy Isely
  • Status: Confirmed
  • Title: Unsupervised Inference of the Hippocampal Population Code from Offline Activity
  • Abstract: TBD-- HMM-based hippocampal replay

Nov. 8, 2017

  • Speaker: John Harte
  • Time: 12:00
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title: Maximum Entropy and the Inference of Patterns in Nature
  • Abstract:

Nov. 16, 2017

  • Speaker: Jeff Hawkins
  • Time: 12:00
  • Affiliation: Numenta
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

November 29th 2017

  • Speaker: Joel Kaardal
  • Time: 12:00
  • Affiliation: Salk
  • Host: Bruno/Frederic Theunissen
  • Status: Confirmed
  • Title:
  • Abstract:

December 13, 2017

  • Speaker: Zhaoping Li
  • Time: 12:00
  • Affiliation: UCL
  • Host: Bruno/Frederic Theunissen
  • Status: confirmed
  • Title:
  • Abstract:

December 19, 2017

  • Speaker: Shaowei Lin
  • Time: 12:00
  • Affiliation:
  • Host: Chris Hillar
  • Status: confirmed
  • Title: Biologically plausible deep learning for recurrent spiking neural networks.
  • Abstract: Despite widespread success in deep learning, backpropagation has been criticized for its biological implausibility. To address this issue, Hinton and Bengio have suggested that our brains are performing approximations of backpropagation, and some of their proposed models seem promising. In the same vein, we propose a different model for learning in recurrent neural networks (RNNs), known as McCulloch-Pitts processes. As opposed to traditional models for RNNs (such as LSTMs) which are based on continuous-valued neurons operating in discrete time, our model consists of discrete-valued (spiking) neurons operating in continuous time. Through our model, we are able to derive extremely simple and local learning rules, which directly explain experimental results in Spike-Timing-Dependent Plasticity (STDP).
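As a point of reference for the "simple and local learning rules" mentioned above, here is the classic pair-based exponential STDP window in a few lines. The constants are arbitrary examples, and this sketch is ours, not the speaker's model.

<pre>
# Toy sketch: pair-based exponential STDP weight update as a function of spike timing.
import numpy as np

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau_plus = tau_minus = 20.0     # time constants (ms)

def stdp_dw(dt):
    """Weight change for a spike-time difference dt = t_post - t_pre (ms)."""
    return np.where(dt >= 0,
                    A_plus * np.exp(-dt / tau_plus),     # pre before post: potentiate
                    -A_minus * np.exp(dt / tau_minus))   # post before pre: depress

dts = np.array([-40.0, -10.0, 5.0, 30.0])
print(np.round(stdp_dw(dts), 5))
</pre>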

Jan. 24, 2018

  • Speaker: Miguel Gredilla
  • Time: 12:00
  • Affiliation: Vicarious
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

2016/17 academic year

Sept. 7, 2016

  • Speaker: Dan Stowell
  • Time: 12:00
  • Affiliation: Queen Mary, University of London
  • Host: Frederic Theunissen
  • Status: confirmed
  • Title:
  • Abstract:

Sept. 8, 2016

  • Speaker: Barb Finlay
  • Time: 12:00
  • Affiliation: Cornell Univ
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

Sept. 27, 2016

  • Speaker: Yoshua Bengio
  • Time: 11:00
  • Affiliation: Univ Montreal
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

Oct. 12, 2016

  • Speaker: Paul Rhodes
  • Time: 4:00
  • Affiliation: Specific Technologies
  • Host: Dylan/Bruno
  • Status: confirmed
  • Title: A novel and important problem in spatiotemporal pattern classification
  • Abstract: Specific Technologies uses a sensor response that consists of a vector time series, a spatiotemporal fingerprint, to classify bacteria at the strain level during their growth. The identification of resistant strains of bacteria has become one of the world's great problems (here is a link to a $20M prize that the US govt has issued: https://www.nih.gov/news-events/news-releases/federal-prize-competition-seeks-innovative-ideas-combat-antimicrobial-resistance). We are using deep convolutional nets to do this classification, but they are instantaneous, and so do not capture the temporal patterns that are often at the core of what differentiates strains. So using the full temporal character of the sensor response time series is a cutting edge neural ML problem, and important to society too.

Oct. 25, 2016

  • Speaker: Douglas L. Jones
  • Time: 2:00
  • Affiliation: ECE Department, University of Illinois at Urbana-Champaign
  • Host: Bruno
  • Status: confirmed
  • Title: Optimal energy-efficient coding in sensory neurons
  • Abstract: Evolutionary pressure suggests that the spike-based code in the sensory nervous system should satisfy two opposing constraints: 1) minimize signal distortion in the encoding process (i.e., maintain fidelity) by keeping the average spike rate as high as possible, and 2) minimize the metabolic load on the neuron by keeping the average spike rate as low as possible. We hypothesize that selective pressure has shaped the biophysics of a neuron to satisfy these conflicting demands. An energy-fidelity trade-off can be obtained through a constrained optimization process that achieves the lowest signal distortion for a given constraint on the spike rate. We derive the asymptotically optimal average-energy-constrained neuronal source code and show that it leads to a dynamic threshold that functions as an internal decoder (reconstruction filter) and adapts a spike-firing threshold so that spikes are emitted only when the coding error reaches this threshold. A stochastic extension is obtained by adding internal noise (dithering, or stochastic resonance) to the spiking threshold. We show that the source-coding neuron model i) reproduces experimentally observed spike-times in response to a stimulus, and ii) reproduces the serial correlations in the observed sequence of inter-spike intervals, using data from a peripheral sensory neuron and a central (cortical) somatosensory neuron. Finally, we show that the spike-timing code, although a temporal code, is in the limit of high firing rates an instantaneous rate code and accurately predicts the peri-stimulus time histogram (PSTH). We conclude by suggesting possible biophysical (ionic) mechanisms for this coding scheme.
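  • Code sketch: the dynamic-threshold idea (emit a spike when the coding error between the stimulus and an internal reconstruction reaches a threshold) can be sketched as a send-on-delta style encoder; the leak model and parameters are assumptions for illustration, not the derived optimal code.
      import numpy as np

      def threshold_spike_encoder(stimulus, theta=0.5, tau=50.0, dt=1.0):
          """Fire whenever the error between the stimulus and a leaky internal
          reconstruction reaches theta; each spike adds a fixed quantum to the decoder."""
          recon, spikes = 0.0, []
          decay = np.exp(-dt / tau)
          for t, s in enumerate(stimulus):
              recon *= decay                 # internal reconstruction decays between spikes
              if s - recon >= theta:         # coding error hit the threshold
                  spikes.append(t)
                  recon += theta
          return spikes

      t = np.arange(500)
      spike_times = threshold_spike_encoder(np.sin(2 * np.pi * t / 200) + 1.0)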

October 26, 2016

  • Speaker: Eric Jonas
  • Time: 12:00
  • Affiliation: UC Berkeley
  • Host: Charles Frye
  • Status: confirmed
  • Title: Could a neuroscientist understand a microprocessor?
  • Abstract: There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current computational approaches in neuroscience may fall short of producing meaningful models of the brain. We discuss several obvious shortcomings with this model, and ways that they might be addressed, both experimentally and computationally.
  • Bio: Eric Jonas is currently a postdoc in computer science at UC Berkeley working with Ben Recht on machine learning for scientific data acquisition. He earned his PhD in Computational Neuroscience, M. Eng in Electrical Engineering, BS in Electrical Engineering and Computer Science, and BS in Neurobiology, all from MIT. Prior to his return to academia, he was founder and CEO of Prior Knowledge, a predictive database company which was acquired in 2012 by Salesforce.com, where he was Chief Predictive Scientist until 2014. In 2015 he was named one of the top rising stars in bioengineering by the Defense Department’s Advanced Research Projects Agency (DARPA).

Nov. 9, 2016

  • Speaker: Pulkit Agrawal
  • Time: 12:00
  • Affiliation: EECS, UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

Nov. 16, 2016

  • Speaker: Sebastian Musslick
  • Time: 12:00
  • Affiliation: Princeton Neuroscience Institute (Princeton University)
  • Host: Brian Cheung
  • Status: confirmed
  • Title: Parallel Processing Capability Versus Efficiency of Representation in Neural Network Architectures
  • Abstract: One of the most salient and well-recognized features of human goal-directed behavior is our limited ability to conduct multiple demanding tasks at once. Why is this? Some have suggested it reflects metabolic limitations, or structural ones. However, both explanations are unlikely. The brain routinely demonstrates the ability to carry out a multitude of processes in an enduring and parallel manner (walking, breathing, listening). Why, in contrast, is its capacity for allocating attention to control-demanding tasks - such a critical and powerful function - so limited? In the first part of my talk I will describe a computational framework that explains limitations of parallel processing in neural network architectures as the result of cross-talk between shared task representations. Using graph-theoretic analyses we show that the parallel processing (multitasking) capability of two-layer networks drops precipitously as a function of task pathway overlap, and scales highly sublinearly with network size. I will describe how this analysis can be applied to task representations encoded in neural networks or neuroimaging data, and show how it can be used to predict both concurrent and sequential multitasking performance in trained neural networks based on single task representations. Our results suggest that maximal parallel processing performance is achieved by segregating task pathways, by separating the representations on which they rely. However, there is a countervailing pressure for pathways to intersect: the re-use of representations to facilitate learning of new tasks. In the second part of my talk I will demonstrate a tradeoff between learning efficiency and parallel processing capability in neural networks. It can be shown that weight priors on learned task similarity improve learning speed and generalization but lead to strong constraints on parallel processing capability. These findings will be contrasted with an ongoing behavioral study by assessing learning and multitasking performance of human subjects across tasks with varying degrees of feature-overlap.
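  • Code sketch: a toy way to see why overlapping task pathways limit multitasking, far simpler than the graph-theoretic analysis described above: represent each task as an (input, output) pair and count how many tasks can run in parallel without sharing a module. Brute force, illustration only.
      import itertools

      def parallel_capacity(tasks):
          """tasks: list of (input_module, output_module) pairs. Return the size of the
          largest subset whose tasks share no input and no output module."""
          for r in range(len(tasks), 0, -1):
              for subset in itertools.combinations(tasks, r):
                  ins = [t[0] for t in subset]
                  outs = [t[1] for t in subset]
                  if len(set(ins)) == len(ins) and len(set(outs)) == len(outs):
                      return r
          return 0

      print(parallel_capacity([('A', 'X'), ('A', 'Y'), ('B', 'Z')]))  # shared input -> 2
      print(parallel_capacity([('A', 'X'), ('B', 'Y'), ('C', 'Z')]))  # disjoint pathways -> 3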

Nov 30, 2016

  • Speaker: Marcus Rohrbach
  • Time: 12:00
  • Affiliation: EECS, UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

March 1st, 2017

  • Speaker: Sahar Akram
  • Time: 12:00
  • Affiliation: Starkey Hearing Research Center
  • Host: Shariq
  • Status: Confirmed
  • Title: Real-Time & Adaptive Auditory Neural Processing
  • Abstract: Decoding the dynamics of brain activity underlying conscious behavior is one of the key questions in systems neuroscience. Sensory neurons, such as those in the auditory system, can undergo rapid and task-dependent changes in their response characteristics during attentive behavior, resulting in functional changes in the system over time. In order to quantify humans' conscious experience, neuroimaging techniques such as electroencephalography (EEG) and magnetoencephalography (MEG) are widely used to record neural activity from the brain with millisecond temporal resolution. Therefore, a dynamic decoding framework on par with the sampling resolution of EEG/MEG is crucial in order to better understand the neural correlates underlying sophisticated cognitive functions such as attention. I will talk about two recent attempts at real-time decoding of brain neural activity during a competing auditory attention task, using Bayesian hierarchical modeling and adaptive signal processing.

Mar 2, 2017

  • Speaker: Jozsef Fiser
  • Time: 12:00
  • Affiliation:
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

Mar 22, 2017

  • Speaker: Michael Frank
  • Time: 12:00
  • Affiliation: Magicore Systems
  • Host: Dylan
  • Status: Confirmed
  • Title: The Future of the Multi-core Platform: Task-Superscalar Extensions to the von Neumann Architecture and Optimization for Neural Networks
  • Abstract: Technology scaling carried computer science through the second half of the 20th century until single-CPU performance started leveling off, after which multi- and many-core processors, including GPUs, emerged as the substrate for high-performance computing. Mobile-market implementations followed this trend, and today you might be carrying a phone with more than 16 different processors. For power-efficiency reasons, many of the cores are specialized to perform limited functions (such as modem or connectivity control, graphics rendering, or future neural-network acceleration), with most mainstream phones containing four or more general-purpose processors. As Steve Jobs insightfully commented almost a decade ago, "The way the processor industry is going is to add more and more cores, but nobody knows how to program those things." Jobs was correct: programming these multiprocessor systems has become a challenge, and several programming models have been proposed in academia to address this issue. Power and thermals are also an ever-present thorn for mass-market applications. Through the years, CPUs based on the von Neumann architecture have fended off attacks from many directions; today, complex superscalar implementations execute multiple instructions each clock cycle, in parallel and out of order, keeping up the illusion of sequential processing. Recent research demonstrates, though, that augmenting the von Neumann architecture with a few established concepts from data-flow and task-parallel programming creates a credible and intuitive parallel architecture, enabling notable compute-efficiency improvements while retaining compatibility with the current mainstream. This talk will review the current state of the processor industry; after highlighting why we are running out of steam in instruction-level parallelism (ILP), I will outline the task-superscalar programming model as the "ring to rule them all" and provide insights into how this architecture can take advantage of special hardware acceleration for data-flow management and provide support for efficient neuromorphic computing.

April 12, 2017

  • Speaker: Aapo Hyvarinen
  • Time: 12:00
  • Affiliation: Gatsby/UCL
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

May 24, 2017

  • Speaker: Pierre Sermanet
  • Time: 12:00
  • Affiliation: Google Brain
  • Host: Brian
  • Status: confirmed
  • Title:
  • Abstract:

May 30, 2017

  • Speaker: Heiko Schutt
  • Time: 12:00
  • Affiliation: Univ Tubingen
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

June 7, 2017

  • Speaker: Saurabh Gupta
  • Time: 12:00
  • Affiliation: UC Berkeley
  • Host: Spencer
  • Status: confirmed
  • Title: Cognitive Mapping and Planning for Visual Navigation
  • Abstract: We introduce a novel neural architecture for navigation in novel environments that learns a cognitive map from first person viewpoints and plans a sequence of actions towards goals in the environment. The Cognitive Mapper and Planner (CMP) is based on two key ideas: a) a unified joint architecture for mapping and planning, such that the mapping is driven by the needs of the planner, and b) a spatial memory with the ability to plan given an incomplete set of observations about the world. CMP constructs a top-down belief map of the world and applies a differentiable neural net planner to produce the next action at each time step. The accumulated belief of the world enables the agent to track visited regions of the environment. Our experiments demonstrate that CMP outperforms both reactive strategies and standard memory-based architectures and performs well even in novel environments. Furthermore, we show that CMP can also achieve semantically specified goals, such as “go to a chair”. This is joint work with James Davidson, Sergey Levine, Rahul Sukthankar and Jitendra Malik.

June 14, 2017

  • Speaker: Madhow
  • Time: 12:00
  • Affiliation: UCSB
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

June 19, 2017

  • Speaker: Tali Tishby
  • Time: 12:00
  • Affiliation: Hebrew Univ.
  • Host: Bruno/Daniel Reichman
  • Status: confirmed
  • Title:
  • Abstract:

June 21, 2017

  • Speaker: Jasmine Collins
  • Time: 12:00
  • Affiliation: Google
  • Host: Brian
  • Status: confirmed
  • Title: Capacity and Trainability in Recurrent Neural Networks
  • Abstract: Two potential bottlenecks on the expressiveness of recurrent neural networks (RNNs) are their ability to store information about the task in their parameters, and to store information about the input history in their units. We show experimentally that all common RNN architectures achieve nearly the same per-task and per-unit capacity bounds with careful training, for a variety of tasks and stacking depths. They can store an amount of task information which is linear in the number of parameters, and is approximately 5 bits per parameter. They can additionally store approximately one real number from their input history per hidden unit. We further find that for several tasks it is the per-task parameter capacity bound that determines performance. These results suggest that many previous results comparing RNN architectures are driven primarily by differences in training effectiveness, rather than differences in capacity. Supporting this observation, we compare training difficulty for several architectures, and show that vanilla RNNs are far more difficult to train, yet have slightly higher capacity. Finally, we propose two novel RNN architectures, one of which is easier to train than the LSTM or GRU for deeply stacked architectures.
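  • Code sketch: a back-of-the-envelope reading of the "approximately 5 bits per parameter" estimate; parameter counts for a vanilla RNN and an LSTM of the same (arbitrarily chosen) size, with the implied task capacity.
      def rnn_params(n_in, n_h):
          return n_in * n_h + n_h * n_h + n_h          # input weights, recurrent weights, bias

      def lstm_params(n_in, n_h):
          return 4 * (n_in * n_h + n_h * n_h + n_h)    # four gates, each with its own weights and bias

      for name, n in [("vanilla RNN", rnn_params(64, 128)), ("LSTM", lstm_params(64, 128))]:
          print(f"{name}: {n} parameters, roughly {5 * n} bits of task capacity per the talk's estimate")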

2015/16 academic year

July 21, 2015

  • Speaker: Felix Effenberger
  • Affiliation:
  • Host: Chris H.
  • Status: confirmed
  • Title:
  • Abstract:

July 22, 2015

  • Speaker: Lav Varshney
  • Affiliation: Urbana-Champaign
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

July 23, 2015

  • Speaker: Xuemin Wei
  • Affiliation: Univ Penn
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

July 29, 2015

  • Speaker: Gonzalo Otazu
  • Affiliation: Cold Spring Harbor Laboratory, Long Island, NY
  • Host: Mike D
  • Status: Confirmed
  • Title: The Role of Cortical Feedback in Olfactory Processing
  • Abstract: The olfactory bulb receives rich glutamatergic projections from the piriform cortex. However, the dynamics and importance of these feedback signals remain unknown. In the first part of this talk, I will present data from multiphoton calcium imaging of cortical feedback in the olfactory bulb of awake mice. Responses of feedback boutons were sparse, odor specific, and often outlasted stimuli by several seconds. Odor presentation either enhanced or suppressed the activity of boutons. However, any given bouton responded with stereotypic polarity across multiple odors, preferring either enhancement or suppression. Inactivation of piriform cortex increased odor responsiveness and pairwise similarity of mitral cells but had little impact on tufted cells. We propose that cortical feedback differentially impacts these two output channels of the bulb by specifically decorrelating mitral cell responses to enable odor separation. In the second part of the talk I will introduce a computational model of odor identification in natural scenes that uses cortical feedback and how the model predictions match our experimental data.

Aug 19, 2015

  • Speaker: Wujie Zhang
  • Affiliation: Columbia
  • Host: Bruno/Michael Yartsev
  • Status: Confirmed
  • Title:
  • Abstract:

Sept 2, 2015

  • Speaker: Jeremy Maitin-Shepard
  • Affiliation: Computer Science, UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title: Combinatorial Energy Learning for Image Segmentation
  • Abstract: Recent advances in volume electron microscopy make it possible to image neuronal tissue volumes containing hundreds of thousands of neurons at sufficient resolution to discern even the finest neuronal processes. Accurate 3-D segmentation of these processes, densely packed in petavoxel-scale volumes, is the key bottleneck in reconstructing large-scale neural circuits.

Sept 8, 2015

  • Speaker: Jennifer Hasler
  • Affiliation: Georgia Tech
  • Host: Bruno/Mika
  • Status: confirmed
  • Title:
  • Abstract:

October 29, 2015

  • Speaker: Garrett Kenyon
  • Affiliation: Los Alamos National Laboratory
  • Host: Dylan
  • Status: confirmed
  • Title: A Deconvolutional Competitive Algorithm (DCA)
  • Abstract: The Locally Competitive Algorithm (LCA) is a neurally-plausible sparse solver based on lateral inhibition between leaky integrator neurons. LCA accounts for many linear and nonlinear response properties of V1 simple cells, including end-stopping and contrast-invariant orientation tuning. Here, we describe a convolutional implementation of LCA in which a column of feature vectors is replicated with a stride that is much smaller than the diameter of the corresponding kernels, allowing the construction of dictionaries that are many times more overcomplete than without replication. Using a local Hebbian rule that minimizes sparse reconstruction error, we are able to learn representations from unlabeled imagery, including monocular and stereo video streams, that in some cases support near state-of-the-art performance on object detection, action classification and depth estimation tasks, with a simple linear classifier. We further describe a scalable approach to building a hierarchy of convolutional LCA layers, which we call a Deconvolutional Competitive Algorithm (DCA). All layers in a DCA are trained simultaneously and all layers contribute to a single image reconstruction, with each layer deconvolving its representation through all lower layers back to the image plane. We show that a 3-layer DCA trained on short video clips obtained from hand-held cameras exhibits a clear segregation of image content, with features in the top layer reconstructing large-scale structures while features in the middle and bottom layers reconstruct progressively finer details. Lastly, we describe PetaVision, an open source, cloud-friendly, high-performance neural simulation toolbox that was used to perform the numerical studies presented here.
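  • Code sketch: the LCA referenced here (Rozell et al., Neural Comp, 2008) has compact dynamics; the non-convolutional version below shows the feedforward drive, lateral inhibition and soft threshold, and is only a sketch, not the PetaVision implementation.
      import numpy as np

      def lca(x, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
          """Locally Competitive Algorithm: x is the signal (d,), Phi a dictionary (d, k)
          with unit-norm columns; returns a sparse code for x."""
          b = Phi.T @ x                               # feedforward drive
          G = Phi.T @ Phi - np.eye(Phi.shape[1])      # lateral inhibition between overlapping features
          u = np.zeros(Phi.shape[1])                  # membrane potentials
          soft = lambda v: np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
          for _ in range(n_steps):
              u += (dt / tau) * (b - u - G @ soft(u))
          return soft(u)

      rng = np.random.default_rng(0)
      Phi = rng.normal(size=(64, 256))
      Phi /= np.linalg.norm(Phi, axis=0)
      code = lca(rng.normal(size=64), Phi)            # sparse code for a random signal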

Nov 18, 2015

  • Speaker: Hillel Adesnik
  • Affiliation: Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title:

Nov 17, 2015

  • Speaker: Manuel Lopez
  • Affiliation:
  • Host: Fritz
  • Status: confirmed
  • Title:
  • Abstract:

Dec 2, 2015

  • Speaker: Steven Brumby
  • Affiliation: Descartes Labs
  • Host: Dylan
  • Status: confirmed
  • Title: Seeing the Earth in the Cloud
  • Abstract: The proliferation of transistors has increased the performance of computing systems by over a factor of a million in the past 30 years, and is also dramatically increasing the amount of data in existence, driving improvements in sensor, communication and storage technology. Multi-decadal Earth and planetary remote sensing global datasets at the petabyte scale (8×10^15 bits) are now available in commercial clouds, and new satellite constellations are planning to generate petabytes of images per year, providing daily global coverage at a few meters per pixel. Cloud storage with adjacent high-bandwidth compute, combined with recent advances in neuroscience-inspired machine learning for computer vision, is enabling understanding of the world at a scale and at a level of granularity never before feasible. We report here on a computation processing over a petabyte of compressed raw data from 2.8 quadrillion pixels (2.8 petapixels) acquired by the US Landsat and MODIS programs over the past 40 years. Using commodity cloud computing resources, we convert the imagery to a calibrated, georeferenced, multiresolution tiled format suited for machine-learning analysis. We believe ours is the first application to process, in less than a day, on generally available resources, over a petabyte of scientific image data. We report on work using this reprocessed dataset for experiments demonstrating country-scale food production monitoring, an indicator for famine early warning.

Dec 14, 2015

  • Speaker: Bill Softky
  • Affiliation:
  • Host: Bruno
  • Status: confirmed
  • Title: Screen addiction - informal Redwood group seminar

Dec 16, 2015

  • Speaker: Mike Landy
  • Affiliation: Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title:

Feb 3, 2016

  • Speaker: Ping-Chen Huang
  • Affiliation: Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title:

Feb 17, 2016

  • Speaker: Andrew Saxe
  • Affiliation: Harvard
  • Host: Jesse
  • Status: confirmed
  • Title: Hallmarks of Deep Learning in the Brain

Feb 24, 2016

  • Speaker: Miguel Carreira-Perpinan
  • Affiliation: UC Merced
  • Host: Bruno
  • Status: confirmed
  • Title: TBA

Mar 1, 2016

  • Speaker: Leon Gatys
  • Affiliation: Univ Tubingen
  • Host: Bruno
  • Status: confirmed
  • Title:

Mar 7-9, 2016

  • NICE workshop

Mar 9, 2016

  • Tatiana Engel - HWNI job talk at 12:00

Mar 16, 2016

  • Talia Lerner - HWNI job talk at 12:00

Mar 23, 2016

  • Speaker: Kwabena Boahen
  • Affiliation: Stanford
  • Host: Max Kanwal/Bruno
  • Status: confirmed
  • Title:

April 11, 2016

  • Speaker: Hao Su
  • Time: 12:00
  • Affiliation: Geometric Computing Lab and Artificial Intelligence Lab, Stanford University
  • Host: Yubei
  • Status: confirmed
  • Title: [Tentative] Joint Analysis for 2D Images and 3D shapes
  • Abstract: Coming

May 04, 2016

  • Speaker: Zhengya Zhang
  • Time: 12:00
  • Affiliation: Electrical Engineering and Computer Science, University of Michigan
  • Host: Dylan, Bruno
  • Status: Confirmed
  • Title: Sparse Coding ASIC Chips for Feature Extraction and Classification
  • Abstract: Hardware-based computer vision accelerators will be an essential part of future mobile and autonomous devices to meet the low power and real-time processing requirement. To realize a high energy efficiency and high throughput, the accelerator architecture can be massively parallelized and tailored to the underlying algorithms, which is an advantage over software-based solutions and general-purpose hardware. In this talk, I will present three application-specific integrated circuit (ASIC) chips that implement the sparse and independent local network (SAILnet) algorithm and the locally competitive algorithm (LCA) for feature extraction and classification. Two of the chips were designed using an array of leaky integrate-and-fire neurons. Sparse activations of the neurons make possible an efficient grid-ring architecture to deliver an image processing throughput of 1 G pixel/s using only 200 mW. The third chip was designed using a convolution approach. Sparsity is again an important factor that enabled the use of sparse convolvers to achieve an effective performance of 900 G operations/s using less than 150 mW.

May 18, 2016

  • Speaker: Melanie Mitchell
  • Affiliation: Portland State University and Santa Fe Institute
  • Host: Dylan
  • Time: 12:00
  • Status: confirmed
  • Title: Using Analogy to Recognize Visual Situations
  • Abstract: Enabling computers to recognize abstract visual situations remains a hard open problem in artificial intelligence. No machine vision system comes close to matching human ability at identifying the contents of images or visual scenes, or at recognizing abstract similarity between different scenes, even though such abilities pervade human cognition. In this talk I will describe my research on getting computers to flexibly recognize visual situations by integrating low-level vision algorithms with an agent-based model of higher-level concepts and analogy-making.
  • Bio: Melanie Mitchell is Professor of Computer Science at Portland State University, and External Professor and Member of the Science Board at the Santa Fe Institute. She received a Ph.D. in Computer Science from the University of Michigan. Her dissertation, in collaboration with her advisor Douglas Hofstadter, was the development of Copycat, a computer program that makes analogies. She is the author or editor of five books and over 70 scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her most recent book, Complexity: A Guided Tour (Oxford, 2009), won the 2010 Phi Beta Kappa Science Book Award. It was also named by Amazon.com as one of the ten best science books of 2009, and was longlisted for the Royal Society's 2010 book prize. Melanie directs the Santa Fe Institute's Complexity Explorer project, which offers online courses and other educational resources related to the field of complex systems.

June 8, 2016

  • Speaker: Kris Bouchard
  • Time: 12:00
  • Affiliation: LBNL
  • Host: Fritz
  • Status: Confirmed
  • Title: The union of intersections method
  • Abstract:

June 15, 2016

  • Speaker: James Blackmon
  • Time: 12:00
  • Affiliation: San Francisco State University
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

2014/15 academic year

2 July 2014

  • Speaker: Kelly Clancy
  • Affiliation: Feldman lab
  • Host: Guy
  • Status: confirmed
  • Title: Volitional control of neural assemblies in L2/3 of motor and somatosensory cortices
  • Abstract: I'll be talking about a joint effort between the Feldman, Carmena and Costa labs to study abstract task learning by small neuronal assemblies in intact networks. Brain-machine interfaces are a unique tool for studying learning, thanks to the direct mapping between neural activity and reward. We trained mice to operantly control an auditory cursor using spike-related calcium signals recorded with two-photon imaging in motor and somatosensory cortex, allowing us to assess the effects of learning with great spatial detail. Mice rapidly learned to modulate activity in layer 2/3 neurons, evident both across and within sessions. Interestingly, even neurons that exhibited very low or no spontaneous spiking--so-called 'silent' cells that are invisible to electrode-based techniques--could be behaviorally up-modulated for task performance. Learning was accompanied by modifications of firing correlations in spatially localized networks at fine scales.

23 July 2014

  • Speaker: Gautam Agarwal
  • Affiliation: UC Berkeley/Champalimaud
  • Host: Friedrich Sommer
  • Status: confirmed
  • Title: Unsolved Mysteries of Hippocampal Dynamics
  • Abstract: Two radically different forms of electrical activity can be observed in the rat hippocampus: spikes and local field potentials (LFPs). Hippocampal pyramidal neurons are mostly silent, yet spike vigorously as the subject encounters particular locations in its environment. In contrast, LFPs appear to lack place-selectivity, persisting regardless of the rat's location. Recently, we found that in fact one can recover from LFPs the spatial information present in the underlying neuronal population, showing how these two signals are two sides of the same coin. Nonetheless, there are many aspects of the LFP that remain mysterious. I will review several observations and explanatory gaps which await further study. These include: the relationship of LFP patterns to anatomy; the elusive structure of gamma waves; complex forms of cross-frequency coupling; variations in LFP patterns seen when the rat explores its world more freely; reconciling the memory and navigation roles of the hippocampus.

6 Aug 2014

  • Speaker: Georg Martius
  • Affiliation: Max Planck Institute, Leipzig
  • Host: Fritz Sommer
  • Status: confirmed
  • Title: Information driven self-organization of robotic behavior
  • Abstract: Autonomy is a puzzling phenomenon in nature and a major challenge in the world of artifacts. A key feature of autonomy in both natural and artificial systems is seen in the ability for independent exploration. In animals and humans, the ability to modify one's own pattern of activity is not only an indispensable trait for adaptation and survival in new situations; it also provides a learning system with novel information for improving its cognitive capabilities, and it is essential for development. Efficient exploration in high-dimensional spaces is a major challenge in building learning systems. We propose to implement exploration as a deterministic law derived from maximizing an information quantity. More specifically, we use the predictive information of the sensor process (of a robot) to obtain an update rule (exploration dynamics) for the controller parameters. To be adequate for robotics applications, the non-stationary nature of the underlying time series has to be taken into account, which we do by proposing the time-local predictive information (TiPI). Importantly, the exploration dynamics is derived analytically, and by this we link information theory and dynamical systems. Without a random component, the change in the parameters is deterministically given as a function of the states in a certain time window. For an embodied system this means in particular that constraints, responses and current knowledge of the dynamical interaction with the environment can directly be used to advance further exploration. Randomness is replaced with spontaneity, which we demonstrate to restrict the search space automatically to the physically relevant dimensions. Its effectiveness will be presented with various experiments on high-dimensional robotic systems, and we argue that this is a promising way to avoid the curse of dimensionality. This talk describes joint work with Ralf Der and Nihat Ay.
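  • Code sketch: under a Gaussian approximation the predictive information of a sensor stream has a closed form; the estimator below illustrates the quantity on a toy 1-D signal (window length and regularization are arbitrary choices, and this is not the TiPI derivation used in the talk).
      import numpy as np

      def gaussian_predictive_information(x, k=5):
          """Estimate I(past; future) of a 1-D time series using windows of k past
          and k future samples, assuming joint Gaussianity."""
          past = np.stack([x[i:i + k] for i in range(len(x) - 2 * k)])
          future = np.stack([x[i + k:i + 2 * k] for i in range(len(x) - 2 * k)])
          joint = np.hstack([past, future])
          logdet = lambda a: np.linalg.slogdet(np.cov(a, rowvar=False) + 1e-9 * np.eye(a.shape[1]))[1]
          return 0.5 * (logdet(past) + logdet(future) - logdet(joint))

      x = np.sin(0.1 * np.arange(2000)) + 0.1 * np.random.default_rng(0).normal(size=2000)
      print(gaussian_predictive_information(x))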

15 Aug 2014

  • Speaker: Juergen Schmidhuber
  • Affiliation: IDSIA, Switzerland
  • Host: James/Shariq
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

2 Sept 2014

  • Speaker: Oriol Vinyals
  • Affiliation: Google
  • Host: Guy
  • Status: confirmed
  • Title: Machine Translation with Long-Short Term Memory Models
  • Abstract: Supervised large deep neural networks achieved good results on speech recognition and computer vision. Although very successful, deep neural networks can only be applied to problems whose inputs and outputs can be conveniently encoded with vectors of fixed dimensionality - but cannot easily be applied to problems whose inputs and outputs are sequences. In this work, we show how to use a large deep Long Short-Term Memory (LSTM) model to solve domain-agnostic supervised sequence to sequence problems with minimal manual engineering. Our model uses one LSTM to map the input sequence to a vector of a fixed dimensionality and another LSTM to map the vector to the output sequence. We applied our model to a machine translation task and achieved encouraging results. On the WMT'14 translation task from English to French, a model combination of 6 large LSTMs achieves a BLEU score of 32.3 (where a larger score is better). For comparison, a strong standard statistical MT baseline achieves a BLEU score of 33.3. When we use our LSTM to rescore the n-best lists produced by the SMT baseline, we achieve a BLEU score of 36.3, which is a new state of the art. This is joint work with Ilya Sutskever and Quoc Le.
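  • Code sketch: the two-LSTM architecture described above (encode the source sentence into a fixed-size state, then decode the target conditioned on it) in a modern framework; vocabulary sizes and dimensions are made up, and this single-layer model is an illustration, not the ensemble of large LSTMs reported in the abstract.
      import torch
      import torch.nn as nn

      class Seq2Seq(nn.Module):
          def __init__(self, src_vocab=10000, tgt_vocab=10000, dim=256):
              super().__init__()
              self.src_emb = nn.Embedding(src_vocab, dim)
              self.tgt_emb = nn.Embedding(tgt_vocab, dim)
              self.encoder = nn.LSTM(dim, dim, batch_first=True)
              self.decoder = nn.LSTM(dim, dim, batch_first=True)
              self.proj = nn.Linear(dim, tgt_vocab)

          def forward(self, src_ids, tgt_ids):
              _, state = self.encoder(self.src_emb(src_ids))        # fixed-size summary of the source
              out, _ = self.decoder(self.tgt_emb(tgt_ids), state)   # teacher-forced decoding
              return self.proj(out)                                 # logits over the target vocabulary

      model = Seq2Seq()
      logits = model(torch.randint(0, 10000, (4, 12)), torch.randint(0, 10000, (4, 15)))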

19 Sept 2014

  • Speaker: Gary Marcus
  • Affiliation: NYU
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

24 Sept 2014

  • Speaker: Alyosha Efros
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract:

30 Sep 2014

  • Speaker: Alejandro Bujan
  • Affiliation:
  • Host: Fritz
  • Status: confirmed
  • Title: Propagation and variability of evoked responses: the role of correlated inputs and oscillations
  • Abstract:

8 Oct 2014

  • Speaker: Siyu Zhang
  • Affiliation: UC Berkeley
  • Host: Karl
  • Status: confirmed
  • Title: Long-range and local circuits for top-down modulation of visual cortical processing
  • Abstract:

15 Oct 2014

  • Speaker: Tamara Broderick
  • Affiliation: UC Berkeley
  • Host: Yvonne/James
  • Status: confirmed
  • Title: Feature allocations, probability functions, and paintboxes
  • Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability functions and the Kingman paintbox). These characterizations support an elegant nonparametric Bayesian framework for clustering in which the number of clusters is not assumed to be known a priori. We establish an analogous characterization for feature allocation; we define notions of "exchangeable feature probability functions" and "feature paintboxes" that lead to a Bayesian framework that does not require the number of features to be fixed a priori. The second step is a computational one. Rather than appealing to Markov chain Monte Carlo for Bayesian inference, we develop a method to transform Bayesian methods for feature allocation (and other latent structure problems) into optimization problems with objective functions analogous to K-means in the clustering setting. These yield approximations to Bayesian inference that are scalable to large inference problems.
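  • Code sketch: a standard example of an exchangeable feature allocation is the Indian buffet process, in which each object can possess several features at once; the textbook sampler below is given only to make the object concrete, and is not necessarily the characterization developed in the talk.
      import numpy as np

      def indian_buffet_process(n_objects, alpha=2.0, seed=0):
          """Sample a binary object-by-feature matrix Z: objects join existing features
          in proportion to their popularity and also introduce new features."""
          rng = np.random.default_rng(seed)
          Z = np.zeros((n_objects, 0), dtype=int)
          for n in range(1, n_objects + 1):
              if Z.shape[1] > 0:
                  counts = Z.sum(axis=0)
                  Z[n - 1, :] = rng.random(Z.shape[1]) < counts / n
              k_new = rng.poisson(alpha / n)              # brand-new features for object n
              new_cols = np.zeros((n_objects, k_new), dtype=int)
              new_cols[n - 1, :] = 1
              Z = np.hstack([Z, new_cols])
          return Z

      print(indian_buffet_process(5))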

29 Oct 2014

  • Speaker: Ken Nakayama
  • Affiliation: Harvard
  • Host: Bruno
  • Status: Confirmed
  • Title: Topics in higher level visuo-motor control
  • Abstract: TBA

5 Nov 2014 - **BVLC retreat**

20 Nov 2014

  • Speaker: Haruo Hosoya
  • Affiliation: ATR Institute, Japan
  • Host: Bruno
  • Status: tentative
  • Title: TBA
  • Abstract:

9 Dec 2014

  • Speaker: Dirk DeRidder
  • Affiliation: Dunedin School of Medicine, University of Otago, New Zealand
  • Host: Bruno/Walter Freeman
  • Status: confirmed
  • Title: The Bayesian brain, phantom percepts and brain implants
  • Abstract: TBA

January 14, 2015

  • Speaker: Kevin O'Regan
  • Affiliation: CNRS - Université Paris Descartes
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

January 21, 2015

  • Speaker: Adrienne Fairhall
  • Affiliation: University of Washington
  • Host: Mike Schachter
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

January 26, 2015

  • Speaker: Abraham Peled
  • Affiliation: Mental Health Center, 'Technion' Israel Institute of Technology
  • Host: Bruno
  • Status: confirmed
  • Title: Clinical Brain Profiling: A Neuro-Computational psychiatry
  • Abstract: TBA

January 28, 2015

  • Speaker: Rich Ivry
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title: Embodied Decision Making: System interactions in sensorimotor adaptation and reinforcement learning
  • Abstract:

February 11, 2015

  • Speaker: Mark Lescroart
  • Affiliation: UC Berkeley
  • Host: Karl
  • Status: tentative
  • Title:
  • Abstract:

February 25, 2015

  • Speaker: Steve Chase
  • Affiliation: CMU
  • Host: Bruno
  • Status: confirmed
  • Title: Joint Redwood/CNEP seminar
  • Abstract:

March 3, 2015

  • Speaker: Andreas Herz
  • Affiliation: Bernstein Center, Munich
  • Host: Bruno/Fritz
  • Status: confirmed
  • Title:
  • Abstract:

March 3, 2015 - 4:00

  • Speaker: James Cooke
  • Affiliation: Oxford
  • Host: Mike Deweese
  • Status: confirmed
  • Title: Neural Circuitry Underlying Contrast Gain Control in Primary Auditory Cortex
  • Abstract:

March 4, 2015

  • Speaker: Bill Sprague
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title: V1 disparity tuning and the statistics of disparity in natural viewing
  • Abstract:

March 11, 2015

  • Speaker: Jozsef Fiser
  • Affiliation: Central European University
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

April 1, 2015

  • Speaker: Saeed Saremi
  • Affiliation: Salk Inst
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

April 15, 2015

  • Speaker: Zahra M. Aghajan
  • Affiliation: UCLA
  • Host: Fritz
  • Status: confirmed
  • Title: Hippocampal Activity in Real and Virtual Environments
  • Abstract:

May 7, 2015

  • Speaker: Santani Teng
  • Affiliation: MIT
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract:

May 13, 2015

  • Speaker: Harri Valpola
  • Affiliation: ZenRobotics
  • Host: Brian
  • Status: Tentative
  • Title: TBA
  • Abstract:

June 24, 2015

  • Speaker: Kendrick Kay
  • Affiliation: Department of Psychology, Washington University in St. Louis
  • Host: Karl
  • Status: Confirmed
  • Title: Using functional neuroimaging to reveal the computations performed by the human visual system
  • Abstract: Visual perception is the result of a complex set of computational transformations performed by neurons in the visual system. Functional magnetic resonance imaging (fMRI) is ideally suited for identifying these transformations, given its excellent spatial resolution and ability to monitor activity across the numerous areas of visual cortex. In this talk, I will review past research in which we used fMRI to develop increasingly accurate models of the stimulus transformations occurring in early and intermediate visual areas. I will then describe recent research in which we successfully extend this approach to high-level visual areas involved in perception of visual categories (e.g. faces) and demonstrate how top-down attention modulates bottom-up stimulus representations. Finally, I will discuss ongoing research targeting regions of ventral temporal cortex that are essential for skilled reading. Our model-based approach, combined with high-field laminar measurements, is expected to provide an integrated picture of how bottom-up stimulus transformations and top-down cognitive factors interact to support rapid and accurate word recognition. Development of quantitative models and associated experimental paradigms may help us understand and diagnose impairments in neural processing that underlie visual disorders such as dyslexia and prosopagnosia.

2013/14 academic year

9 Oct 2013

  • Speaker: Ekaterina Brocke
  • Affiliation: KTH University, Stockholm, Sweden
  • Host: Tony
  • Status: confirmed
  • Title: Multiscale modeling in Neuroscience: first steps towards multiscale co-simulation tool development.
  • Abstract: Multiscale modeling and simulation attract an increasing number of neuroscientists who study how different levels of organization (networks of neurons, cellular/subcellular levels) interact with each other across multiple scales of space and time to mediate different brain functions. Different scales are usually described by different physical and mathematical formalisms, which makes the integration non-trivial. In this talk, I will discuss key phenomena in neuroscience that can be addressed using subcellular/cellular models and possible approaches to performing multiscale simulations, in particular a co-simulation method. I will also introduce several multiscale "toy" models of cellular/subcellular levels that were developed with the aim of understanding the numerical and technical problems which might appear during co-simulation. Finally, the first steps made towards the development of a multiscale co-simulation tool will be presented.

29 Oct 2013 - note: 4:00

  • Speaker: Mitya Chklovskii
  • Affiliation: HHMI/Janelia Farm
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

30 Oct 2013

  • Speaker: Ilya Nemenman
  • Affiliation: Emory University, Departments of Physics and Biology
  • Host: Mike DeWeese
  • Status: confirmed
  • Title: Large N in neural data -- expecting the unexpected.
  • Abstract: Recently it has become possible to directly measure simultaneous collective states of many biological components, such as neural activities, genetic sequences, or gene expression profiles. These data are revealing striking results, suggesting, for example, that biological systems are tuned to criticality, and that effective models of these systems based on only pairwise interactions among constitutive components provide surprisingly good fits to the data. We will explore a handful of simplified theoretical models, largely focusing on statistical mechanics of Ising spins, that suggest plausible explanations for these observations. Specifically, I will argue that, at least in certain contexts, these intriguing observations should be expected in multivariate interacting data in the thermodynamic limit of many interacting components.
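  • Code sketch: the "effective models ... based on only pairwise interactions" are pairwise maximum-entropy (Ising) models; the Metropolis sampler below draws states from such a model, with random couplings used purely for illustration.
      import numpy as np

      def sample_ising(J, h, n_sweeps=1000, seed=0):
          """Metropolis sampling of P(s) proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j),
          with s_i in {-1, +1}; J is symmetric with zero diagonal."""
          rng = np.random.default_rng(seed)
          n = len(h)
          s = rng.choice([-1, 1], size=n)
          for _ in range(n_sweeps * n):
              i = rng.integers(n)
              dE = 2 * s[i] * (h[i] + J[i] @ s)           # energy change from flipping s_i
              if dE <= 0 or rng.random() < np.exp(-dE):
                  s[i] = -s[i]
          return s

      rng = np.random.default_rng(1)
      n = 20
      J = rng.normal(scale=0.1, size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0.0)
      h = rng.normal(scale=0.1, size=n)
      print(sample_ising(J, h))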

31 Oct 2013

  • Speaker: Oriol Vinyals
  • Affiliation: UC Berkeley
  • Host: Bruno/Brian
  • Status: confirmed
  • Title: Beyond Deep Learning: Scalable Methods and Models for Learning
  • Abstract: In this talk I will briefly describe several techniques I explored in my thesis that improve how to efficiently model signal representations and learn useful information from them. The building block of my dissertation is based on machine learning approaches to classification, where a (typically non-linear) function is learned from labeled examples to map from signals to some useful information (e.g. an object class present in an image, or a word present in an acoustic signal). One of the motivating factors of my work has been advances in neural networks with deep architectures (which have led to the terminology "deep learning") and their state-of-the-art performance in acoustic modeling and object recognition -- the main focus of this thesis. In my work, I have contributed to both the learning (or training) of such architectures through faster and more robust optimization techniques, and also to the simplification of the deep architecture model to an approach that is simple to optimize. Furthermore, I derived a theoretical bound showing a fundamental limitation of shallow architectures based on sparse coding (which can be seen as a one-hidden-layer neural network), thus justifying the need for deeper architectures, while also empirically verifying these architectural choices on speech recognition. Many of my contributions have been used in a wide variety of applications, products and datasets as a result of many collaborations within ICSI and Berkeley, but also at Microsoft Research and Google Research.

6 Nov 2013

  • Speaker: Garrett T. Kenyon
  • Affiliation: Los Alamos National Laboratory, The New Mexico Consortium
  • Host: Dylan Paiton
  • Status: Confirmed
  • Title: Using Locally Competitive Algorithms to Model Top-Down and Lateral Interactions
  • Abstract: Cortical connections consist of feedforward, feedback and lateral pathways. Infragranular layers project down the cortical hierarchy to both supra- and infragranular layers at the previous processing level, while the neurons in supragranular layers are linked by extensive long-range lateral projections that cross multiple cortical columns. However, most functional models of visual cortex only account for feedforward connections. Additionally, most models of visual cortex fail to account both for the thalamic projections to non-striate areas and the reciprocal connections from extrastriate areas back to the thalamus. In this talk, I will describe how a modified Locally Competitive Algorithm (LCA; Rozell et al, Neural Comp, 2008) can be used as a unifying framework for exploring the role of top-down and lateral cortical pathways within the context of deep, sparse, generative models. I will also describe an open source software tool called PetaVision that can be used to implement and execute hierarchical LCA-based models on multi-core, multi-node computer platforms without requiring specific knowledge of parallel-programming constructs.

14 Nov 2013 (note: Thursday), ***12:30pm***

  • Speaker: Geoffrey J Goodhill
  • Affiliation: Queensland Brain Institute and School of Mathematics and Physics, The University of Queensland, Australia
  • Host: Mike DeWeese
  • Status: Confirmed
  • Title: Computational principles of neural wiring development
  • Abstract: Brain function depends on precise patterns of neural wiring. An axon navigating to its target must make guidance decisions based on noisy information from molecular cues in its environment. I will describe a combination of experimental and computational work showing that (1) axons may act as ideal observers when sensing chemotactic gradients, (2) the complex influence of calcium and cAMP levels on guidance decisions can be predicted mathematically, (3) the morphology of growth cones at the axonal tip can be understood in terms of just a few eigenshapes, and remarkably these shapes oscillate in time with periods ranging from minutes to hours. Together this work may shed light on how neural wiring goes wrong in some developmental brain disorders, and how best to promote appropriate regrowth of axons after injury.

4 Dec 2013

  • Speaker: Zhenwen Dai
  • Affiliation: FIAS, Goethe University Frankfurt, Germany.
  • Host: Georgios Exarchakis
  • Status: Confirmed
  • Title: What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach
  • Abstract: We study optimal image encoding based on a generative approach with non-linear feature combinations and explicit position encoding. By far most approaches to unsupervised learning of visual features, such as sparse coding or ICA, account for translations by representing the same features at different positions. Some earlier models used a separate encoding of features and their positions to facilitate invariant data encoding and recognition. All probabilistic generative models with explicit position encoding have so far assumed a linear superposition of components to encode image patches. Here, we for the first time apply a model with non-linear feature superposition and explicit position encoding for patches. By avoiding linear superpositions, the studied model represents a closer match to component occlusions which are ubiquitous in natural images. In order to account for occlusions, the non-linear model encodes patches qualitatively very different from linear models by using component representations separated into mask and feature parameters. We first investigated encodings learned by the model using artificial data with mutually occluding components. We find that the model extracts the components, and that it can correctly identify the occlusive components with the hidden variables of the model. On natural image patches, the model learns component masks and features for typical image components. By using reverse correlation, we estimate the receptive fields associated with the model’s hidden units. We find many Gabor-like or globular receptive fields as well as fields sensitive to more complex structures. Our results show that probabilistic models that capture occlusions and invariances can be trained efficiently on image patches, and that the resulting encoding represents an alternative model for the neural encoding of images in the primary visual cortex.

11 Dec 2013

  • Speaker: Kai Siedenburg
  • Affiliation: UC Davis, Petr Janata's Lab.
  • Host: Jesse Engel
  • Status: Confirmed
  • Title: Characterizing Short-Term Memory for Musical Timbre
  • Abstract: Short-term memory is a cognitive faculty central for the apprehension of music and speech. Little is known, however, about memory for musical timbre despite its "sisterhood" with speech; after all, speech can be regarded as sequencing of vocal timbre. Past research has isolated many characteristic effects of verbal memory. Are these also in play for non-vocal timbre sequences? We studied this question by considering short-term memory for serial order. Using timbres and dissimilarity data from McAdams et al. (Psych. Research, 1995), we employed a same/different discrimination paradigm. Experiment 1 (N = 30 MU + 30 nonMU) revealed effects of sequence length and timbral dissimilarity of items, as well as an interaction of musical training and pitch variability: in contrast to musicians, non-musicians' performance was impaired by simultaneous changes in pitch, compared to a constant pitch baseline. Experiment 2 (N = 22) studied whether musicians' memory for timbre sequences was independent of pitch irrespective of the degree of complexity of pitch progressions. Comparing sequences with pitch changing within and across standard and comparison to a constant pitch baseline, performance was now clearly impaired for the variable pitch condition. Experiment 3 (N = 22) showed primacy and recency effects for musicians, and reproduced a positive effect of timbral heterogeneity of sequences. Our findings demonstrate the presence of hallmark effects of verbal memory such as similarity, word length, and primacy/recency for the domain of non-vocal timbre, and suggest that memory for speech and non-vocal timbre sequences might to a large extent share underlying mechanisms.

12 Dec 2013

  • Speaker: Matthias Bethge
  • Affiliation: University of Tubingen
  • Host: Bruno
  • Status: tentative
  • Title: TBA
  • Abstract: TBA

22 Jan 2014

  • Speaker: Thomas Martinetz
  • Affiliation: Univ Luebeck
  • Host: Bruno/Fritz
  • Status: confirmed
  • Title: Orthogonal Sparse Coding and Sensing
  • Abstract: Sparse coding has been a very successful concept, since many natural signals have the property of being sparse in some dictionary (basis). Some natural signals are even sparse in an orthogonal basis, most prominently natural images: they are sparse in a respective wavelet transform. An encoding in an orthogonal basis has a number of advantages, e.g., finding the optimal coding coefficients is simply a projection instead of being NP-hard.

Given some data, we want to find the orthogonal basis which provides the sparsest code. This problem can be seen as a generalization of Principal Component Analysis. We present an algorithm, Orthogonal Sparse Coding (OSC), which is able to find this basis very robustly. On natural images, it compresses on the level of JPEG, but can adapt to arbitrary and special data sets and achieve significant improvements. With the property of being sparse in some orthogonal basis, we show how signals can be sensed very efficiently in an hierarchical manner with at most k log D sensing actions. This hierarchical sensing might relate to the way we sense the world, with interesting applications in active vision.
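  • Code sketch: the advantage noted above, that coding in an orthogonal basis is a projection rather than an NP-hard search, can be seen directly; the snippet uses a random orthogonal basis as a stand-in and does not show the OSC learning algorithm itself.
      import numpy as np

      def sparse_code_orthogonal(x, W, k):
          """For an orthogonal basis W (rows are basis vectors), the best k-sparse code is the
          projection W @ x with all but the k largest-magnitude coefficients set to zero."""
          a = W @ x
          keep = np.argsort(np.abs(a))[-k:]
          sparse = np.zeros_like(a)
          sparse[keep] = a[keep]
          return sparse

      rng = np.random.default_rng(0)
      W, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # a random orthogonal basis
      code = sparse_code_orthogonal(rng.normal(size=64), W, k=8)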

29 Jan 2014

  • Speaker: David Klein
  • Affiliation: Audience
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

5 Feb 2014 (leave open for Barth/Martinetz seminar)

12 Feb 2014

  • Speaker: Ilya Sutskever
  • Affiliation: Google
  • Host: Zayd
  • Status: confirmed
  • Title: Continuous vector representations for machine translation
  • Abstract: Dictionaries and phrase tables are the basis of modern statistical machine translation systems. I will present a method that can automate the process of generating and extending dictionaries and phrase tables. Our method can translate missing word and phrase entries by learning language structures using large monolingual data, and by mapping between the languages using a small bilingual dataset. It uses distributed representations of words and learns a linear mapping between vector spaces of languages. Despite its simplicity, our method is surprisingly effective: we can achieve almost 90% precision@5 for translation of words between English and Spanish. This method makes little assumption about the languages, so it can be used to extend and refine dictionaries and translation tables for any language pairs. Joint work with Tomas Mikolov and Quoc Le.
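  • Code sketch: with monolingual word vectors in hand, the linear mapping described above reduces to a least-squares fit on the small bilingual seed dictionary; random stand-in vectors are used below in place of real word2vec embeddings.
      import numpy as np

      rng = np.random.default_rng(0)
      X_en = rng.normal(size=(5000, 300))                            # English vectors for seed-dictionary words
      W_true = rng.normal(size=(300, 300))
      X_es = X_en @ W_true + 0.01 * rng.normal(size=(5000, 300))     # corresponding "Spanish" vectors

      W, *_ = np.linalg.lstsq(X_en, X_es, rcond=None)                # learn W minimizing ||X_en W - X_es||^2

      def translate(vec_en, spanish_vectors, W):
          """Map an English vector into the Spanish space and return the nearest neighbor's index."""
          target = vec_en @ W
          sims = spanish_vectors @ target / (
              np.linalg.norm(spanish_vectors, axis=1) * np.linalg.norm(target) + 1e-9)
          return int(np.argmax(sims))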

25 Feb 2014

  • Speaker: Alexander Terekhov
  • Affiliation: CNRS - Université Paris Descartes
  • Host: Bruno
  • Status: confirmed
  • Title: Constructing space: how a naive agent can learn spatial relationships by observing sensorimotor contingencies
  • Abstract:

12 March 2014

  • Speaker: Carlos Portera-Cailliau
  • Affiliation: UCLA
  • Host: Mike
  • Status: confirmed
  • Title: Circuit defects in the neocortex of Fmr1 knockout mice
  • Abstract: TBA

19 March 2014

  • Speaker: Dean Buonomano
  • Affiliation: UCLA
  • Host: Mike
  • Status: confirmed
  • Title: State-dependent Networks: Timing and Computations Based on Neural Dynamics and Short-term Plasticity
  • Abstract: The brain’s ability to seamlessly assimilate and process spatial and temporal information is critical to most behaviors, from understanding speech to playing the piano. Indeed, because the brain evolved to navigate a dynamic world, timing and temporal processing represent a fundamental computation. We have proposed that timing and the processing of temporal information emerge from the interaction between incoming stimuli and the internal state of neural networks. The internal state is defined not only by ongoing activity (the active state) but by time-varying synaptic properties, such as short-term synaptic plasticity (the hidden state). One prediction of this hypothesis is that timing is a general property of cortical circuits. We provide evidence in this direction by demonstrating that in vitro cortical networks can “learn” simple temporal patterns. Finally, previous theoretical studies have suggested that recurrent networks capable of self-perpetuating activity hold significant computational potential. However, harnessing the computational potential of these networks has been hampered by the fact that such networks are chaotic. We show that it is possible to “tame” chaos through recurrent plasticity, and create a novel and powerful general framework for how cortical circuits compute.

26 March 2014

  • Speaker: Robert G. Smith
  • Affiliation: University of Pennsylvania
  • Host: Mike S
  • Status: confirmed
  • Title: Role of Dendritic Computation in the Direction-Selective Circuit of Retina
  • Abstract: The retina utilizes a variety of signal processing mechanisms to compute direction from image motion. The computation is accomplished by a circuit that includes starburst amacrine cells (SBACs), which are GABAergic neurons presynaptic to direction-selective ganglion cells (DSGCs). SBACs are symmetric neurons with several branched dendrites radiating out from the soma. When a stimulus moving back and forth along a SBAC dendrite sequentially activates synaptic inputs, larger post-synaptic potentials (PSPs) are produced in the dendritic tips when the stimulus moves outwards from the soma. The directional difference in EPSP amplitude is further amplified near the dendritic tips by voltage-gated channels to produce directional release of GABA. Reciprocal inhibition between adjacent SBACs may also amplify directional release. Directional signals in the independent SBAC branches are preserved because each dendrite makes selective contacts only with DSGCs of the appropriate preferred direction. Directional signals are further enhanced within the dendritic arbor of the DSGC, which essentially comprises an array of distinct dendritic compartments. Each of these dendritic compartments locally sums excitatory and inhibitory inputs, amplifies them with voltage-gated channels, and generates spikes that propagate to the axon via the soma. Overall, the computation of direction in the retina is performed by several local dendritic mechanisms, both presynaptic and postsynaptic, with the result that directional responses are robust over a broad range of stimuli.

16 April 2014

  • Speaker: David Pfau
  • Affiliation: Columbia
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

22 April 2014 *Tuesday*

  • Speaker: Jochen Braun
  • Affiliation: Otto-von-Guericke University, Magdeburg
  • Host: Bruno
  • Status: confirmed
  • Title: Dynamics of visual perception and collective neural activity
  • Abstract:

29 April 2014

  • Speaker: Giuseppe Vitiello
  • Affiliation: University of Salerno
  • Host: Fritz/Walter Freeman
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

30 April 2014

  • Speaker: Masataka Watanabe
  • Affiliation: University of Tokyo / Max Planck Institute for Biological Cybernetics
  • Host: Gautam Agarwal
  • Status: confirmed
  • Title: Turing Test for Machine Consciousness and the Chaotic Spatiotemporal Fluctuation Hypothesis
  • Abstract: I propose an experimental method to test various hypotheses on consciousness. Inspired by Sperry's observation that split-brain patients possess two independent streams of consciousness, the idea is to implement candidate neural mechanisms of visual consciousness onto an artificial cortical hemisphere and test whether subjective experience is evoked in the device's visual hemifield. In contrast to modern neurosynthetic devices, I show that mimicking interhemispheric connectivity assures that authentic and fine-grained subjective experience arises only when a stream of consciousness is generated within the device. It is valid under a widely believed assumption regarding interhemispheric connectivity and neuronal stimulus-invariance. (I will briefly explain my own evidence of human V1 not responding to changes in the contents of visual awareness [1])

If consciousness is actually generated within the device, we should be able to construct a case where two objects presented in the device's visual field are distinguishable by visual experience but not by what is communicated through the brain-machine interface. As strange as it may sound, and clearly violating the laws of physics, this is likely to be happening in the intact brain, where unified subjective bilateral vision and its verbal report occur without the total interhemispheric exchange of conscious visual information.

Together, I present a hypothesis on the neural mechanism of consciousness, “The Chaotic Spatiotemporal Fluctuation Hypothesis”, which passes the proposed test for visual qualia and also explains how physics as we know it today is violated. Here, neural activity is divided into two components, the time-averaged activity and the residual temporally fluctuating activity, where the former serves as the content of consciousness (neuronal population vector) and the latter as consciousness itself. The content is “read” into consciousness in the sense that every local perturbation caused by a change in the neuronal population vector creates a spatiotemporal wave in the fluctuation component that travels throughout the system. Deterministic chaos assures that every local difference makes a difference to the whole of the dynamics, as in the butterfly effect, serving as a foundation for the holistic nature of consciousness. I will present data from simultaneous electrophysiology-fMRI recordings and human fMRI [2] that support the existence of such large-scale causal fluctuations.

Here, the chaotic fluctuation cannot be decoded to trace back the original perturbation in the neuronal population vector, because the initial states of all neurons are required with infinite precision to do so. Hence what is transmitted between the two hemispheres is not "information" in the normal sense. This illustrates the violation of physics by the metaphysical assumption, "chaotic spatiotemporal fluctuation is consciousness", where unification of bilateral vision and the solving of visual tasks (e.g. perfect symmetry detection) are achieved without exchanging the otherwise required Shannon information between the two hemispheres.

Finally, minimal and realistic versions of the proposed test for visual qualia can be conducted on laboratory animals to validate the hypothesis. These versions deal with two biological hemispheres, which we already know contain consciousness. We dissect interhemispheric connectivity and form instead an artificial one that is capable of filtering out the neural fluctuation component. A limited interhemispheric connectivity may be sufficient, which would drastically reduce the technological challenge. If the subject is capable of conducting a bilateral stimulus-matching task with the full artificial interhemispheric connectivity, but not when the fluctuation component is filtered out, this can be considered strong supporting evidence for the hypothesis.

1.Watanabe, M., Cheng, K., Ueno, K., Asamizuya, T., Tanaka, K., Logothetis, N., Attention but not awareness modulates the BOLD signal in the human V1 during binocular suppression. Science, 2011. 334(6057): p. 829-31.

2.Watanabe, M., Bartels, A., Macke, J., Logothetis, N., Temporal jitter of the BOLD signal reveals a reliable initial dip and improved spatial resolution. Curr Biol, 2013. 23(21): p. 2146-50.

11 June 2014

  • Speaker: Stuart Hameroff
  • Affiliation: University of Arizona, Tucson
  • Host: Gautam
  • Status: confirmed
  • Title: ‘Tuning the brain’ – Treating mental states through microtubule vibrations
  • Abstract: Do mental states derive entirely from brain neuronal membrane activities? Neuronal interiors are organized by microtubules (‘MTs’), protein polymers proposed to encode memory, process information and support consciousness. Using nanotechnology, Bandyopadhyay’s group at MIT has shown coherent vibrations (megahertz to 10 kilohertz) from microtubule bundles inside active neurons, vibrations (electric field potentials ~40 to 50 mV) able to influence membrane potentials. This suggests EEG rhythms are ‘beat’ frequencies of megahertz vibrations in microtubules inside neurons (Hameroff and Penrose, 2014), and that consciousness and cognition involve vibrational patterns resonating across scales in the brain, more like music than computation. MT megahertz vibrations may be a useful therapeutic target for ‘tuning’ mood and mental states. Among noninvasive transcranial brain stimulation techniques (TMS, tDCS), transcranial ultrasound (TUS) uses megahertz mechanical vibrations. Applied at the scalp, low-intensity, sub-thermal TUS safely reaches the brain. In human studies, brief (15 to 30 seconds) TUS at 0.5, 2 and 8 megahertz to frontal-temporal cortex results in 40 minutes or longer of reported mood improvement, and focused TUS enhances sensory discrimination (Legon et al, 2014). In vitro, ultrasound promotes neurite outgrowth in embryonic neurons (Raman), and stabilizes microtubules against disassembly (Gupta). (In Alzheimer’s disease, MTs disassemble and release tau.) These findings suggest ‘tuning the brain’ with TUS should be a safe, effective and inexpensive treatment for Alzheimer’s, traumatic brain injury, depression, anxiety, PTSD and other disorders.

References: Hameroff S, Penrose R (2014) Phys Life Rev http://www.sciencedirect.com/science/article/pii/S1571064513001188; Sahu et al (2013) Biosens Bioelectron 47:141–8; Sahu et al (2013) Appl Phys Lett 102:123701; Legon et al (2014) Nature Neuroscience 17: 322–329

25 June 2014

  • Speaker: Peter Loxley
  • Affiliation:
  • Host: Bruno
  • Status: confirmed
  • Title: The two-dimensional Gabor function adapted to natural image statistics: An analytical model of simple-cell responses in the early visual system
  • Abstract: TBA

2012/13 academic year

26 Sept 2012

  • Speaker: Jason Yeatman
  • Affiliation: Department of Psychology, Stanford University
  • Host: Bruno/Susana Chung
  • Status: confirmed
  • Title: The Development of White Matter and Reading Skills
  • Abstract: The development of cerebral white matter involves both myelination and pruning of axons, and the balance between these two processes may differ between individuals. Cross-sectional measures of white matter development mask the interplay between these active developmental processes and their connection to cognitive development. We followed a cohort of 39 children longitudinally for three years, and measured white matter development and reading development using diffusion tensor imaging and behavioral tests. In the left arcuate and inferior longitudinal fasciculus, children with above-average reading skills initially had low fractional anisotropy (FA) with a steady increase over the 3-year period, while children with below-average reading skills had higher initial FA that declined over time. We describe a dual-process model of white matter development that balances biological processes that have opposing effects on FA, such as axonal myelination and pruning, to explain the pattern of results.

8 Oct 2012

  • Speaker: Sophie Deneve
  • Affiliation: Laboratoire de Neurosciences cognitives, ENS-INSERM
  • Host: Bruno
  • Status: confirmed
  • Title: Balanced spiking networks can implement dynamical systems with predictive coding
  • Abstract: Neural networks can integrate sensory information and generate continuously varying outputs, even though individual neurons communicate only with spikes---all-or-none events. Here we show how this can be done efficiently if spikes communicate "prediction errors" between neurons. We focus on the implementation of linear dynamical systems and derive a spiking network model from a single optimization principle. Our model naturally accounts for two puzzling aspects of cortex. First, it provides a rationale for the tight balance and correlations between excitation and inhibition. Second, it predicts asynchronous and irregular firing as a consequence of predictive population coding, even in the limit of vanishing noise. We show that our spiking networks have error-correcting properties that make them far more accurate and robust than comparable rate models. Our approach suggests spike times do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly under-estimated.
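
A minimal toy sketch of the core idea (illustrative only, not the speaker's model or code; the weights, leak rate and signal below are invented): a small population encodes a one-dimensional signal, and a unit fires only when its spike would reduce the readout error, so spikes effectively communicate prediction errors.
<pre>
import numpy as np

dt, T = 1e-3, 2.0                         # time step (s), duration (s)
steps = int(T / dt)
N = 20                                    # number of spiking units
w = np.tile([0.1, -0.1], N // 2)          # decoding weights of the units
lam = 10.0                                # leak rate of the readout (1/s)

x = np.sin(2 * np.pi * 1.5 * np.arange(steps) * dt)   # target signal
xhat = np.zeros(steps)                    # readout reconstructed from spikes
n_spikes = 0

for t in range(1, steps):
    xhat[t] = xhat[t - 1] * (1 - lam * dt)   # readout decays between spikes
    err = x[t] - xhat[t]                     # prediction error
    # a unit fires only if its spike would reduce the squared readout error
    gain = w * err - 0.5 * w**2
    i = int(np.argmax(gain))
    if gain[i] > 0:
        xhat[t] += w[i]                      # spike of unit i updates the readout
        n_spikes += 1

print("readout RMSE:", round(float(np.sqrt(np.mean((x - xhat) ** 2))), 3))
print("total spikes:", n_spikes)
</pre>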


19 Oct 2012

  • Speaker: Gert Van Dijck
  • Affiliation: Cambridge
  • Host: Urs
  • Status: confirmed
  • Title: A solution to identifying neurones using extracellular activity in awake animals: a probabilistic machine-learning approach
  • Abstract: Electrophysiological studies over the last fifty years have been hampered by the difficulty of reliably assigning signals to identified cortical neurones. Previous studies have employed a variety of measures based on spike timing or waveform characteristics to tentatively classify other neurone types (Vos et al., Eur. J. Neurosci., 1999; Prsa et al., J. Neurosci., 2009), in some cases supported by juxtacellular labelling (Simpson et al., Prog. Brain Res., 2005; Holtzman et al., J. Physiol., 2006; Barmack and Yakhnitsa, J. Neurosci., 2008; Ruigrok et al., J. Neurosci., 2011), or intracellular staining and / or assessment of membrane properties (Chadderton et al., Nature, 2004; Jorntell and Ekerot, J. Neurosci., 2006; Rancz et al., Nature, 2007). Anaesthetised animals have been widely used as they can provide a ground-truth through neuronal labelling which is much harder to achieve in awake animals where spike-derived measures tend to be relied upon (Lansink et al., Eur. J. Neurosci., 2010). Whilst spike-shapes carry potentially useful information for classifying neuronal classes, they vary with electrode type and the geometric relationship between the electrode and the spike generation zone (Van Dijck et al., Int. J. Neural Syst., 2012). Moreover, spike-shape measurement is achieved with a variety of techniques, making it difficult to compare and standardise between laboratories.

In this study we build probabilistic models on the statistics derived from the spike trains of spontaneously active neurones in the cerebellum and the ventral midbrain. The mean spike frequency in combination with the log-interval-entropy (Bhumbra and Dyball, J. Physiol.-London, 2004) of the inter-spike-interval distribution yields the highest prediction accuracy. The cerebellum model consists of two sub-models: a molecular layer - Purkinje layer model and a granular layer - Purkinje layer model. The first model identifies with high accuracy (92.7 %) molecular layer interneurones and Purkinje cells, while the latter identifies with high accuracy (99.2 %) Golgi cells, granule cells, mossy fibers and Purkinje cells. Furthermore, it is shown that the model trained on anaesthetized rat and decerebrate cat data has broad applicability to other species and behavioural states: anaesthetized mice (80 %), awake rabbits (94.2 %) and awake rhesus monkeys (89 - 90 %).

Recently, optogenetics has made it possible to obtain a ground truth about cell classes. Using opto-genetically identified GABA-ergic and dopaminergic cells we build similar statistical models to identify these neuron types from the ventral midbrain. This illustrates that our approach should be of general use to a broad variety of laboratories.

Tuesday, 23 Oct 2012

  • Speaker: Jaimie Sleigh
  • Affiliation: University of Auckland
  • Host: Fritz/Andrew Szeri
  • Status: confirmed
  • Title: Is General Anesthesia a failure of cortical information integration?
  • Abstract: General anesthesia and natural sleep share some commonalities and some differences. Quite a lot is known about the chemical and neuronal effects of general anesthetic drugs. There are two main groups of anesthetic drugs, which can be distinguished by their effects on the EEG. The most commonly used drugs exert a strong GABAergic action, whereas a second group is characterized by minimal GABAergic effects, but significant NMDA blockade. It is less clear which of these effects are responsible, and how they result in the patient's failure to wake up when the surgeon cuts them. I will present some results from experimental brain slice work, and theoretical mean field modelling of anesthesia and sleep, that support the idea that the final common mechanism of both types of anaesthesia is fragmentation of long-distance information flow in the cortex.

31 Oct 2012 (Halloween)

  • Speaker: Jonathan Landy
  • Affiliation: UCSB
  • Host: Mike DeWeese
  • Status: Confirmed
  • Title: Mean-field replica theory: review of basics and a new approach
  • Abstract: Replica theory provides a general method for evaluating the mode of a distribution, and has varied applications to problems in statistical mechanics, signal processing, etc. Evaluation of the formal expressions arising in replica theory represents a formidable technical challenge, but one that physicists have apparently intuited correct methods for handling. In this talk, I will first provide a review of the historical development of replica theory, covering: 1) motivation, 2) the intuited "Parisi ansatz" solution, 3) continued controversies, and 4) a survey of applications (including to neural networks). Following this, I will discuss an exploratory effort of mine, aimed at developing an ansatz-free solution method. As an example, I will work out the phase diagram for a simple spin-glass model. This talk is intended primarily as a tutorial.

7 Nov 2012

  • Speaker: Tom Griffiths
  • Affiliation: UC Berkeley
  • Host:Daniel Little
  • Status: Confirmed
  • Title: Identifying human inductive biases
  • Abstract: People are remarkably good at acquiring complex knowledge from limited data, as is required in learning causal relationships, categories, or aspects of language. Successfully solving inductive problems of this kind requires having good "inductive biases" - constraints that guide inductive inference. Viewed abstractly, understanding human learning requires identifying these inductive biases and exploring their origins. I will argue that probabilistic models of cognition provide a framework that can facilitate this project, giving a transparent characterization of the inductive biases of ideal learners. I will outline how probabilistic models are traditionally used to solve this problem, and then present a new approach that uses Markov chain Monte Carlo algorithms as the basis for an experimental method that magnifies the effects of inductive biases.
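
A toy simulation of the "MCMC with people" idea mentioned at the end of the abstract (a sketch under my own assumptions, not the actual experimental protocol): a simulated participant with a hidden Gaussian prior repeatedly chooses between the current state and a proposal, and the chain's samples then recover that prior.
<pre>
import numpy as np

rng = np.random.default_rng(1)

# hidden "inductive bias" of a simulated participant: a Gaussian prior
mu_true, sigma_true = 2.0, 0.5
def prior(x):
    return np.exp(-0.5 * ((x - mu_true) / sigma_true) ** 2)

def participant_chooses_proposal(current, proposal):
    # Barker acceptance rule: pick an option with probability
    # proportional to its subjective probability
    p_prop, p_cur = prior(proposal), prior(current)
    return rng.random() < p_prop / (p_prop + p_cur)

# MCMC-with-people loop: the participant's choices drive the chain
x, samples = 0.0, []
for trial in range(20000):
    proposal = x + rng.normal(scale=0.5)
    if participant_chooses_proposal(x, proposal):
        x = proposal
    samples.append(x)

samples = np.array(samples[2000:])        # drop burn-in trials
print("estimated bias mean/sd:", samples.mean(), samples.std())
# should approach the hidden prior parameters (2.0, 0.5)
</pre>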

19 Nov 2012 (Monday) (Thanksgiving week)

  • Speaker: Bin Yu
  • Affiliation: Dept. of Statistics and EECS, UC Berkeley
  • Host: Bruno
  • Status: confirmed
  • Title: Representation of Natural Images in V4
  • Abstract: The functional organization of area V4 in the mammalian ventral visual pathway is far from being well understood. V4 is believed to play an important role in the recognition of shapes and objects and in visual attention, but the complexity of this cortical area makes it hard to analyze. In particular, no current model of V4 has shown good predictions for neuronal responses to natural images and there is no consensus on the primary role of V4.

In this talk, we present analysis of electrophysiological data on the response of V4 neurons to natural images. We propose a new computational model that achieves comparable prediction performance for V4 as for V1 neurons. Our model does not rely on any pre-defined image features but only on invariance and sparse coding principles. We interpret our model using sparse principal component analysis and discover two groups of neurons: those selective to texture versus those selective to contours. This supports the thesis that one primary role of V4 is to extract objects from background in the visual field. Moreover, our study also confirms the diversity of V4 neurons. Among those selective to contours, some of them are selective to orientation, others to acute curvature features. (This is joint work with J. Mairal, Y. Benjamini, B. Willmore, M. Oliver and J. Gallant.)

30 Nov 2012

  • Speaker: Yan Karklin
  • Affiliation: NYU
  • Host: Tyler
  • Status: confirmed
  • Title:
  • Abstract:

10 Dec 2012 (note this would be the Monday after NIPS)

  • Speaker: Marius Pachitariu
  • Affiliation: Gatsby / UCL
  • Host: Urs
  • Status: confirmed
  • Title: NIPS paper "Learning visual motion in recurrent neural networks"
  • Abstract: We present a dynamic nonlinear generative model for visual motion based on a latent representation of binary-gated Gaussian variables connected in a network. Trained on sequences of images by an STDP-like rule, the model learns to represent different movement directions in different variables. We use an online approximate inference scheme that can be mapped to the dynamics of networks of neurons. Probed with drifting grating stimuli and moving bars of light, neurons in the model show patterns of responses analogous to those of direction-selective simple cells in primary visual cortex. We show how the computations of the model are enabled by a specific pattern of learnt asymmetric recurrent connections. I will also briefly discuss our application of recurrent neural networks as statistical models of simultaneously recorded spiking neurons.

12 Dec 2012

  • Speaker: Ian Goodfellow
  • Affiliation: U Montreal
  • Host: Bruno
  • Status: confirmed
  • Title:
  • Abstract:

7 Jan 2013

  • Speaker: Stuart Hameroff
  • Affiliation: University of Arizona
  • Host: Gautam Agarwal
  • Status: confirmed
  • Title: Quantum cognition and brain microtubules
  • Abstract: Cognitive decision processes are generally described with classical Bayesian probabilities, but may be better suited to quantum mathematics. For example: 1) Psychological conflict, ambiguity and uncertainty can be viewed as (quantum) superposition of multiple possible judgments and beliefs. 2) Measurement (e.g. answering a question, reaching a decision) reduces possibilities to definite states (‘constructing reality’, ‘collapsing the wave function’). 3) Previous questions influence subsequent answers, so sequence affects outcomes (‘contextual non-commutativity’). 4) Judgments and choices may deviate from classical logic, suggesting random, or ‘non-computable’ quantum influences. Can quantum cognition operate in the brain? Do classical brain activities simulate quantum processes? Or have biomolecular quantum devices evolved? In this talk I will discuss how a finer-scale, intra-neuronal level of quantum information processing in cytoskeletal microtubules can accumulate, operate upon and integrate quantum information and memory for self-collapse to classical states which regulate axonal firings, controlling behavior.

Monday 14 Jan 2013, 1:00pm

  • Speaker: Dibyendu Mandal
  • Affiliation: Physics Dept., University of Maryland (Jarzynski group)
  • Host: Mike DeWeese
  • Status: confirmed
  • Title: An exactly solvable model of Maxwell’s demon
  • Abstract: The paradox of Maxwell’s demon has stimulated numerous thought experiments, leading to discussions about the thermodynamic implications of information processing. However, the field has lacked a tangible example or model of an autonomous, mechanical system that reproduces the actions of the demon. To address this issue, we introduce an explicit model of a device that can deliver work to lift a mass against gravity by rectifying thermal fluctuations, while writing information to a memory register. We solve for the steady-state behavior of the model and construct its nonequilibrium phase diagram. In addition to the engine-like action described above, we identify a Landauer eraser region in the phase diagram where the model uses externally supplied work to remove information from the memory register. Our model offers a simple paradigm for investigating the thermodynamics of information processing by exposing a transparent mechanism of operation.

23 Jan 2013

  • Speaker: Carlos Brody
  • Affiliation: Princeton
  • Host: Mike DeWeese
  • Status: confirmed
  • Title: Neural substrates of decision-making in the rat
  • Abstract: Gradual accumulation of evidence is thought to be a fundamental component of decision-making. Over the last 16 years, research in non-human primates has revealed neural correlates of evidence accumulation in parietal and frontal cortices, and other brain areas. However, the circuit mechanisms underlying these neural correlates remain unknown. Reasoning that a rodent model of evidence accumulation would allow a greater number of experimental subjects, and therefore experiments, as well as facilitate the use of molecular tools, we developed a rat accumulation of evidence task, the "Poisson Clicks" task. In this task, sensory evidence is delivered in pulses whose precisely-controlled timing varies widely within and across trials. The resulting data are analyzed with models of evidence accumulation that use the richly detailed information of each trial’s pulse timing to distinguish between different decision mechanisms. The method provides great statistical power, allowing us to: (1) provide compelling evidence that rats are indeed capable of gradually accumulating evidence for decision-making; (2) accurately estimate multiple parameters of the decision-making process from behavioral data; and (3) measure, for the first time, the diffusion constant of the evidence accumulator, which we show to be optimal (i.e., equal to zero). In addition, the method provides a trial-by-trial, moment-by-moment estimate of the value of the accumulator, which can then be compared in awake behaving electrophysiology experiments to trial-by-trial, moment-by-moment neural firing rate measures. Based on such a comparison, we describe data and a novel analysis approach that reveals differences between parietal and frontal cortices in the neural encoding of accumulating evidence. Finally, using semi-automated training methods to produce tens of rats trained in the Poisson Clicks accumulation of evidence task, we have also used pharmacological inactivation to ask, for the first time, whether parietal and frontal cortices are required for accumulation of evidence, and we are using optogenetic methods to rapidly and transiently inactivate brain regions so as to establish precisely when, during each decision-making trial, each brain region's activity is necessary for performance of the task.
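
A minimal sketch of a click-accumulation decision model in the spirit of the Poisson Clicks task (the click rates, trial duration, and the zero-noise accumulator are illustrative assumptions, not the fitted model from the talk):
<pre>
import numpy as np

rng = np.random.default_rng(2)

def poisson_clicks_trial(rate_right, rate_left, duration=0.5, noise_sd=0.0):
    """One Poisson Clicks trial plus a simple click-count accumulator."""
    n_r = rng.poisson(rate_right * duration)          # right-side clicks
    n_l = rng.poisson(rate_left * duration)           # left-side clicks
    a = n_r - n_l                                     # perfect accumulation
    a += rng.normal(scale=noise_sd) * np.sqrt(duration)  # diffusion noise (~0 in the talk)
    if a == 0:
        return rng.choice(["left", "right"])          # guess on ties
    return "right" if a > 0 else "left"

# psychometric curve: choice probability vs. click-rate asymmetry
total_rate = 40.0
for frac_right in (0.5, 0.6, 0.7, 0.9):
    rr, rl = total_rate * frac_right, total_rate * (1 - frac_right)
    choices = [poisson_clicks_trial(rr, rl) for _ in range(5000)]
    print(f"right-click fraction {frac_right:.1f}: "
          f"P(choose right) = {choices.count('right') / 5000:.2f}")
</pre>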

28 Jan 2013

  • Speaker: Eugene M. Izhikevich
  • Affiliation: Brain Corporation
  • Host: Fritz
  • Status: confirmed
  • Title: Spikes
  • Abstract: Most communication in the brain is via spikes. While we understand the spike-generation mechanism of individual neurons, we fail to appreciate the spike-timing code and its role in neural computations. The speaker starts with simple models of neuronal spiking and bursting, describes small neuronal circuits that learn spike-timing code via spike-timing dependent plasticity (STDP), and finishes with biologically detailed and anatomically accurate large-scale brain models.
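
For readers unfamiliar with the speaker's simple spiking model, here is a minimal sketch of the standard two-variable Izhikevich neuron with two commonly quoted parameter sets (the drive current, integration step, and choice of parameter sets are assumptions for illustration, not code from the talk):
<pre>
import numpy as np

def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.5):
    """Simulate the two-variable Izhikevich neuron; return spike times (ms)."""
    v, u = -65.0, b * -65.0
    spikes = []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                       # spike cutoff and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

# commonly quoted parameter sets (Izhikevich, 2003)
for name, params in {"regular spiking": (0.02, 0.2, -65, 8),
                     "chattering/bursting": (0.02, 0.2, -50, 2)}.items():
    st = izhikevich(*params)
    print(f"{name}: {len(st)} spikes in 500 ms, first few at {st[:5]} ms")
</pre>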

29 Jan 2013

  • Speaker: Goren Gordon
  • Affiliation: Weizmann Institute
  • Host: Fritz
  • Status: confirmed
  • Title: Hierarchical Curiosity Loops – Model, Behavior and Robotics
  • Abstract: Autonomously learning about one's own body and its interaction with the environment is a formidable challenge, yet it is ubiquitous in biology: every animal’s pup and every human infant accomplish this task in their first few months of life. Furthermore, biological agents’ curiosity actively drives them to explore and experiment in order to expedite their learning progress. To bridge the gap between biological and artificial agents, a formal mathematical theory of curiosity was developed that attempts to explain observed biological behaviors and enable curiosity emergence in robots. In the talk, I will present the hierarchical curiosity loops model, its application to rodent’s exploratory behavior and its implementation in a fully autonomously learning and behaving reaching robot.

29 Jan 2013

  • Speaker: Jenny Read
  • Affiliation: Institute of Neuroscience, Newcastle University
  • Host: Sarah
  • Status: confirmed
  • Title: Stereoscopic vision
  • Abstract: [To be written]

7 Feb 2013

  • Speaker: Valero Laparra
  • Affiliation: University of Valencia
  • Host: Bruno
  • Status: confirmed
  • Title: Empirical statistical analysis of phases in Gabor filtered natural images
  • Abstract:

20 Feb 2013

  • Speaker: Dolores Bozovic
  • Affiliation: UCLA
  • Host: Mike DeWeese
  • Status: confirmed
  • Title: Bifurcations and phase-locking dynamics in the auditory system
  • Abstract: The inner ear constitutes a remarkable biological sensor that exhibits nanometer-scale sensitivity of mechanical detection. The first step in auditory processing is performed by hair cells, which convert movement into electrical signals via opening of mechanically gated ion channels. These cells operate in a viscous medium, but can nevertheless sustain oscillations, amplify incoming signals, and even exhibit spontaneous motility, indicating the presence of an underlying active amplification system. Theoretical models have proposed that a hair cell constitutes a nonlinear system with an internal feedback mechanism that can drive it across a bifurcation and into an unstable regime. Our experiments explore the nonlinear response as well as feedback mechanisms that enable self-tuning already at the peripheral level, as measured in vitro on sensory tissue. A simple dynamical systems framework will be discussed that captures the main features of the experimentally observed behavior in the form of an Arnold tongue.
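
A small illustrative sketch of what an Arnold tongue is, using the standard sine circle map rather than a hair-cell model (the map, parameters, and locking criterion are my own choices for the example): the range of detunings over which the rotation number stays locked widens as the coupling strength grows.
<pre>
import numpy as np

def rotation_number(omega, K, n_iter=2000):
    """Average phase advance per iteration of the standard circle map."""
    theta = 0.0
    for _ in range(n_iter):
        theta = theta + omega + (K / (2 * np.pi)) * np.sin(2 * np.pi * theta)
    return theta / n_iter

# 1:1 Arnold tongue: the band of detunings omega (around 0) that stays
# phase-locked (rotation number ~ 0) widens as the coupling K increases.
for K in (0.2, 0.5, 0.8):
    locked = [w for w in np.linspace(-0.2, 0.2, 401)
              if abs(rotation_number(w, K)) < 1e-3]
    width = (max(locked) - min(locked)) if locked else 0.0
    print(f"K = {K:.1f}: locking width ≈ {width:.3f}")
</pre>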

27 March 2013

  • Speaker: Dale Purves
  • Affiliation: Duke
  • Host: Sarah
  • Status: confirmed
  • Title: How Visual Evolution Determines What We See
  • Abstract: Information about the physical world is excluded from visual stimuli by the nature of biological vision (the inverse optics problem). Nonetheless, humans and other visual animals routinely succeed in their environments. The talk will explain how the assignment of perceptual values to visual stimuli according to the frequency of occurrence of stimulus patterns resolves the inverse problem and determines the basic visual qualities we see. This interpretation of vision implies that the best (and perhaps the only) way to understand visual system circuitry is to evolve it, an idea supported by recent work.

9 April 2013

  • Speaker: Mounya Elhilali
  • Affiliation: Johns Hopkins
  • Host: Tyler
  • Status: confirmed
  • Title: Attention at the cocktail party: Neural bases and computational strategies for auditory scene analysis
  • Abstract: The perceptual organization of sounds in the environment into coherent objects is a feat constantly facing the auditory system. It manifests itself in the everyday challenge faced by humans and animals alike to parse complex acoustic information arising from multiple sound sources into separate auditory streams. While seemingly effortless, uncovering the neural mechanisms and computational principles underlying this remarkable ability remain a challenge for both the experimental and theoretical neuroscience communities. In this talk, I discuss the potential role of neuronal tuning in mammalian primary auditory cortex in mediating this process. I also examine the role of mechanisms of attention in adapting this neural representation to reflect both the sensory content and the changing behavioral context of complex acoustic scenes.

17th of April 2013

  • Speaker: Wiktor Młynarski
  • Affiliation: Max Planck Institute for Mathematics in the Sciences
  • Host: Urs
  • Status: confirmed
  • Title: Statistical Models of Binaural Sounds
  • Abstract: The auditory system exploits disparities in the sounds arriving at the left and right ear to extract information about the spatial configuration of sound sources. According to the widely acknowledged Duplex Theory, sounds of low frequency are localized based on Interaural Time Differences (ITDs) and localization of high frequency sources relies on Interaural Level Differences (ILDs). Natural sounds, however, possess a rich structure and contain multiple frequency components. This leads to the question: what are the contributions of different cues to sound position identification in the natural environment and how much information do they carry about its spatial structure? In this talk, I will present my attempts to answer the above questions using statistical, generative models of naturalistic (simulated) and fully natural binaural sounds.
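
A toy sketch of the two classical binaural cues mentioned in the abstract, estimated from a synthetic stereo signal (the delay, attenuation, and estimation choices below are invented for the example, not the speaker's models):
<pre>
import numpy as np

rng = np.random.default_rng(3)
fs = 44100                                   # sample rate (Hz)
n = fs // 2                                  # half a second of noise

# synthetic binaural rendering: the right ear receives the source
# delayed by ~0.4 ms (ITD) and attenuated by 6 dB (ILD)
source = rng.normal(size=n)
delay = int(round(0.0004 * fs))
left = source
right = 0.5 * np.roll(source, delay)

# ITD estimate: lag of the peak of the interaural cross-correlation
max_lag = int(0.001 * fs)
lags = np.arange(-max_lag, max_lag + 1)
xcorr = [float(np.dot(left, np.roll(right, -lag))) for lag in lags]
itd_est = lags[int(np.argmax(xcorr))] / fs

# ILD estimate: interaural level difference in decibels
ild_est = 20 * np.log10(np.sqrt(np.mean(left**2) / np.mean(right**2)))

print(f"estimated ITD: {itd_est*1e3:.2f} ms (true {delay/fs*1e3:.2f} ms)")
print(f"estimated ILD: {ild_est:.1f} dB (true 6.0 dB)")
</pre>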

15 May 2013

  • Speaker: Byron Yu
  • Affiliation: CMU
  • Host: Bruno/Jose (jointly sponsored with CNEP)
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

22 May 2013

  • Speaker: Bijan Pesaran
  • Affiliation: NYU
  • Host: Bruno/Jose (jointly sponsored with CNEP)
  • Status: confirmed
  • Title: TBA
  • Abstract: TBA

2011/12 academic year

15 Sep 2011 (Thursday, at noon)

  • Speaker: Kathrin Berkner
  • Affiliation: Ricoh Innovations Inc.
  • Host: Ivana Tosic
  • Status: Confirmed
  • Title: TBD
  • Abstract: TBD

21 Sep 2011

  • Speaker: Mike Kilgard
  • Affiliation: UT Dallas
  • Host: Michael Silver
  • Status: Confirmed
  • Title:
  • Abstract:

27 Sep 2011

  • Speaker: Moshe Gur
  • Affiliation: Dept. of Biomedical Engineering, Technion, Israel Institute of Technology
  • Host: Bruno/Stan
  • Status: Confirmed
  • Title: On the unity of perception: How does the brain integrate activity evoked at different cortical loci?
  • Abstract: Any physical device we know of, including computers, when comparing A to B must send the information to a point C. I have done experiments in three modalities (somatosensory, auditory, and visual) in which two different loci in the primary cortex are stimulated, and I argue that the "machine" converging hypothesis cannot explain the perceptual results. Thus we must assume a non-converging mechanism whereby the brain, at times, can compare (integrate, process) events that take place at different loci without sending the information to a common target. Once we allow for such a mechanism, many phenomena can be viewed differently. Take, for example, the question of how and where multi-sensory integration takes place: we perceive a synchronized talking face, yet detailed visual and auditory information is represented at very different brain loci.

5 Oct 2011

  • Speaker: Susanne Still
  • Affiliation: University of Hawaii at Manoa
  • Host: Jascha
  • Status: confirmed
  • Title: Predictive power, memory and dissipation in learning systems operating far from thermodynamic equilibrium
  • Abstract: Understanding the physical processes that underlie the functioning of biological computing machinery often requires describing processes that occur far from thermodynamic equilibrium. In recent years significant progress has been made in this area, most notably Jarzynski’s work relation and Crooks’ fluctuation theorem. In this talk I will explore how dissipation of energy is related to a system's information-processing inefficiency. The focus is on driven systems that are embedded in a stochastic operating environment. If we describe the system as a state machine, then we can interpret the stochastic dynamics as performing a computation that results in an (implicit) model of the stochastic driving signal. I will show that instantaneous non-predictive information, which serves as a measure of model inefficiency, provides a lower bound on the average dissipated work. This implies that learning systems with larger predictive power can operate more energy-efficiently. One might speculate that biological systems have evolved to reflect this kind of adaptation. One interesting insight here is that a requirement derived from purely physical notions is perfectly in line with the general belief that a useful model must be predictive (at fixed model complexity). Our result thereby ties together ideas from learning theory with basic non-equilibrium thermodynamics.
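
Roughly, and in notation chosen here only for illustration (s_t the system's state, x_t the driving signal, β = 1/k_BT), the bound described in the abstract is often written as <math>\beta\,\langle W_{\mathrm{diss}}[x_t \rightarrow x_{t+1}]\rangle \;\ge\; I[s_t;x_t] \;-\; I[s_t;x_{t+1}]</math>, i.e., the average work dissipated during one step of the drive is bounded below by the instantaneous non-predictive information: the memory the system keeps about the signal minus the part of that memory that predicts the signal's next value.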

19 Oct 2011

  • Speaker: Graham Cummins
  • Affiliation: WSU
  • Host: Jeff Teeters
  • Status: Confirmed
  • Title:
  • Abstract:

26 Oct 2011

  • Speaker: Shinji Nishimoto
  • Affiliation: Gallant lab, UC Berkeley
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

14 Dec 2011

  • Speaker: Austin Roorda
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: Confirmed
  • Title: How the unstable eye sees a stable and moving world
  • Abstract:

11 Jan 2012

  • Speaker: Ken Nakayama
  • Affiliation: Harvard University
  • Host: Bruno
  • Status: confirmed
  • Title: Subjective Contours
  • Abstract: The concept of the receptive field in visual science has been transformative. It fueled great discoveries of the second half of the 20th century, providing the dominant understanding of how the visual system works at its early stages. Its reign has been extended to the field of object recognition where, in the form of a linear classifier, it provides a framework to understand visual object recognition (DiCarlo and Cox, 2007).

Untamed, however, are areas of visual perception, now more or less ignored, dubbed variously as the 2.5-D sketch, mid-level vision, or surface representations. Here, neurons with their receptive fields seem unable to bridge the gap, to supply us with even a plausible speculative framework to understand amodal completion, subjective contours and other surface phenomena. Correspondingly, these areas have become a backwater: ignored, leapt over. Subjective contours, however, remain as vivid as ever, even more so. Every day, our visual system makes countless visual inferences as to the layout of the world’s surfaces and objects. What’s remarkable is that subjective contours visibly reveal these inferences.

Tuesday, 24 Jan 2012

  • Speaker: Aniruddha Das
  • Affiliation: Columbia University
  • Host: Fritz
  • Status: confirmed
  • Title:
  • Abstract:

22 Feb 2012

  • Speaker: Elad Schneidman
  • Affiliation: Department of Neurobiology, Weizmann Institute of Science
  • Host: Bruno
  • Status: confirmed
  • Title: Sparse high order interaction networks underlie learnable neural population codes
  • Abstract:

29 Feb 2012 (at noon as usual)

  • Speaker: Heather Read
  • Affiliation: U. Connecticut
  • Host: Mike DeWeese
  • Status: confirmed
  • Title: "Transformation of sparse temporal coding from auditory colliculus and cortex"
  • Abstract: TBD

1 Mar 2012 (note: Thurs)

  • Speaker: Daniel Zoran
  • Affiliation: Hebrew University, Jerusalem
  • Host: Bruno
  • Status: confirmed
  • Title: TBA
  • Abstract:

7 Mar 2012

  • Speaker: David Sivak
  • Affiliation: UCB
  • Host: Mike DeWeese
  • Status: Confirmed
  • Title: TBA
  • Abstract:

8 Mar 2012

  • Speaker: Ivan Schwab
  • Affiliation: UC Davis
  • Host: Bruno
  • Status: Confirmed
  • Title: Evolution's Witness: How Eyes Evolved
  • Abstract:

14 Mar 2012

  • Speaker: David Sussillo
  • Affiliation:
  • Host: Jascha
  • Status: confirmed
  • Title:
  • Abstract:

18 April 2012

  • Speaker: Kristofer Bouchard
  • Affiliation: UCSF
  • Host: Bruno
  • Status: confirmed
  • Title: Cortical Foundations of Human Speech Production
  • Abstract:

23 May 2012 (rescheduled from April 11)

  • Speaker: Logan Grosenick
  • Affiliation: Stanford, Deisseroth & Suppes Labs
  • Host: Jascha
  • Status: confirmed
  • Title: Acquisition, creation, & analysis of 4D light fields with applications to calcium imaging & optogenetics
  • Abstract: In Light Field Microscopy (LFM), images can be computationally refocused after they are captured [1]. This permits acquiring focal stacks and reconstructing volumes from a single camera frame. In Light Field Illumination (LFI), the same ideas can be used to create an illumination system that can deliver focused light to any position in a volume without moving optics, and these two devices (LFM/LFI) can be used together in the same system [2]. So far, these imaging and illumination systems have largely been used independently in proof-of-concept experiments [1,2]. In this talk I will discuss applications of a combined scanless volumetric imaging and volumetric illumination system applied to 4D calcium imaging and photostimulation of neurons in vivo and in vitro. The volumes resulting from these methods are large (>500,000 voxels per time point), collected at 10-100 frames per second, and highly correlated in space and time. Analyzing such data has required the development and application of machine learning methods appropriate to large, sparse, nonnegative data, as well as the estimation of neural graphical models from calcium transients. This talk will cover the reconstruction and creation of volumes in a microscope using Light Fields [1,2], and the current state-of-the-art for analyzing these large volumes in the context of calcium imaging and optogenetics.

[1] M. Levoy, R. Ng, A. Adams, M. Footer, and M. Horowitz. Light Field Microscopy. ACM Transactions on Graphics 25(3), Proceedings of SIGGRAPH 2006. [2] M. Levoy, Z. Zhang, and I. McDowall. Recording and controlling the 4D light field in a microscope. Journal of Microscopy, Volume 235, Part 2, 2009, pp. 144-162. Cover article.

BIO: Logan received bachelor's degrees with honors in Biology and Psychology from Stanford, and a master's degree in Statistics from Stanford. He is a Ph.D. candidate in the Neurosciences Program working in the labs of Karl Deisseroth and Patrick Suppes, and a trainee at the Stanford Center for Mind, Brain, and Computation. He is interested in developing and applying novel computational imaging and machine learning techniques in order to observe, control, and understand neuronal circuit dynamics.

7 June 2012 (Thursday)

  • Speaker: Mitya Chklovskii
  • Affiliation: Janelia
  • Host: Bruno
  • Status:
  • Title:
  • Abstract:

27 June 2012

  • Speaker: Jerry Feldman
  • Affiliation:
  • Host: Bruno
  • Status:
  • Title:
  • Abstract:

30 July 2012

  • Speaker: Lucas Theis
  • Affiliation: Matthias Bethge lab, Werner Reichardt Centre for Integrative Neuroscience, Tübingen
  • Host: Jascha
  • Status: Confirmed
  • Title: Hierarchical models of natural images
  • Abstract: Probabilistic models of natural images have been used to solve a variety of computer vision tasks as well as a means to better understand the computations performed by the visual system in the brain. A lot of theoretical considerations and biological observations point to the fact that natural image models should be hierarchically organized, yet to date, the best known models are still based on what is better described as shallow representations. In this talk, I will present two image models. One is based on the idea of Gaussianization for greedily constructing hierarchical generative models. I will show that when combined with independent subspace analysis, it is able to compete with the state of the art for modeling image patches. The other model combines mixtures of Gaussian scale mixtures with a directed graphical model and multiscale image representations and is able to generate highly structured images of arbitrary size. Evaluating the model's likelihood and comparing it to a large number of other image models shows that it might well be the best model for natural images yet.

(joint work with Reshad Hosseini and Matthias Bethge)

2010/11 academic year

02 Sep 2010

  • Speaker: Johannes Burge
  • Affiliation: University of Texas at Austin
  • Host: Jimmy
  • Status: Confirmed
  • Title:
  • Abstract:

8 Sep 2010

  • Speaker: Tobi Szuts
  • Affiliation: Meister Lab/ Harvard U.
  • Host: Mike DeWeese
  • Status: Confirmed
  • Title: Wireless recording of neural activity in the visual cortex of a freely moving rat.
  • Abstract: Conventional neural recording systems restrict behavioral experiments to a flat indoor environment compatible with the cable that tethers the subject to the recording instruments. To overcome these constraints, we developed a wireless multi-channel system for recording neural signals from a freely moving animal the size of a rat or larger. The device takes up to 64 voltage signals from implanted electrodes, samples each at 20 kHz, time-division multiplexes them onto a single output line, and transmits that output by radio frequency to a receiver and recording computer up to >60 m away. The system introduces less than 4 µV RMS of electrode-referred noise, comparable to wired recording systems and considerably less than biological noise. The system has greater channel count or transmission distance than existing telemetry systems. The wireless system has been used to record from the visual cortex of a rat during unconstrained conditions. Outdoor recordings show V1 activity is modulated by nest-building activity. During unguided behavior indoors, neurons responded rapidly and consistently to changes in light level, suppressive effects were prominent in response to an illuminant transition, and firing rate was strongly modulated by locomotion. Neural firing in the visual cortex is relatively sparse and moderate correlations are observed over large distances, suggesting that synchrony is driven by global processes.

29 Sep 2010

  • Speaker: Vikash Gilja
  • Affiliation: Stanford University
  • Host: Charles
  • Status: Confirmed
  • Title: Towards Clinically Viable Neural Prosthetic Systems.
  • Abstract:

20 Oct 2010

  • Speaker: Alexandre Francois
  • Affiliation: USC
  • Host:
  • Status: Confirmed
  • Title:
  • Abstract:

3 Nov 2010

  • Speaker: Eric Jonas and Vikash Mansinghka
  • Affiliation: Navia Systems
  • Host: Jascha
  • Status: Confirmed
  • Title: Natively Probabilistic Computation: Principles, Artifacts, Architectures and Applications
  • Abstract: Complex probabilistic models and Bayesian inference are becoming increasingly critical across science and industry, especially in large-scale data analysis. They are also central to our best computational accounts of human cognition, perception and action. However, all these efforts struggle with the infamous curse of dimensionality. Rich probabilistic models can seem hard to write and even harder to solve, as specifying and calculating probabilities often appears to require the manipulation of exponentially (and sometimes infinitely) large tables of numbers.

We argue that these difficulties reflect a basic mismatch between the needs of probabilistic reasoning and the deterministic, functional orientation of our current hardware, programming languages and CS theory. To mitigate these issues, we have been developing a stack of abstractions for natively probabilistic computation, based around stochastic simulators (or samplers) for distributions, rather than evaluators for deterministic functions. Ultimately, our aim is to produce a model of computation and the associated hardware and programming tools that are as suited for uncertain inference and decision-making as our current computers are for precise arithmetic.

In this talk, we will give an overview of the entire stack of abstractions supporting natively probabilistic computation, with technical detail on several hardware and software artifacts we have implemented so far. We will also touch on some new theoretical results regarding the computational complexity of probabilistic programs. Throughout, we will motivate and connect this work to some current applications in biomedical data analysis and computer vision, as well as potential hypotheses regarding the implementation of probabilistic computation in the brain.

This talk includes joint work with Keith Bonawitz, Beau Cronin, Cameron Freer, Daniel Roy and Joshua Tenenbaum.

BRIEF BIOGRAPHY

Vikash Mansinghka is a co-founder and the CTO of Navia Systems, a venture-funded startup company building natively probabilistic computing machines. He spent 10 years at MIT, eventually earning an S.B. in Mathematics, an S.B. in Computer Science, an MEng in Computer Science, and a PhD in Computation. He held graduate fellowships from the NSF and MIT's Lincoln Laboratories, and his PhD dissertation won the 2009 MIT George M. Sprowls award for best dissertation in computer science. He currently serves on DARPA's Information Science and Technology (ISAT) Study Group.

Eric Jonas is a co-founder of Navia Systems, responsible for in-house accelerated inference research and development. He spent ten years at MIT, where he earned SB degrees in electrical engineering and computer science and neurobiology, an MEng in EECS, with a neurobiology PhD expected really soon. He’s passionate about biological applications of probabilistic reasoning and hopes to use Navia’s capabilities to combine data from biological science, clinical histories, and patient outcomes into seamless models.

8 Nov 2010

  • Speaker: Patrick Ruther
  • Affiliation: Imtek, University of Freiburg
  • Host: Tim
  • Status: Confirmed
  • Title: TBD
  • Abstract: TBD

10 Nov 2010

  • Speaker: Aurel Lazar
  • Affiliation: Department of Electrical Engineering, Columbia University
  • Host: Bruno
  • Status: Confirmed
  • Title: Encoding Visual Stimuli with a Population of Hodgkin-Huxley Neurons
  • Abstract: We first present a general framework for the reconstruction of natural video scenes encoded with a population of spiking neural circuits with random thresholds. The visual encoding system consists of a bank of filters, modeling the visual receptive fields, in cascade with a population of neural circuits, modeling encoding with spikes in the early visual system. The neuron models considered include integrate-and-fire neurons and ON-OFF neuron pairs with threshold-and-fire spiking mechanisms. All thresholds are assumed to be random. We show that for both time-varying and space-time-varying stimuli neural spike encoding is akin to taking noisy measurements on the stimulus. Second, we formulate the reconstruction problem as the minimization of a suitable cost functional in a finite-dimensional vector space and provide an explicit algorithm for stimulus recovery. We also present a general solution using the theory of smoothing splines in Reproducing Kernel Hilbert Spaces. We provide examples for both synthetic video and natural scenes and show that the quality of the reconstruction degrades gracefully as the threshold variability of the neurons increases. Third, we demonstrate a number of simple operations on the original visual stimulus including translations, rotations and zooming. All these operations are natively executed in the spike domain. The processed spike trains are decoded for the faithful recovery of the stimulus and its transformations. Finally, we extend the above results to neural encoding circuits built with Hodgkin-Huxley neurons. References: Aurel A. Lazar, Eftychios A. Pnevmatikakis and Yiyin Zhou, Encoding Natural Scenes with Neural Circuits with Random Thresholds, Vision Research, 2010, Special Issue on Mathematical Models of Visual Coding, http://dx.doi.org/10.1016/j.visres.2010.03.015 Aurel A. Lazar, Population Encoding with Hodgkin-Huxley Neurons, IEEE Transactions on Information Theory, Volume 56, Number 2, pp. 821-837, February, 2010, Special Issue on Molecular Biology and Neuroscience, http://dx.doi.org/10.1109/TIT.2009.2037040
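
A minimal sketch of the time-encoding idea underlying this line of work (a plain integrate-and-fire encoder with a randomly redrawn threshold; all parameters are invented and no decoder is shown): between consecutive spikes the integral of the biased stimulus equals the threshold, so each spike time amounts to a measurement of the signal.
<pre>
import numpy as np

rng = np.random.default_rng(4)

def iaf_time_encode(u, dt, kappa=1.0, bias=1.5, thresh_mean=0.05, thresh_sd=0.005):
    """Encode a signal u(t) into spike times with an integrate-and-fire
    neuron whose threshold is redrawn at random after every spike."""
    spikes, integral = [], 0.0
    thresh = rng.normal(thresh_mean, thresh_sd)
    for i, ui in enumerate(u):
        integral += (bias + ui) * dt / kappa       # leakless integration
        if integral >= thresh:
            spikes.append(i * dt)
            integral = 0.0
            thresh = rng.normal(thresh_mean, thresh_sd)
    return np.array(spikes)

dt = 1e-4
t = np.arange(0, 1.0, dt)
u = 0.8 * np.sin(2 * np.pi * 3 * t)                # a slow test stimulus
spike_times = iaf_time_encode(u, dt)
print(f"{spike_times.size} spikes; mean interspike interval "
      f"{np.mean(np.diff(spike_times))*1e3:.1f} ms")
</pre>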

11 Nov 2010 (UCB holiday)

  • Speaker: Martha Nari Havenith
  • Affiliation: UCL
  • Host: Fritz
  • Status: Confirmed
  • Title: Finding spike timing in the visual cortex - Oscillations as the internal clock of vision?
  • Abstract:

19 Nov 2010 (note: on Friday because of SFN)

  • Speaker: Dan Butts
  • Affiliation: UMD
  • Host: Tim
  • Status: Confirmed
  • Title: Common roles of inhibition in visual and auditory processing.
  • Abstract: The role of inhibition in sensory processing is often obscured in extracellular recordings, because the absence of a neuronal response associated with inhibition might also be explained by a simple lack of excitation. However, increasingly, evidence from intracellular recordings demonstrates important roles of inhibition in shaping the stimulus selectivity of sensory neurons in both the visual and auditory systems. We have developed a nonlinear modeling approach that can identify putative excitatory and inhibitory inputs to a neuron using standard extracellular recordings, and have applied these techniques to understand the role of inhibition in shaping sensory processing in visual and auditory areas. In pre-cortical visual areas (retina and LGN), we find that inhibition likely plays a role in generating temporally precise responses, and mediates adaptation to changing contrast. In an auditory pre-cortical area (inferior colliculus) identified inhibition has nearly identical appearance and functions in temporal processing and adaptation. Thus, we predict common roles of inhibition in these sensory areas, and more generally demonstrate general methods for characterizing the nonlinear computations that comprise sensory processing.

24 Nov 2010

  • Speaker: Eizaburo Doi
  • Affiliation: NYU
  • Host: Jimmy
  • Status: Confirmed
  • Title:
  • Abstract:


29 Nov 2010 - informal talk

  • Speaker: Eero Lehtonen
  • Affiliation: UTU Finland
  • Host: Bruno
  • Status: Confirmed
  • Title: Memristors
  • Abstract:

1 Dec 2010

  • Speaker: Gadi Geiger
  • Affiliation: MIT
  • Host: Fritz
  • Status: Confirmed
  • Title: Visual and Auditory Perceptual Modes that Characterize Dyslexics
  • Abstract: I will describe how dyslexics’ visual and auditory perception is wider and more diffuse than that of typical readers. This suggests wider neural tuning in dyslexics. In addition, I will describe how this processing relates to difficulties in reading. To strengthen the argument, and more importantly to help dyslexics, I will describe a regimen of practice that results in improved reading in dyslexics while narrowing their perception.


13 Dec 2010

  • Speaker: Jorg Lueke
  • Affiliation: FIAS
  • Host: Bruno
  • Status: Confirmed
  • Title: Linear and Non-linear Approaches to Component Extraction and Their Applications to Visual Data
  • Abstract: In the nervous system of humans and animals, sensory data are represented as combinations of elementary data components. While for data such as sound waveforms the elementary components combine linearly, other data can better be modeled by non-linear forms of component superpositions. I motivate and discuss two models with binary latent variables: one using standard linear superpositions of basis functions and one using non-linear superpositions. Crucial for the applicability of both models are efficient learning procedures. I briefly introduce a novel training scheme (ET) and show how it can be applied to probabilistic generative models. For linear and non-linear models the scheme efficiently infers the basis functions as well as the level of sparseness and data noise. In large-scale applications to image patches, we show results on the statistics of inferred model parameters. Differences between the linear and non-linear models are discussed, and both models are compared to results of standard approaches in the literature and to experimental findings. Finally, I briefly discuss learning in a recent model that takes explicit component occlusions into account.
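
A toy illustration of the contrast between linear and non-linear superposition of binary-activated components (the bar-shaped components and the max rule below are illustrative assumptions, not the training scheme from the talk):
<pre>
import numpy as np

rng = np.random.default_rng(6)
D, H = 25, 10                           # pixels (5x5 patches), components

# toy dictionary: horizontal and vertical "bars" as elementary components
W = np.zeros((H, D))
for h in range(5):
    bar_h = np.zeros((5, 5)); bar_h[h, :] = 1.0
    bar_v = np.zeros((5, 5)); bar_v[:, h] = 1.0
    W[h] = bar_h.ravel()
    W[h + 5] = bar_v.ravel()

# binary latent variables: each component is either present or absent
s = rng.binomial(1, 0.2, size=H)

y_linear = s @ W                        # linear superposition: components add
y_max = np.max(W[s == 1], axis=0) if s.any() else np.zeros(D)
                                        # non-linear (occlusion-like) superposition:
                                        # the strongest component wins per pixel

print("active components:", np.flatnonzero(s))
print("linear patch:\n", y_linear.reshape(5, 5))
print("max-superposition patch:\n", y_max.reshape(5, 5))
</pre>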

15 Dec 2010

  • Speaker: Claudia Clopath
  • Affiliation: Université Paris Descartes
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:


18 Jan 2011

  • Speaker: Siwei Lyu
  • Affiliation: Computer Science Department, University at Albany, SUNY
  • Host: Bruno
  • Status: confirmed
  • Title: Divisive Normalization as an Efficient Coding Transform: Justification and Evaluation
  • Abstract:

19 Jan 2011

  • Speaker: David Field (informal talk)
  • Affiliation:
  • Host: Bruno
  • Status: Tentative
  • Title:
  • Abstract:

25 Jan 2011

  • Speaker: Ruth Rosenholtz
  • Affiliation: Dept. of Brain & Cognitive Sciences, Computer Science and AI Lab, MIT
  • Host: Bruno
  • Status: Confirmed
  • Title: What your visual system sees where you are not looking
  • Abstract:

26 Jan 2011

  • Speaker: Ernst Niebur
  • Affiliation: Johns Hopkins U
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

16 March 2011

  • Speaker: Vladimir Itskov
  • Affiliation: University of Nebraska-Lincoln
  • Host: Chris
  • Status: Confirmed
  • Title:
  • Abstract:

23 March 2011

  • Speaker: Bruce Cumming
  • Affiliation: National Institutes of Health
  • Host: Ivana
  • Status: Confirmed
  • Title: TBD
  • Abstract:

27 April 2011

  • Speaker: Lubomir Bourdev
  • Affiliation: Computer Science, UC Berkeley
  • Host:Bruno
  • Status: Confirmed
  • Title: "Poselets and Their Applications in High-Level Computer Vision Problems"
  • Abstract:

12 May 2011 (note: Thursday)

  • Speaker: Jack Culpepper
  • Affiliation: Redwood Center/EECS
  • Host: Bruno
  • Status: Confirmed
  • Title: TBA
  • Abstract:

26 May 2011

  • Speaker: Ian Stevenson
  • Affiliation: Northwestern University
  • Host: Bruno
  • Status: Confirmed
  • Title: Explaining tuning curves by estimating interactions between neurons
  • Abstract: One of the central tenets of systems neuroscience is that tuning curves are a byproduct of the interactions between neurons. Using multi-electrode recordings and recently developed inference techniques we can begin to examine this idea in detail and study how well we can explain the functional properties of neurons using the activity of other simultaneously recorded neurons. Here we examine datasets from 6 different brain areas recorded during typical sensorimotor tasks each with ~100 simultaneously recorded neurons. Using these datasets we measured the extent to which interactions between neurons can explain the tuning properties of individual neurons. We found that, in almost all areas, modeling interactions between 30-50 neurons allows more accurate spike prediction than tuning curves. This suggests that tuning can, in some sense, be explained by interactions between neurons in a variety of brain areas, even when recordings consist of relatively small numbers of neurons.
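
A minimal sketch of the kind of comparison described in the abstract, on simulated data (the logistic/GLM form, the number of neurons, and the coupling weights are assumptions for illustration, not the paper's analysis): adding coupling terms from other recorded neurons improves spike prediction over a tuning-curve-only model.
<pre>
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
T, n_other = 20000, 30                      # time bins, other recorded neurons

# simulated data: the target neuron is driven by a tuned stimulus term
# plus coupling to a handful of the other recorded neurons
stim = rng.uniform(-np.pi, np.pi, T)        # e.g. movement direction per bin
others = rng.binomial(1, 0.1, size=(T, n_other))
coupling = np.zeros(n_other)
coupling[:5] = 1.2                          # only a few neurons truly couple
logit = -2.5 + 1.0 * np.cos(stim) + others @ coupling
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

tuning_X = np.column_stack([np.cos(stim), np.sin(stim)])
full_X = np.column_stack([tuning_X, others])

def mean_loglik(X, y):
    p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    return np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

print("tuning-curve model  log-lik/bin:", round(mean_loglik(tuning_X, y), 4))
print("tuning + coupling   log-lik/bin:", round(mean_loglik(full_X, y), 4))
</pre>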

1 June 2011

  • Speaker: Michael Oliver
  • Affiliation: Gallant lab
  • Host: Bruno
  • Status: Tentative
  • Title:
  • Abstract:

8 June 2011

  • Speaker: Alyson Fletcher
  • Affiliation: UC Berkeley
  • Host: Bruno
  • Status: tentative
  • Title: Generalized Approximate Message Passing for Neural Receptive Field Estimation and Connectivity
  • Abstract: Fundamental to understanding sensory encoding and connectivity of neurons are effective tools for developing and validating complex mathematical models from experimental data. In this talk, I present a graphical models approach to the problems of neural connectivity reconstruction under multi-neuron excitation and to receptive field estimation of sensory neurons in response to stimuli. I describe a new class of Generalized Approximate Message Passing (GAMP) algorithms for a general class of inference problems on graphical models, based on Gaussian approximations of loopy belief propagation. The GAMP framework is extremely general, provides a systematic procedure for incorporating a rich class of nonlinearities, and is computationally tractable with large amounts of data. In addition, for both the connectivity reconstruction and parameter estimation problems, I show that GAMP-based estimation can naturally incorporate sparsity constraints in the model that arise from the fact that only a small fraction of the potential inputs have any influence on the output of a particular neuron. A simulation of reconstruction of cortical neural mapping under multi-neuron excitation shows that GAMP offers improvement over previous compressed sensing methods. The GAMP method is also validated on estimation of linear-nonlinear-Poisson (LNP) cascade models for neural responses of salamander retinal ganglion cells.

2009/10 academic year

2 September 2009

  • Speaker: Keith Godfrey
  • Affiliation: University of Cambridge
  • Host: Tim
  • Status: Confirmed
  • Title: TBA
  • Abstract:

7 October 2009

  • Speaker: Anita Schmid
  • Affiliation: Cornell University
  • Host: Kilian
  • Status: Confirmed
  • Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time
  • Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as of yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.

28 October 2009

  • Speaker: Andrea Benucci
  • Affiliation: Institute of Ophthalmology, University College London
  • Host: Bruno
  • Status: Confirmed
  • Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex
  • Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, only recently have recording techniques become available to study such activity across large ensembles of neurons simultaneously, with millisecond temporal precision and a spatial resolution of tens of microns. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how the responses to individual stimuli summate over time. We found that the summation rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear summation rules fail when the visual drive is removed, suggesting that the visual cortex can readily switch between dynamical regimes in which either feed-forward or intra-cortical inputs determine the response properties of the network.
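
As a concrete picture of the linear-summation test mentioned in the abstract, the sketch below (synthetic data only, not the actual VSD or Utah-array recordings) predicts the response to a stimulus sequence as the sum of time-shifted single-stimulus responses and measures how much variance that linear prediction explains.

 import numpy as np
 
 rng = np.random.default_rng(1)
 
 # Synthetic stand-in for a population response: one stimulus every 25 frames,
 # each evoking the same decaying response kernel.
 T, kernel_len = 200, 20
 onsets = np.arange(0, T - kernel_len, 25)
 kernel = np.exp(-np.arange(kernel_len) / 5.0)
 
 linear_prediction = np.zeros(T)
 for t0 in onsets:
     linear_prediction[t0:t0 + kernel_len] += kernel
 
 measured = linear_prediction + 0.05 * rng.normal(size=T)   # pretend measurement
 
 # Fraction of variance in the "measured" response explained by linear summation
 resid = measured - linear_prediction
 r2 = 1.0 - resid.var() / measured.var()
 print(f"variance explained by linear summation: {r2:.2f}")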

12 November 2009 (Thursday)

  • Speaker: Song-Chun Zhu
  • Affiliation: UCLA
  • Host: Jimmy
  • Status: Confirmed
  • Title:
  • Abstract:

18 November 2009

  • Speaker: Dan Graham
  • Affiliation: Dept. of Mathematics, Dartmouth College
  • Host: Bruno
  • Status: Confirmed
  • Title: The Packet-Switching Brain: A Hypothesis
  • Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large-scale shift from the now-dominant “computer metaphor” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.
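
The sparseness measurements proposed as a test of the hypothesis are commonly quantified with measures like the one sketched below in Python (the Treves-Rolls sparseness, used here only as an example; the talk may rely on a different measure).

 import numpy as np
 
 def treves_rolls_sparseness(rates):
     """Treves-Rolls sparseness: near 1/N when only one of N units fires,
     and 1.0 when all units fire at the same rate."""
     rates = np.asarray(rates, dtype=float)
     return rates.mean() ** 2 / (rates ** 2).mean()
 
 print(treves_rolls_sparseness([0, 0, 0, 10]))   # 0.25: highly sparse activity
 print(treves_rolls_sparseness([5, 5, 5, 5]))    # 1.0: dense, uniform activity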

16 December 2009

  • Speaker: Pietro Berkes
  • Affiliation: Volen Center for Complex Systems, Brandeis University
  • Host: Bruno
  • Status: Confirmed
  • Title: Generative models of vision: from sparse coding toward structured models
  • Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cell receptive fields can be accounted for solely by the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and under anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between the identity and the attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to a more comprehensive account of processing in the visual cortex.
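
For readers unfamiliar with the sparse coding model discussed in the first part, the Python sketch below gives a minimal toy version (random dictionary, synthetic patch, ISTA inference); it is meant only to illustrate the "sparse causes" idea, not the speaker's model or the structured extension described in the second part.

 import numpy as np
 
 rng = np.random.default_rng(0)
 n_pixels, n_basis = 64, 128
 
 # Generative model: a "patch" x is a sparse combination of basis functions.
 D = rng.normal(size=(n_pixels, n_basis))
 D /= np.linalg.norm(D, axis=0)                 # unit-norm basis functions
 a_true = np.zeros(n_basis)
 a_true[rng.choice(n_basis, 4, replace=False)] = 1.0
 x = D @ a_true + 0.01 * rng.normal(size=n_pixels)
 
 # Inference: find sparse coefficients a with ISTA (gradient step on the
 # reconstruction error followed by soft-thresholding).
 lam = 0.05
 step = 0.5 / np.linalg.norm(D, 2) ** 2
 a = np.zeros(n_basis)
 for _ in range(500):
     a = a + step * (D.T @ (x - D @ a))
     a = np.sign(a) * np.maximum(np.abs(a) - step * lam, 0.0)
 
 print("true active coefficients:     ", np.flatnonzero(a_true).tolist())
 print("recovered active coefficients:", np.flatnonzero(np.abs(a) > 0.1).tolist())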

6 January 2010

  • Speaker: Susanne Still
  • Affiliation: U of Hawaii
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

20 January 2010

  • Speaker: Tom Dean
  • Affiliation: Google
  • Host: Bruno
  • Status: Confirmed
  • Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors
  • Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring the special training of programmers or the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents more detailed experimental results on one particular problem involving video-content analysis.
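
A minimal sketch of the "run anywhere, use the GPU when it is there" strategy described in the abstract, written in Python with CuPy standing in for a GPU-backed array library (the talk itself predates CuPy, so this is only an illustration, and the frame-scoring task is made up).

 import numpy as np
 
 # Pick a backend at run time: the same array code runs on the GPU via CuPy
 # when it is installed, and falls back to NumPy on the CPU otherwise.
 try:
     import cupy as xp
     backend = "GPU (CuPy)"
 except ImportError:
     xp = np
     backend = "CPU (NumPy)"
 
 def score_frames(frames, template):
     """Dot product of each flattened video frame with a template."""
     return frames.reshape(len(frames), -1) @ template.ravel()
 
 frames = xp.asarray(np.random.rand(256, 64, 64).astype(np.float32))
 template = xp.asarray(np.random.rand(64, 64).astype(np.float32))
 scores = score_frames(frames, template)
 
 print(backend, "- best matching frame:", int(xp.argmax(scores)))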

27 January 2010

  • Speaker: David Philiponna
  • Affiliation: Paris
  • Host: Bruno
  • Status: Confirmed
  • Title:
  • Abstract:

24 February 2010

  • Speaker: Gordon Pipa
  • Affiliation: U Osnabrueck/MPI Frankfurt
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

3 March 2010

  • Speaker: Gaute Einevoll
  • Affiliation: UMB, Norway
  • Host: Amir
  • Status: Confirmed
  • Title: TBA
  • Abstract: TBA


4 March 2010

  • Speaker: Harvey Swadlow
  • Affiliation:
  • Host: Fritz
  • Status: Confirmed
  • Title:
  • Abstract:

8 April 2010

  • Speaker: Alan Yuille
  • Affiliation: UCLA
  • Host: Amir
  • Status: Confirmed (for 1pm)
  • Title:
  • Abstract:

28 April 2010

  • Speaker: Dharmendra Modha
  • Affiliation: IBM
  • Host: Fritz
  • Status: Confirmed, later cancelled
  • Title:
  • Abstract:

5 May 2010

  • Speaker: David Zipser
  • Affiliation: UCB
  • Host: Daniel Little
  • Status: Tentative
  • Title: Brytes 2:
  • Abstract: Brytes are little brains that can be assembled into larger, smarter brains. In my first talk I presented a biologically plausible, computationally tractable model of brytes and described how they can be used as subunits to build brains with interesting behaviors. In this talk I will first show how large numbers of brytes can cooperate to perform complicated actions such as arm and hand manipulations in the presence of obstacles. Then I will describe a strategy for a higher level of control that informs each bryte what role it should play in accomplishing the current task. These results could have considerable significance for understanding the brain and may be applicable to robotics and BMI.

12 May 2010

  • Speaker: Frank Werblin (Redwood group meeting - internal only)
  • Affiliation: Berkeley
  • Host: Bruno
  • Status: Tentative
  • Title:
  • Abstract:

19 May 2010

  • Speaker: Anna Judith
  • Affiliation: UCB
  • Host: Daniel Little (Redwood Lab Meeting - internal only)
  • Status: Confirmed
  • Title:
  • Abstract: