Seminars
Instructions
- Check the internal calendar (here) for a free seminar slot. If a seminar is not already booked at the regular time of noon on Wednesday, you can reserve it.
- Fill in the speaker information in the 'tentative/confirmed speaker' section. Leave the status flag set to 'tentative'. Please include your name and email as host in case somebody wants to contact you.
- Invite a speaker.
- Change the status flag to 'confirmed'. Notify Jimmy [1] that we have a confirmed speaker so that he can update the public web page. Please include a title and abstract.
- Jimmy will also send out an announcement. If the speaker needs accommodations, contact Kati [2].
- After the seminar, have the speaker submit travel expenses to Jadine Palapaz [3] at RES for reimbursement. You can get a travel reimbursement form online and give it to the speaker; if they have all their receipts on hand, they can submit everything before they leave, otherwise they can mail it in afterwards.
Tentative / Confirmed Speakers
17 March 2010
- Speaker:
- Affiliation:
- Host:
- Status:
- Title:
- Abstract:
24 March 2010 (note: Spring break)
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
31 March 2010 (Maharbiz @ CIS seminar)
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
7 April 2010
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
14 April 2010 (Bob Full @ CIS seminar)
- Speaker: Vikash Mansinghka
- Affiliation:
- Host: Jascha
- Status: Tentative
- Title: Natively probabilistic computing
- Abstract:
21 April 2010
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
28 April 2010
- Speaker: Dharmendra Modha
- Affiliation: IBM
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
5 May 2010
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
12 May 2010
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
19 May 2010
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
26 May 2010
- Speaker:
- Affiliation:
- Host:
- Status: Tentative
- Title:
- Abstract:
Previous Seminars
2009/10 academic year
2 September 2009
- Speaker: Keith Godfrey
- Affiliation: University of Cambridge
- Host: Tim
- Status: Confirmed
- Title: TBA
- Abstract:
7 October 2009
- Speaker: Anita Schmid
- Affiliation: Cornell University
- Host: Kilian
- Status: Confirmed
- Title: Subpopulations of neurons in visual area V2 perform differentiation and integration operations in space and time
- Abstract: The interconnected areas of the visual system work together to find object boundaries in visual scenes. Primary visual cortex (V1) mainly extracts oriented luminance boundaries, while secondary visual cortex (V2) also detects boundaries defined by differences in texture. How the outputs of V1 neurons are combined to allow for the extraction of these more complex boundaries in V2 is as yet unclear. To address this question, we probed the processing of orientation signals in single neurons in V1 and V2, focusing on the response dynamics of neurons to patches of oriented gratings and to combinations of gratings in neighboring patches and sequential time frames. We found two kinds of response dynamics in V2, both of which are different from those of V1 neurons. While V1 neurons in general prefer one orientation, one subpopulation of V2 neurons (“transient”) shows a temporally dynamic preference, resulting in a preference for changes in orientation. The second subpopulation of V2 neurons (“sustained”) responds similarly to V1 neurons, but with a delay. The dynamics of nonlinear responses to combinations of gratings reinforce these distinctions: the dynamics enhance the preference of V1 neurons for continuous orientations, and enhance the preference of V2 transient neurons for discontinuous ones. We propose that transient neurons in V2 perform a differentiation operation on the V1 input, both spatially and temporally, while the sustained neurons perform an integration operation. We show that a simple feedforward network with delayed inhibition can account for the temporal but not for the spatial differentiation operation.
28 October 2009
- Speaker: Andrea Benucci
- Affiliation: Institute of Ophthalmology, University College London
- Host: Bruno
- Status: Confirmed
- Title: Stimulus dependence of the functional connectivity between neurons in primary visual cortex
- Abstract: It is known that visual stimuli are encoded by the concerted activity of large populations of neurons in visual cortical areas. However, it is only recently that recording techniques have been made available to study such activations from large ensembles of neurons simultaneously, with millisecond temporal precision and a spatial resolution of tens of microns. I will present data from voltage-sensitive dye (VSD) imaging and multi-electrode recordings (“Utah” probes) from the primary visual cortex of the cat (V1). I will discuss the relationship between two fundamental cortical maps of the visual system: the map of retinotopy and the map of orientation. Using spatially localized and full-field oriented stimuli, we studied the functional interdependency of these maps. I will describe traveling and standing waves of cortical activity and their key role as a dynamical substrate for the spatio-temporal coding of visual information. I will further discuss the properties of the spatio-temporal code in the context of continuous visual stimulation. While recording population responses to a sequence of oriented stimuli, we asked how responses to individual stimuli summate over time. We found that these summation rules are mostly linear, supporting the idea that spatial and temporal codes in area V1 operate largely independently. However, these linear rules of summation fail when the visual drive is removed, suggesting that the visual cortex can readily switch between dynamical regimes in which either feed-forward or intra-cortical inputs determine the response properties of the network.
12 November 2009 (Thursday)
- Speaker: Song-Chun Zhu
- Affiliation: UCLA
- Host: Jimmy
- Status: Confirmed
- Title:
- Abstract:
18 November 2009
- Speaker: Dan Graham
- Affiliation: Dept. of Mathematics, Dartmouth College
- Host: Bruno
- Status: Confirmed
- Title: The Packet-Switching Brain: A Hypothesis
- Abstract: Despite great advances in our understanding of neural responses to natural stimuli, the basic structure of the neural code remains elusive. In this talk, I will describe a novel hypothesis regarding the fundamental structure of neural coding in mammals. In particular, I propose that an internet-like routing architecture (specifically packet-switching) underlies neocortical processing, and I propose means of testing this hypothesis via neural response sparseness measurements. I will synthesize a host of suggestive evidence that supports this notion and will, more generally, argue in favor of a large-scale shift from the now-dominant “computer metaphor” to the “internet metaphor.” This shift is intended to spur new thinking with regard to neural coding, and its main contribution is to privilege communication over computation as the prime goal of neural systems.
16 December 2009
- Speaker: Pietro Berkes
- Affiliation: Volen Center for Complex Systems, Brandeis University
- Host: Bruno
- Status: Confirmed
- Title: Generative models of vision: from sparse coding toward structured models
- Abstract: From a computational perspective, one can think of visual perception as the problem of analyzing the light patterns detected by the retina to recover their external causes. This process requires combining the incoming sensory evidence with internal prior knowledge about general properties of visual elements and the way they interact, and can be formalized in a class of models known as causal generative models. In the first part of the talk, I will discuss the first and most established generative model, namely the sparse coding model. Sparse coding has been largely successful in showing how the main characteristics of simple cells' receptive fields can be accounted for based solely on the statistics of natural images. I will briefly review the evidence supporting this model, and contrast it with recent data from the primary visual cortex of ferrets and rats showing that the sparseness of neural activity over development and anesthesia seems to follow trends opposite to those predicted by sparse coding. In the second part, I will argue that the generative point of view calls for models of natural images that take into account more of the structure of the visual environment. I will present a model that takes a first step in this direction by incorporating the fundamental distinction between identity and attributes of visual elements. After learning, the model mirrors several aspects of the organization of V1, and results in a novel interpretation of complex and simple cells as parallel populations of cells, coding for different aspects of the visual input. Further steps toward more structured generative models might thus lead to the development of a more comprehensive account of visual processing in the visual cortex.
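Background note for this abstract: the sparse coding model mentioned above reconstructs (whitened) image patches X as sparse linear combinations of dictionary elements D, roughly by minimizing ||X - DA||^2 + lambda*sum|A|. The sketch below is only a toy illustration of that standard formulation, under assumed details: the function name, parameter values, and the alternating ISTA/gradient-descent scheme are choices made here for illustration, not the speaker's implementation.
 import numpy as np
 
 def sparse_code(X, n_basis=64, n_iter=50, lam=0.1, lr=0.01):
     """Toy sparse coding sketch: minimize ||X - D A||_F^2 + lam * sum|A|.
     X: (n_pixels, n_patches) array of whitened image patches.
     Returns dictionary D (n_pixels, n_basis) and sparse codes A (n_basis, n_patches)."""
     n_pixels, n_patches = X.shape
     rng = np.random.default_rng(0)
     D = rng.standard_normal((n_pixels, n_basis))
     D /= np.linalg.norm(D, axis=0, keepdims=True)
     A = np.zeros((n_basis, n_patches))
     for _ in range(n_iter):
         # Inference: a few ISTA (proximal gradient) steps on the codes A.
         step = 1.0 / np.linalg.norm(D.T @ D, 2)
         for _ in range(10):
             A -= step * (D.T @ (D @ A - X))
             A = np.sign(A) * np.maximum(np.abs(A) - lam * step, 0.0)  # soft threshold
         # Learning: one gradient step on the dictionary, then renormalize its columns.
         D -= lr * ((D @ A - X) @ A.T) / n_patches
         D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
     return D, A
Run on whitened natural-image patches, the learned dictionary columns typically become localized, oriented, bandpass functions resembling simple-cell receptive fields, which is the result the abstract refers to.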
6 January 2010
- Speaker: Susanne Still
- Affiliation: U of Hawaii
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
20 January 2010
- Speaker: Tom Dean
- Affiliation: Google
- Host: Bruno
- Status: Confirmed
- Title: Accelerating Computer Vision and Machine Learning Algorithms with Graphics Processors
- Abstract: Graphics processors (GPUs) and massively-multi-core architectures are becoming more powerful, less costly and more energy efficient, and the related programming language issues are beginning to sort themselves out. That said, most researchers don’t want to be writing code that depends on any particular architecture or parallel programming model. Linear algebra, Fourier analysis and image processing have standard libraries that are being ported to exploit SIMD parallelism in GPUs. We can depend on the massively-multi-core machines du jour to support these libraries and on the high-performance-computing (HPC) community to do the porting for us or with us. These libraries can significantly accelerate important applications in image processing, data analysis and information retrieval. We can develop APIs and the necessary run-time support so that code relying on these libraries will run on any machine in a cluster of computers but exploit GPUs whenever available. This strategy allows us to move toward hybrid computing models that enable a wider range of opportunities for parallelism without requiring special training for programmers or incurring the disadvantages of developing code that depends on specialized hardware or programming models. This talk summarizes the state of the art in massively-multi-core architectures, presents experimental results that demonstrate the potential for significant performance gains in the two general areas of image processing and machine learning, provides examples of the proposed programming interface, and presents more detailed experimental results on one particular problem involving video-content analysis.
27 January 2010
- Speaker: David Philiponna
- Affiliation: Paris
- Host: Bruno
- Status: Confirmed
- Title:
- Abstract:
24 February 2010
- Speaker: Gordon Pipa
- Affiliation: U Osnabrueck/MPI Frankfurt
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract:
3 March 2010
- Speaker: Gaute Einevoll
- Affiliation: UMB, Norway
- Host: Amir
- Status: Confirmed
- Title: TBA
- Abstract: TBA
4 March 2010
- Speaker: Harvey Swadlow
- Affiliation:
- Host: Fritz
- Status: Confirmed
- Title:
- Abstract: