Upamanyu Madhow
Department of Electrical and Computer Engineering, UCSB
Computational experiments with two neuro-inspired abstractions: Hebbian learning and spike timing information
Wednesday, June 14, 2017 at 12:00pm
560 Evans
In this talk, we discuss early work on two different neuro-inspired computational abstractions. In the first, we investigate flavors of competitive Hebbian learning for bottom-up training of deep convolutional neural networks. The resulting sparse neural codes are competitive with layered autoencoders on standard image datasets. Unlike standard training, which optimizes a cost function, our approach directly recruits and prunes neurons to promote desirable properties such as sparsity and a distributed representation of information. In the second, we consider a minimalistic model for exploring the information carried by spike timing, using a reservoir that encodes input patterns into sparse neural codes by exploiting variations in axonal delays. Our model translates the polychronous groups identified by Izhikevich into codewords on which standard vector operations can be performed. For an appropriate choice of parameters, the distance properties of the code are similar to those of good random codes, which indicates that the approach may provide a robust memory for timing patterns.
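The abstract does not give implementation details, but a minimal sketch may help fix ideas for the first abstraction. Below is one common flavor of competitive Hebbian learning, a winner-take-all rule in NumPy; the unit count, input dimension, learning rate, and normalization scheme are illustrative assumptions, not the speaker's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, dim = 16, 64      # assumed sizes: 16 units over 8x8 image patches
lr = 0.05                  # assumed learning rate
W = rng.normal(size=(n_units, dim))
W /= np.linalg.norm(W, axis=1, keepdims=True)

def competitive_hebbian_step(x, W, lr=0.05):
    """One winner-take-all Hebbian update: the best-matching unit moves
    toward the input; all other units are left unchanged."""
    j = int(np.argmax(W @ x))          # competition: a single winner
    W[j] += lr * (x - W[j])            # Hebbian move toward the input
    W[j] /= np.linalg.norm(W[j])       # renormalize to keep competition fair
    return j                           # winner index = one-hot sparse code

# Train on synthetic unit-norm "patches"; each input is coded by its winner.
for _ in range(1000):
    x = rng.normal(size=dim)
    competitive_hebbian_step(x / np.linalg.norm(x), W, lr)
```

In this caricature each input's code is maximally sparse (one active unit); softer variants let the top-k units fire and update, giving codes that are sparse yet distributed, in the spirit of the properties the abstract mentions.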
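The second abstraction is likewise only sketched in the abstract. One way to make the delay-based encoding concrete, with every parameter (delay range, coincidence window, firing threshold) invented for illustration: each reservoir neuron sees the input spikes after its own axonal delays and fires when enough delayed spikes arrive nearly coincidentally, so the binary vector of which neurons fired serves as the codeword for a timing pattern.

```python
import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_neurons = 8, 128                            # assumed sizes
delays = rng.uniform(0.0, 20.0, (n_neurons, n_inputs))  # axonal delays (ms)

def encode(spike_times, delays, window=2.0, thresh=4):
    """Map a timing pattern (one spike per input line) to a binary codeword:
    bit n is 1 iff at least `thresh` delayed spikes reach neuron n within a
    single coincidence window of width `window` ms."""
    arrivals = np.sort(spike_times[None, :] + delays, axis=1)
    code = np.zeros(len(delays), dtype=int)
    for n, t in enumerate(arrivals):
        # largest number of arrivals falling in any window of width `window`
        hits = max(np.searchsorted(t, s + window, side="right") - i
                   for i, s in enumerate(t))
        code[n] = int(hits >= thresh)
    return code

# Codewords for two unrelated timing patterns; their Hamming distance is the
# kind of distance property the abstract compares against good random codes.
a = encode(rng.uniform(0.0, 20.0, n_inputs), delays)
b = encode(rng.uniform(0.0, 20.0, n_inputs), delays)
print("code weights:", a.sum(), b.sum(), "distance:", int(np.sum(a != b)))
```

A neuron whose delays happen to compensate a particular input timing pattern fires reliably for that pattern (the polychrony idea), so each pattern selects a sparse, pattern-specific subset of neurons, and the resulting codewords can be compared with ordinary vector operations.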
Join Email List
You can subscribe to our weekly seminar email list by sending an email to majordomo@lists.berkeley.edu with the words "subscribe redwood" in the body of the message. (Note: the subject line can be arbitrary and will be ignored.)