
Gerald Friedland
UC Berkeley

A Capacity Scaling Law for Artificial Neural Networks

Wednesday 06th of September 2017 at 12:00pm
560 Evans

In this talk, we derive two critical numbers that quantify the capabilities of artificial neural networks with gating functions, such as sign, sigmoid, or rectified linear units. First, we derive the upper limit of the Vapnik-Chervonenkis dimension of a network with a binary output layer, which is the theoretical limit for perfect fitting of the training data. Second, we derive what we call the MacKay dimension of the network. This is a theoretical limit indicating necessary catastrophic forgetting, i.e., the upper limit for most uses of the network. Our derivation of the capacity is embedded into a Shannon communication model, which allows measuring the capacities of neural networks in bits. We then compare our theoretical derivations with experiments using different network configurations and depths, diverse neural network implementations, varying activation functions, and several learning algorithms to confirm our upper bound. The result is that the capacity of a fully connected perceptron network scales strictly linearly in the number of weights.
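The following is a minimal sketch (not the speaker's experimental code) of the kind of capacity experiment the abstract describes: train a small fully connected network on randomly labeled data and check how many points it can memorize perfectly relative to its number of trainable parameters. The network size, training settings, and dataset sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

d, hidden = 10, 8                                   # input dimension and hidden units (assumed)
n_params = d * hidden + hidden + hidden * 1 + 1     # weights + biases of a one-hidden-layer net

for n_points in (n_params // 2, n_params, 2 * n_params):
    X = rng.standard_normal((n_points, d))
    y = rng.integers(0, 2, size=n_points)           # random labels: a pure memorization task
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), activation="relu",
                        max_iter=5000, tol=1e-6, random_state=0)
    clf.fit(X, y)
    acc = clf.score(X, y)                           # training accuracy = fraction memorized
    print(f"{n_points:3d} points vs {n_params} parameters -> train accuracy {acc:.2f}")
```

If the capacity indeed scales linearly with the number of weights, perfect fitting of random labels should break down once the number of points exceeds roughly the parameter count.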

Paper: https://arxiv.org/abs/1708.06019
(video)
