Colloquium Details

Convolutional Networks against the Curse of Dimensionality

Speaker: Joan Bruna, University of California at Berkeley

Location: Warren Weaver Hall 1302

Date: February 29, 2016, 11:30 a.m.

Host: Subhash Khot

Synopsis:

Convolutional Neural Networks (CNNs) are a powerful class of non-linear representations that have demonstrated, across numerous supervised learning tasks, their ability to extract rich information from images, speech, and text, with excellent statistical generalization. These are examples of truly high-dimensional signals, for which classical statistical models suffer from the so-called curse of dimensionality: they cannot generalize well unless provided with exponentially large amounts of training data.
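
As a rough, back-of-the-envelope aside (not material from the talk), the short Python snippet below illustrates the curse of dimensionality: covering the unit cube [0, 1]^d at a fixed resolution requires a number of samples that grows exponentially with the dimension d. The resolution value used here is an arbitrary assumption.

    # Illustrative only: covering [0, 1]^d at resolution eps requires on the
    # order of (1 / eps)^d samples, i.e. exponentially many in d.
    eps = 0.1
    for d in (2, 10, 100):
        print(f"d = {d:3d}: about {(1 / eps) ** d:.0e} samples at resolution {eps}")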

To gain insight into the reasons for this success, in this talk we will start by studying statistical models defined from wavelet scattering networks, a class of CNNs whose convolutional filter banks are given by complex, multi-resolution wavelet families. Thanks to this extra structure, they are provably stable and locally invariant signal representations, and they yield state-of-the-art classification results on several pattern and texture recognition problems. Their success lies in their ability to preserve discriminative information while remaining stable with respect to high-dimensional deformations, providing a framework that partially extends to trained CNNs.
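
For readers who want a concrete picture of this architecture, the following is a minimal, illustrative NumPy sketch of a two-layer 1-D scattering transform: a cascade of complex wavelet convolutions, modulus non-linearities, and low-pass averaging. The specific filters (Gaussian band-pass approximations of analytic wavelets), scales, and parameters are assumptions made for exposition, not the speaker's implementation.

    # Minimal 1-D scattering sketch (illustrative only): cascade of complex
    # wavelet convolutions, modulus non-linearities, and low-pass averaging.
    import numpy as np

    def wavelet_bank(n, num_scales):
        """Analytic band-pass filters (Gaussian bumps) in the frequency domain."""
        freqs = np.fft.fftfreq(n)
        bank = []
        for j in range(num_scales):
            center = 0.25 / 2 ** j           # dyadic center frequencies
            width = center / 2.0
            bank.append(np.exp(-((freqs - center) ** 2) / (2 * width ** 2)))
        return np.array(bank)                # shape (num_scales, n)

    def lowpass(n, scale=0.02):
        freqs = np.fft.fftfreq(n)
        return np.exp(-(freqs ** 2) / (2 * scale ** 2))

    def scattering(x, num_scales=4):
        """Return order-0, order-1 and order-2 scattering coefficients of x."""
        n = len(x)
        psi, phi = wavelet_bank(n, num_scales), lowpass(n)
        X = np.fft.fft(x)
        coeffs = [np.real(np.fft.ifft(X * phi))]              # S0: local average
        for j1 in range(num_scales):
            u1 = np.abs(np.fft.ifft(X * psi[j1]))             # |x * psi_{j1}|
            coeffs.append(np.real(np.fft.ifft(np.fft.fft(u1) * phi)))        # S1
            for j2 in range(j1 + 1, num_scales):              # only coarser scales
                u2 = np.abs(np.fft.ifft(np.fft.fft(u1) * psi[j2]))
                coeffs.append(np.real(np.fft.ifft(np.fft.fft(u2) * phi)))    # S2
        return np.stack(coeffs)

    if __name__ == "__main__":
        x = np.random.randn(1024)
        S = scattering(x)
        print(S.shape)   # (1 + 4 + 6, 1024) coefficients for num_scales = 4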

We will give conditions under which signals can be recovered from their scattering coefficients, and will introduce a family of Gibbs processes defined by a collection of scattering CNN sufficient statistics, from which one can sample image and auditory textures. Although the scattering recovery is non-convex and corresponds to a generalized phase recovery problem, gradient descent algorithms show good empirical performance and enjoy weak convergence properties. We will discuss connections with non-linear compressed sensing, applications to texture synthesis and inverse problems such as super-resolution, as well as generalizations to unsupervised learning using deep convolutional sufficient statistics.
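
The generalized phase recovery problem mentioned above can be conveyed with a toy version: recovering a signal from the modulus of random linear measurements by plain gradient descent on a non-convex least-squares loss. The Gaussian measurement matrix, step size, and iteration count below are illustrative assumptions; the talk's setting involves scattering coefficients rather than linear measurements, and convergence from a random start is not guaranteed.

    # Toy generalized phase recovery sketch (illustrative only): recover x from
    # phaseless measurements b = |A x| via gradient descent on
    # 0.5 * sum((|A x| - b)^2).
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 64, 512                       # signal dimension, number of measurements
    A = rng.standard_normal((m, n))      # assumed random Gaussian measurements
    x_true = rng.standard_normal(n)
    b = np.abs(A @ x_true)               # phaseless (modulus) measurements

    x = rng.standard_normal(n) * 0.1     # random initialization
    step = 1.0 / m
    for _ in range(2000):
        z = A @ x
        grad = A.T @ ((np.abs(z) - b) * np.sign(z))   # gradient of the loss
        x -= step * grad

    # Up to a global sign, x typically approaches x_true when m is large enough.
    err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
    print("relative error:", err / np.linalg.norm(x_true))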

Speaker Bio:

Joan graduated cum laude from the Universitat Politècnica de Catalunya in both Mathematics and Electrical Engineering, and obtained an MSc in Applied Mathematics from ENS Cachan. He then became a senior research engineer at an image-processing startup, developing real-time video processing algorithms. In 2013 he obtained his PhD in Applied Mathematics from École Polytechnique. After a postdoctoral stay in the Computer Science Department of the Courant Institute, NYU, he became a postdoctoral fellow at Facebook AI Research. Since January 2015 he has been an Assistant Professor in the Statistics Department at UC Berkeley. His research interests include invariant signal representations, deep learning, stochastic processes, and their applications to computer vision.

Notes:

In-person attendance is available only to those with active NYU ID cards.
