Colloquium Details

Do ImageNet Classifiers Generalize to ImageNet?

Speaker: Ludwig Schmidt, University of California, Berkeley

Location: 60 Fifth Avenue, Room 150

Date: March 4, 2020, 2 p.m.

Host: Michael Overton

Synopsis:

Progress on the ImageNet dataset seeded much of the excitement
around the machine learning revolution of the past decade. In this talk,
we analyze this progress in order to understand the obstacles blocking
the path towards safe, dependable, and secure machine learning.

First, we will investigate the nature and extent of overfitting on ML
benchmarks through reproducibility experiments for ImageNet and other
key datasets. Our results show that overfitting through test set re-use
is surprisingly absent, but distribution shift poses a major open
problem for reliable ML.

In the second part, we will focus on a particular robustness issue,
known as adversarial examples, and develop methods inspired by
optimization and generalization theory to address this issue. We
conclude with a large experimental study of current robustness
interventions that summarizes the main challenges going forward.

Speaker Bio:

Ludwig Schmidt is a postdoctoral researcher at UC Berkeley working
with Moritz Hardt and Ben Recht. Ludwig’s research interests revolve
around the empirical and theoretical foundations of machine learning,
often with a focus on making machine learning more reliable. Before
Berkeley, Ludwig completed his PhD at MIT under the supervision of Piotr
Indyk. Ludwig received a Google PhD fellowship, a Microsoft Simons
fellowship, a best paper award at the International Conference on
Machine Learning (ICML), and the Sprowls dissertation award from MIT.

Notes:

In-person attendance is available only to those with active NYU ID cards.