Colloquium Details

Foundations of Deep Learning: Optimization and Representation Learning

Speaker: Alexandru Damian, Princeton University

Location: 60 Fifth Avenue, Room 150

Date: March 4, 2025, 2 p.m.

Host: Joan Bruna

Synopsis:

Deep learning's success stems from the ability of neural networks to automatically discover meaningful representations from raw data. In this talk, I will describe some recent insights into how optimization enables this learning process. First, I will explore how gradient descent enables neural networks to adapt to low-dimensional structure in the data, and how these ideas naturally extend to understanding the emergence of in-context learning in transformers. I will then discuss my work toward a predictive theory of deep learning optimization that characterizes how different optimizers navigate deep learning loss landscapes and how these different behaviors affect training efficiency, stability, and generalization.

Note: In-person attendance is available only to those with active NYU ID cards.

Speaker Bio:
Alex Damian is a fifth-year Ph.D. student in the Program in Applied and Computational Mathematics (PACM) at Princeton University, advised by Jason Lee. His research focuses on deep learning theory, with an emphasis on optimization and representation learning. His work has been supported by an NSF Graduate Research Fellowship and a Jane Street Graduate Research Fellowship.