Colloquium Details

Test-Time Adaptation

Speaker: Amrith Setlur, Carnegie Mellon University

Location: 60 Fifth Avenue 150

Date: March 23, 2026, 2 p.m.

Hosts: Matus Telgarsky, Pavel Izmailov

Synopsis:

Traditional AI systems rely on models that execute a fixed set of computations at test time. But many of the hardest problems we care about require test-time adaptation: models must perform variable computation at test time, adapting what they do to the specific instance in order to make progress on hard, open-ended tasks. For example, when given a conjecture in research mathematics, a model may need to break the problem into lemmas, test intermediate claims, revise failed proof attempts, and allocate more computation to promising directions. To succeed in such settings, we need to train models to behave as effective algorithms. This is now possible because pre-trained models come with powerful priors that make test-time adaptation feasible. In this talk, I introduce a framework that casts algorithm learning as meta-reinforcement learning, providing a principled foundation both for formal analysis and for the practical design of training objectives and methods for learning algorithms. I conclude by showing how these methods can train a 4B theorem-proving model that acts as an effective algorithm and outperforms models up to 30x larger.

Speaker Bio:

Amrith Setlur is a fifth- and final-year PhD student in the Machine Learning Department at Carnegie Mellon University. His research focuses on the foundations of test-time adaptation and on developing algorithms that enable models to continually adapt, scale computation on individual test instances, and make autonomous progress on hard problems. His work is supported by the JPMorgan AI PhD Fellowship and has been recognized through the Laude Slingshot Grant, conference spotlights and multiple orals at ICML and ICLR, and best paper awards at top ICML, ICLR, and NeurIPS workshops.

Notes:

In-person attendance is available only to those with active NYU ID cards.
