Colloquium Details

Learning to Continually Adapt at Test-Time

Speaker: Amrith Setlur, Carnegie Mellon University

Location: 60 Fifth Avenue, Room 150

Date: March 23, 2026, 2 p.m.

Hosts: Matus Telgarsky, Pavel Izmailov

Synopsis:

Traditional AI systems typically rely on models trained to be static functions: fixed input-output mappings that execute essentially fixed computation at test time. But many of the hardest problems we care about demand generalizable operators: models that, given a new test instance, can adapt their computation, choose what to do next, and make autonomous progress. For example, when presented with a conjecture in research math, such a model might decompose the problem into lemmas, test intermediate claims, and revise failed proof attempts. This shift is now plausible because pre-trained foundation models come with powerful priors distilled from human data, but turning these priors into reliable, self-improving operators requires explicit training in a new learning paradigm. In this talk, I introduce a learning framework for test-time adaptation that recasts operator learning as meta-reinforcement learning. I formally characterize when and why meta-RL provides a statistically efficient route to learning operators, even relative to methods that rely on expert-curated data. I then use this framework to identify the central challenges of the setting, including objective design and exploration on hard problems, and present practical algorithmic solutions with applications to theorem proving and web agents.

Speaker Bio:

Amrith Setlur is a fifth- and final-year PhD student in the Machine Learning Department at Carnegie Mellon University. His research focuses on the foundations of test-time adaptation and on developing algorithms that enable models to continually adapt, scale computation on individual test instances, and make autonomous progress on hard problems. His work is supported by the JPMorgan AI PhD Fellowship and has been recognized with the Laude Slingshot Grant, conference spotlights at ICML and ICLR, multiple oral presentations, and best paper awards at ICML, ICLR, and NeurIPS workshops.

Notes:

In-person attendance is only available to those with active NYU ID cards.
