Colloquium Details
Specializing LLMs for Reliability
Speaker: Greg Durrett
Location: 60 Fifth Avenue, Room 150
Date: March 3, 2025, 2 p.m.
Host: He He
Synopsis:
Large language models (LLMs) have advanced the frontiers of AI reasoning: they can synthesize information from multiple sources, derive new conclusions, and explain those conclusions to their users. However, LLMs do not do this reliably. They hallucinate facts, convincingly state incorrect deductions, and exhibit logical fallacies like confirmation bias. In this talk, I will describe my lab's work on making LLM systems reliable by introspecting their behavior. First, I will argue that automating fine-grained evaluation of LLM output provides a level of understanding necessary for further progress. I will describe the ingredients of effective automated evaluators and a state-of-the-art factuality evaluation system, MiniCheck, showing that analyzing the nature of hallucinations can help reduce them. Second, I will demonstrate that better understanding of LLMs' internal reasoning processes helps us train them to be more reliable. Our work shows that model interpretation techniques can advance training methodology and dataset curation for reasoning models. Finally, I will describe how deeper understanding of LLMs will let us tackle their most fundamental limitations, such as their inconsistency when given different inputs. I will propose how these pieces might soon be combined to form reliable AI systems.
Zoom: https://nyu.zoom.us/j/99562069881
Note: In-person attendance only available to those with active NYU ID cards.