Speaker: Greg Durrett, University of California, Berkeley
Location: Warren Weaver Hall 1302
Date: March 11, 2016, 11:30 a.m.
Host: Subhash Khot
One reason analyzing text is hard is that it involves deeply entangled linguistic variables: syntactic structures, semantic types, and discourse relations depend on one another in complex ways. Our work tackles these interactions directly using joint modeling. This model structure allows us to combine component models for each analysis subtask and pass information between them, both reconciling and reinforcing the components' predictions. Joint models can also capture the dual discrete and continuous nature of language, specifically by integrating neural networks, which process continuous signals, with discrete structured models. We describe state-of-the-art systems for a range of established NLP tasks, including syntactic parsing, entity resolution, and document summarization.
Greg is a Ph.D. candidate at UC Berkeley working on natural language processing with Dan Klein. He is interested in building structured machine learning models for a wide variety of language understanding problems and downstream NLP applications. His work combines two broad thrusts: first, designing joint models that combine information across different tasks or different views of a problem, and second, building systems that strike a balance between being linguistically motivated and data-driven.
Refreshments will be offered starting 15 minutes prior to the scheduled start of the talk.