Computer Science NASC Seminar
The Search for an Efficient and Robust Solver for Large-scale Nonlinear Optimization
Daniel Robinson, Johns Hopkins University
November 16, 2012
Warren Weaver Hall, Room 1302
251 Mercer Street
New York, NY 10012-1110
During this lecture I unify previous and current research to address the following question: do augmented Lagrangian (AL) methods deserve more respect for their ability to solve large-scale optimization problems? To this end, I will (i) discuss the well-known strengths of AL methods; (ii) discuss their weaknesses and present recent research that aims to mitigate them, including a new adaptive penalty-parameter updating strategy and a new block active-set quadratic programming solver; (iii) discuss how AL methods are related to other algorithms and draw upon the additional insight those connections provide; and (iv) suggest how our new techniques may be used to improve closely related algorithms such as the alternating direction method of multipliers (ADMM), now a common distributed algorithm for solving various machine-learning problems, among others.
The main message that I hope to convey is that augmented Lagrangian methods should not be overlooked as a powerful tool for solving very large-scale optimization problems.
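To make the basic mechanics concrete, here is a minimal sketch of a classical augmented Lagrangian iteration on a toy equality-constrained problem. This is an illustration of the textbook method only, not the speaker's algorithm; the problem, starting point, penalty value, and the crude penalty-increase rule are all assumptions chosen for the example.

```python
# Toy problem: minimize f(x) = (x0-1)^2 + (x1-2)^2
#              subject to c(x) = x0 + x1 - 2 = 0.
# Exact solution: x* = (0.5, 1.5), multiplier y* = 1.
# The augmented Lagrangian is L_A(x; y, rho) = f(x) + y*c(x) + (rho/2)*c(x)^2.

def f_grad(x):
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)]

def c(x):
    return x[0] + x[1] - 2.0  # single linear equality constraint

def solve_al(rho=10.0, outer=30, inner=200):
    x, y = [0.0, 0.0], 0.0               # primal iterate and multiplier estimate
    viol_prev = abs(c(x))
    for _ in range(outer):
        step = 1.0 / (2.0 + 2.0 * rho)   # safe gradient step for this quadratic
        for _ in range(inner):           # approximately minimize L_A(.; y, rho)
            s = y + rho * c(x)           # shifted-multiplier term in the gradient
            g = f_grad(x)
            x = [x[0] - step * (g[0] + s), x[1] - step * (g[1] + s)]
        y += rho * c(x)                  # first-order multiplier update
        viol = abs(c(x))
        if viol > 0.25 * viol_prev:      # crude fixed rule, NOT the adaptive
            rho *= 10.0                  # strategy discussed in the talk
        viol_prev = viol
    return x, y

x, y = solve_al()
print(x, y)  # x approaches (0.5, 1.5); y approaches 1.0
```

The outer loop is where AL methods live or die: the multiplier update drives the constraint violation to zero, while the penalty-update rule (here a deliberately naive one) is exactly the kind of component the adaptive strategy mentioned in the abstract aims to improve.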
Philip Gill (Professor, University of California at San Diego)
Frank Curtis (Assistant Professor, Lehigh University)
Hao Jiang (Ph.D. Student, Johns Hopkins University)