Speaker: Daniel Robinson, Johns Hopkins University
Location: Warren Weaver Hall 1302
Date: Nov. 16, 2012, 9 a.m.
During this lecture I unify previous and current research to address the following question: do augmented Lagrangian (AL) methods deserve more respect for their ability to solve large-scale optimization problems? To this end, I will (i) discuss the well-known strengths of AL methods; (ii) discuss their weaknesses and present recent research that aims to mitigate them, including a new adaptive penalty parameter updating strategy and a new block active-set quadratic programming solver; (iii) discuss how AL methods are related to other algorithms and draw upon the additional insight that those connections provide; and (iv) suggest how our new techniques may be used to improve closely related algorithms such as the alternating direction method of multipliers (ADMM), which is now a common distributed algorithm for solving machine learning problems, among others.
The main message that I hope to convey is that augmented Lagrangian methods should not be overlooked as a powerful tool for solving very large-scale optimization problems.
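To make the ADMM connection concrete, the following is a minimal sketch (not the speaker's method) of scaled-form ADMM applied to a small lasso problem, minimize (1/2)||Ax - b||^2 + lam*||x||_1; the fixed penalty parameter rho here is exactly the quantity an adaptive updating strategy of the kind described above would tune. All names, data, and parameter values are illustrative.

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of k*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=500):
    # Scaled-form ADMM for: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    # rho is the (here fixed) penalty parameter of the augmented Lagrangian.
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)  # consensus copy of x
    u = np.zeros(n)  # scaled dual variable (multiplier estimate / rho)
    M = A.T @ A + rho * np.eye(n)  # system matrix, fixed across iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))  # x-update: ridge-type solve
        z = soft_threshold(x + u, lam / rho)         # z-update: prox of the l1 term
        u = u + x - z                                # dual update on the residual x - z
    return z

# Illustrative data: recover a sparse vector from noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.5, 0.0, -2.0, 0.0, 0.0])
b = A @ x_true
x_hat = lasso_admm(A, b, lam=0.1)
```

The x-update is an unconstrained quadratic subproblem and the z-update is a cheap separable proximal step; splitting the objective this way is what makes ADMM attractive for large-scale and distributed settings, while its practical speed is sensitive to the choice of rho.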
Co-authors:
- Philip Gill (Professor, University of California, San Diego)
- Frank Curtis (Assistant Professor, Lehigh University)
- Hao Jiang (Ph.D. student, Johns Hopkins University)