Colloquium Details

Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers

Speaker: Stephen Boyd, Stanford

Location: Warren Weaver Hall 1302

Date: October 14, 2011, 11:30 a.m.

Host: Mehryar Mohri

Synopsis:

Joint work with Neal Parikh, Eric Chu, Borja Peleato, and Jon Eckstein

Problems in areas such as machine learning and dynamic optimization on a large network lead to extremely large convex optimization problems, with problem data stored in a decentralized way, and processing elements distributed across a network. We argue that the alternating direction method of multipliers is well suited to such problems. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas-Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for $\ell_1$ problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to statistical and machine learning problems such as the lasso and support vector machines, and to dynamic energy management problems arising in the smart grid.

Notes:

In-person attendance is available only to those with active NYU ID cards.
