Speaker: Katya Scheinberg, Columbia University
Location: Warren Weaver Hall 1302
Date: May 7, 2010, 10 a.m.
First-order methods with favorable convergence rates have recently become a focal point of much research in convex optimization. These methods have low per-iteration complexity and hence are applicable to very large-scale models, such as those arising in signal processing, statistics, and machine learning. We will discuss several convex optimization problems arising in the context of machine learning, and show how various techniques can be used to improve the practical performance of first-order methods while maintaining their theoretical convergence rates.
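As a concrete illustration of the kind of method the abstract refers to (not necessarily the specific algorithms of the talk), the sketch below implements proximal gradient descent (ISTA) for the lasso problem, a standard convex machine-learning formulation. The problem instance, parameter values, and function names are illustrative assumptions; the point is that each iteration costs only a couple of matrix-vector products, which is what makes first-order methods viable at large scale.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (elementwise shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iters=500):
    # Proximal gradient (ISTA) for: minimize 0.5*||Ax - b||^2 + lam*||x||_1.
    # Each iteration needs only A @ x and A.T @ r: low per-iteration cost.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Hypothetical small instance: recover a sparse vector from noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print(np.round(x_hat[:3], 1))
```

ISTA converges at an O(1/k) rate in objective value; the "various techniques" the talk alludes to (e.g. acceleration, active-set strategies) aim to improve such methods in practice without sacrificing guarantees of this kind.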