Numerical Analysis and Scientific Computing Seminar
Random-then-Greedy Procedure: From Empirical Risk Minimization to Gradient Boosting Machine
Speaker: Hai-hao (Sean) Li, Google Research and University of Chicago
Location: Warren Weaver Hall 1302
Date: Nov. 22, 2019, 10 a.m.
The gradient boosting machine (GBM) is one of the most successful supervised learning algorithms, and it has been the dominant method in many data science competitions, including Kaggle competitions and the KDD Cup. The first part of the talk studies the random-then-greedy procedure in the context of empirical risk minimization; in the second part, we present the Random-then-Greedy Gradient Boosting Machine (RtGBM), which lowers the per-iteration cost and achieves improved performance in both theory and practice.
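To make the selection rule concrete, the following is a minimal sketch, in Python with NumPy, of a random-then-greedy weak-learner selection step inside a least-squares boosting loop: at each round a small random subset of candidate split features is drawn, and the best-fitting one-split stump within that subset is added greedily. The function name `random_then_greedy_gbm`, the `subset_size` parameter, the median-split stumps, and the squared-error loss are illustrative assumptions for this sketch, not details taken from the talk.

```python
import numpy as np

def random_then_greedy_gbm(X, y, n_rounds=100, subset_size=5, lr=0.1, seed=None):
    """Least-squares boosting with one-split stumps as weak learners.

    Each round: (1) randomly sample a subset of candidate split features,
    (2) greedily keep the stump that best fits the current residuals.
    (Illustrative sketch; parameter names and stump form are assumptions.)
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pred = np.zeros(n)
    ensemble = []
    for _ in range(n_rounds):
        residual = y - pred  # negative gradient of the squared loss
        # Random step: restrict attention to a small random subset of features.
        candidates = rng.choice(d, size=min(subset_size, d), replace=False)
        best = None
        # Greedy step: among the sampled candidates, pick the best-fitting stump.
        for j in candidates:
            thr = np.median(X[:, j])          # one crude split point per feature
            left = X[:, j] <= thr
            if left.all() or not left.any():  # degenerate split, skip it
                continue
            lval, rval = residual[left].mean(), residual[~left].mean()
            sse = np.sum((residual - np.where(left, lval, rval)) ** 2)
            if best is None or sse < best[0]:
                best = (sse, j, thr, lval, rval)
        if best is None:
            continue
        _, j, thr, lval, rval = best
        pred += lr * np.where(X[:, j] <= thr, lval, rval)
        ensemble.append((j, thr, lval, rval))
    return ensemble, pred

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    y = X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)
    _, pred = random_then_greedy_gbm(X, y, n_rounds=200, subset_size=3, seed=1)
    print("training MSE:", np.mean((y - pred) ** 2))
```

The point of the sketch is the trade-off the abstract alludes to: scanning only a random subset of candidate weak learners is cheaper per iteration than scanning all of them, while the greedy choice within the subset keeps each boosting step effective.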