Extreme Gradient Boosting (XGBoost)
XGBoost is a specific implementation of gradient boosting (GBoost) that uses more accurate approximations of the loss function when searching for the best tree. It adds several improvements that make it faster and, in some cases, more accurate than standard GBoost.
The most important improvements over GBoost are:
Computing second-order gradients—second partial derivatives of the loss function—which carry more information about the curvature of the loss surface and how to reach its minimum. Whereas regular gradient boosting fits each new tree to the first-order gradient of the loss (the residuals), XGBoost fits each tree to a second-order Taylor approximation of the loss.
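A toy sketch of the idea, under the assumption of squared-error loss on a single leaf: the per-example first- and second-order gradients are g = p − y and h = 1, and the second-order approximation gives a closed-form optimal leaf weight w* = −Σg / (Σh + λ), where λ is the L2 term introduced below.

```python
def optimal_leaf_weight(y_true, y_pred, lam=0.0):
    """Optimal leaf weight w* = -sum(g) / (sum(h) + lam) for squared error."""
    g = [p - y for y, p in zip(y_true, y_pred)]  # first-order gradients
    h = [1.0 for _ in y_true]                    # second-order gradients
    return -sum(g) / (sum(h) + lam)

y_true = [3.0, 5.0, 7.0]
y_pred = [4.0, 4.0, 4.0]  # current ensemble prediction for all three examples

w_unreg = optimal_leaf_weight(y_true, y_pred)          # -> 1.0, the mean residual
w_reg = optimal_leaf_weight(y_true, y_pred, lam=1.0)   # -> 0.75, shrunk toward 0
```

With λ = 0 the weight reduces to the mean residual, which is exactly what a plain gradient-boosted regression tree would predict in that leaf; the second-order machinery pays off for losses whose h is not constant.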
Advanced regularization—L1 (alpha) and L2 (lambda)—which improves model generalization.
The L1 (alpha) and L2 (lambda) regularization parameters penalize large leaf weights, which mitigates the overfitting that plain GBoost is prone to. The algorithm has been widely successful in data science competitions and has attracted broad interest among data scientists.
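A minimal sketch of how the two penalties enter the optimal leaf weight (an assumption based on the published XGBoost objective; G and H denote the sums of first- and second-order gradients over the examples in a leaf): L2 inflates the denominator and shrinks every weight, while L1 soft-thresholds the gradient sum and can zero a weight out entirely.

```python
def soft_threshold(G, alpha):
    # L1 (alpha): shrink the gradient sum toward zero, clipping small sums to 0.
    if G > alpha:
        return G - alpha
    if G < -alpha:
        return G + alpha
    return 0.0

def leaf_weight(G, H, alpha=0.0, lam=1.0):
    # L2 (lambda): larger lambda means a larger denominator, so smaller weights.
    return -soft_threshold(G, alpha) / (H + lam)

w_l2 = leaf_weight(G=-3.0, H=3.0, alpha=0.0, lam=1.0)    # -> 0.75
w_l1l2 = leaf_weight(G=-3.0, H=3.0, alpha=0.5, lam=1.0)  # -> 0.625
w_zero = leaf_weight(G=-3.0, H=3.0, alpha=5.0, lam=1.0)  # -> 0.0 (pruned)
```

The last case shows why L1 is associated with sparsity: a leaf whose gradient sum is smaller than alpha contributes nothing to the prediction.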
The hyperparameters for this model type are:
- Number of trees
- Learning rate
- L2 regularization term on weights
- L1 regularization term on weights
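The four hyperparameters above map onto named arguments of the `xgboost` package's scikit-learn wrapper; the names below reflect my understanding of that API, so check the documentation of the installed version.

```python
# Assumed parameter names in xgboost's scikit-learn wrapper:
params = {
    "n_estimators": 200,   # number of trees
    "learning_rate": 0.1,  # shrinkage applied to each tree's contribution
    "reg_lambda": 1.0,     # L2 regularization term on leaf weights
    "reg_alpha": 0.0,      # L1 regularization term on leaf weights
}

# model = xgboost.XGBRegressor(**params)  # requires the xgboost package
# model.fit(X_train, y_train)
```

In the lower-level `xgboost.train` API the same knobs are conventionally spelled `eta`, `lambda`, and `alpha`, with the tree count passed as `num_boost_round`.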