
Gradient Boosting (GBoost)

Gradient boosting, or GBoost, is an ensemble machine learning technique, commonly used for classification tasks, that combines decision tree models. It works by adding decision tree models sequentially, where each new model tries to predict the error left over by the previous ones.
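
A minimal sketch of this idea, using scikit-learn's GradientBoostingClassifier on a synthetic dataset (the dataset and hyperparameter values below are illustrative assumptions, not results from this article):

```python
# Minimal sketch: gradient boosting for a binary classification task,
# built as a sequential ensemble of shallow decision trees.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative synthetic dataset (assumption, not from the article)
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Each of the 100 trees is fit to the errors left by the trees before it
model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X_train, y_train)
print("Test accuracy:", model.score(X_test, y_test))
```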

The boosting technique focuses on sequentially adding base estimators, at each step concentrating on the observations the current ensemble gets wrong. Each new estimator is therefore trained on the residual errors left by the previous ones. Finally, the model aggregates the results of every step, achieving a stronger model than any of its base estimators.
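
To make that residual-fitting loop concrete, here is a simplified from-scratch sketch for the squared-error regression case, where the residuals are exactly what each new tree is trained on; the helper names, tree depth, and learning rate are illustrative choices, not a reference implementation:

```python
# Simplified sketch: each new tree is trained on the residuals
# (what the current ensemble still gets wrong), and the final
# prediction aggregates the contributions of every tree.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted_trees(X, y, n_estimators=50, learning_rate=0.1):
    base = y.mean()                        # start from a constant prediction
    prediction = np.full(len(y), base)
    trees = []
    for _ in range(n_estimators):
        residuals = y - prediction         # errors of the ensemble so far
        tree = DecisionTreeRegressor(max_depth=3)
        tree.fit(X, residuals)             # next estimator learns the residuals
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return base, trees

def predict_boosted(base, trees, X, learning_rate=0.1):
    # Aggregate the base value and the scaled contribution of every tree
    return base + learning_rate * sum(tree.predict(X) for tree in trees)
```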

Similar to how neural networks use gradient descent to optimize their weights, gradient boosting uses the gradient to minimize a loss function. During each training round, a decision tree is added, and its predictions are compared to the true targets.

The difference between the predictions and the true targets represents the error of the model.

Those errors can then be used to calculate the gradient: the partial derivative of the selected loss function with respect to the model's predictions, which describes the steepness of the error surface.

In gradient boosting, these gradients feed back into the training process: the next tree is fit to the negative gradient values, often called pseudo-residuals.
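
The sketch below shows a single boosting round under binary log loss, where the pseudo-residuals (the negative gradient of the loss with respect to the current raw scores) are the values the next tree is fit to; the function names and parameter values are illustrative assumptions:

```python
# One boosting round under binary log loss: fit the next tree to the
# negative gradient (pseudo-residuals) of the loss w.r.t. the raw scores.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def boosting_round(X, y, raw_scores, learning_rate=0.1):
    p = sigmoid(raw_scores)           # current probability estimates
    pseudo_residuals = y - p          # negative gradient of the log loss
    tree = DecisionTreeRegressor(max_depth=3)
    tree.fit(X, pseudo_residuals)     # fit the next tree to these gradient values
    return raw_scores + learning_rate * tree.predict(X), tree
```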

Models of this type tend to perform better than Random Forests but need to be tuned carefully. They perform well on imbalanced datasets, but they tend to overfit on noisy data and when the number of estimators is high.
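
One common way to keep such a model from overfitting is to shrink the learning rate, subsample the training data, and stop adding trees once a held-out validation score stops improving; the parameter values in this sketch are illustrative, not recommended defaults:

```python
# Sketch: taming overfitting with a small learning rate, subsampling,
# and early stopping on a held-out validation fraction.
from sklearn.ensemble import GradientBoostingClassifier

model = GradientBoostingClassifier(
    n_estimators=500,         # upper bound on the number of trees
    learning_rate=0.05,       # smaller steps generalize better on noisy data
    max_depth=3,              # shallow trees as weak learners
    subsample=0.8,            # stochastic gradient boosting
    validation_fraction=0.1,  # data held out to monitor validation loss
    n_iter_no_change=10,      # stop once validation stops improving
)
# model.fit(X_train, y_train)  # X_train / y_train assumed from the earlier example
```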