XGBoost Hyperparameters: Early Stopping
Learn how early stopping can reduce overfitting in a gradient boosted tree ensemble trained with XGBoost.
We'll cover the following
Early stopping as a method for reducing overfitting
When training ensembles of decision trees with XGBoost, there are many options available for reducing overfitting and navigating the bias-variance trade-off. Early stopping is one of the simplest and can provide an automated answer to the question “How many boosting rounds are needed?” It’s important to note that early stopping relies on having a validation set of data that is separate from the training set. However, this validation set is actually used during the model training process, so it does not qualify as “unseen” data held out from model training. This is similar to how we used validation sets in cross-validation to select model hyperparameters in the chapter “The Bias-Variance Trade-Off.”
When XGBoost trains successive decision trees to reduce error on the training set, adding more and more trees to the ensemble may fit the training data increasingly well while degrading performance on held-out data. To avoid this, we can use a validation set, also called an evaluation set or eval_set by XGBoost. The evaluation set is supplied as a list of tuples, each containing features and the corresponding response variable. Whichever tuple comes last in this list is the one used for early stopping. We want this to be the validation set, because the training data is used to fit the model and so can’t provide an estimate of out-of-sample generalization:
eval_set = [(X_train, y_train), (X_val, y_val)]
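For context, here is a minimal sketch of how such a split might be created. The names X and y for the full feature matrix and labels are illustrative assumptions, not variables defined earlier in this lesson:

from sklearn.model_selection import train_test_split

# Hold out part of the data as a validation set for early stopping;
# X and y are assumed to be the full feature matrix and labels.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)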
Now we can fit the model again, but this time we supply the eval_set keyword argument with the evaluation set we just created. At this point, the eval_metric of auc becomes important: after each boosting round, before training another decision tree, the area under the ROC curve is evaluated on every dataset supplied in eval_set. Because we’ll pass verbose=True to the fit method, output is printed below the cell with the ROC AUC for both the training set and the validation set after each round. This provides a nice live look at how model performance on the training and validation data changes as more boosting rounds are trained.
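As a rough illustration, a fit call along these lines produces the per-round output described above. This is a sketch assuming a recent XGBoost version (1.6 or later), where eval_metric and early_stopping_rounds are passed to the XGBClassifier constructor rather than to fit; the specific hyperparameter values are placeholders:

from xgboost import XGBClassifier

model = XGBClassifier(
    n_estimators=1000,         # upper bound on the number of boosting rounds
    eval_metric='auc',         # metric computed on each eval_set dataset
    early_stopping_rounds=10,  # stop if validation AUC fails to improve for 10 rounds
)
model.fit(
    X_train, y_train,
    eval_set=eval_set,  # last tuple, (X_val, y_val), drives early stopping
    verbose=True,       # print train and validation AUC after each round
)
print(model.best_iteration)  # boosting round with the best validation score

With early stopping in place, n_estimators acts only as an upper bound: training halts once the validation metric stops improving for the specified number of rounds.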