Early stopping is a technique that prevents overfitting by halting training before the model begins to memorize the training data. It monitors the model's performance on a held-out validation set and stops training when the validation error stops improving or begins to increase.
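To make the mechanism concrete, here is a minimal sketch of an early-stopping training loop in Python. The `model` object and its `fit_one_epoch`, `evaluate`, `get_weights`, and `set_weights` methods are hypothetical placeholders for whatever training framework is in use; the `patience` parameter, a common refinement, tolerates a few non-improving epochs before halting.

```python
def train_with_early_stopping(model, train_data, val_data,
                              max_epochs=100, patience=5):
    """Train until the validation loss fails to improve for `patience` epochs."""
    best_val_loss = float("inf")
    best_weights = None
    epochs_since_improvement = 0

    for epoch in range(max_epochs):
        model.fit_one_epoch(train_data)      # hypothetical: one pass over the training set
        val_loss = model.evaluate(val_data)  # hypothetical: loss on the validation set

        if val_loss < best_val_loss:
            best_val_loss = val_loss
            best_weights = model.get_weights()  # snapshot the best model so far
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
            if epochs_since_improvement >= patience:
                break  # validation error has stopped improving: halt training

    if best_weights is not None:
        model.set_weights(best_weights)  # roll back to the best checkpoint
    return model
```

Restoring the saved snapshot at the end means the returned model corresponds to the epoch with the lowest validation error, not simply the last epoch trained.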
Early stopping helps strike a balance between underfitting and overfitting. Because training halts near the point of best validation performance, it saves compute and yields a model that generalizes better to unseen data.
It is commonly used when training deep neural networks, where the model may have the capacity to memorize the training data, leading to poor generalization.