Early stopping is a regularization technique used in supervised learning to prevent overfitting by halting the training process when the model's performance on a validation set starts to degrade. This approach helps the model generalize to unseen data instead of merely memorizing the training dataset. By monitoring the validation loss or accuracy during training, one can identify the optimal point to stop, thus striking a balance between model complexity and predictive performance.
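The sketch below illustrates the core loop in Python: track the best validation loss seen so far, and stop once it fails to improve for a fixed number of epochs (the "patience"). The `train_one_epoch` and `evaluate` functions are hypothetical placeholders standing in for a real training step and validation pass; `evaluate` simulates a loss curve that improves and then degrades so the stopping condition actually fires.

```python
# Minimal early-stopping sketch. train_one_epoch and evaluate are
# hypothetical stand-ins for a real training step and validation pass.

def train_one_epoch(epoch):
    # Placeholder for one pass over the training data.
    pass

def evaluate(epoch):
    # Placeholder validation loss: decreases until epoch 10, then rises,
    # mimicking the onset of overfitting.
    return abs(epoch - 10) * 0.05 + 0.30

def fit(max_epochs=50, patience=3):
    best_loss = float("inf")
    best_epoch = 0
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_one_epoch(epoch)
        val_loss = evaluate(epoch)

        if val_loss < best_loss:
            # Validation improved: record the new best and reset patience.
            # In practice you would also checkpoint the model weights here.
            best_loss = val_loss
            best_epoch = epoch
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                print(f"Stopping early at epoch {epoch}; best validation "
                      f"loss {best_loss:.3f} at epoch {best_epoch}")
                break

    return best_epoch, best_loss

if __name__ == "__main__":
    fit()
```

In practice, the model weights from the best epoch (not the final one) are restored after stopping, so the returned model corresponds to the point of best validation performance.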