Mini-batch gradient descent is an optimization algorithm that updates the parameters of a machine learning model by computing the gradient of the loss function with respect to the parameters on a small, randomly selected subset (mini-batch) of the training data. This approach strikes a balance between batch gradient descent, which uses the entire dataset for each update, and stochastic gradient descent, which updates parameters using a single training example. By processing mini-batches, the method makes many updates per pass over the data (and each batch can be computed with efficient vectorized operations) while keeping the variance of the updates lower than in stochastic gradient descent, making it particularly effective for large datasets.
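To make the idea concrete, here is a minimal sketch of mini-batch gradient descent for linear regression with a mean-squared-error loss. The model choice and all names (`minibatch_gd`, `X`, `y`, `batch_size`, `lr`, `epochs`) are illustrative assumptions, not part of the definition above.

```python
import numpy as np

def minibatch_gd(X, y, batch_size=32, lr=0.01, epochs=100):
    """Fit a linear model y ~ X @ w + b by mini-batch gradient descent.

    Illustrative sketch: minimizes mean squared error, one mini-batch
    update at a time.
    """
    n_samples, n_features = X.shape
    w = np.zeros(n_features)  # model weights
    b = 0.0                   # bias term
    for _ in range(epochs):
        # Shuffle once per epoch so each pass sees a fresh random
        # partition of the data into mini-batches.
        perm = np.random.permutation(n_samples)
        for start in range(0, n_samples, batch_size):
            idx = perm[start:start + batch_size]
            X_b, y_b = X[idx], y[idx]
            # Gradient of the MSE loss computed on this mini-batch only,
            # not on the full dataset (batch GD) or one point (SGD).
            error = X_b @ w + b - y_b
            grad_w = X_b.T @ error / len(idx)
            grad_b = error.mean()
            # Parameter update in the direction of the negative gradient.
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b

# Example usage on synthetic data (values are arbitrary for illustration):
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.1 * rng.normal(size=1000)
w, b = minibatch_gd(X, y)
```

Note the role of `batch_size`: setting it to 1 recovers stochastic gradient descent, while setting it to the full dataset size recovers batch gradient descent, with typical values like 32 or 64 trading off update variance against per-update cost.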