Real World ML - Understanding Batch Size. Train Faster and Better Deep Learning Models

Have you ever spent days fine-tuning a deep learning model, only to see no difference in its performance?

A couple of days ago, I was helping a friend of mine fine-tune a CNN classifier.

We spent hours adjusting every hyper-parameter trying to improve the accuracy, only to be thwarted by something as seemingly simple as batch size.

This tiny yet powerful hyper-parameter can make or break your training process, impacting convergence speed, stability, and overall performance.

Why does it matter so much, and how can you harness its power to achieve faster, more accurate results?

In this article, I'll demystify batch size and walk through practical strategies for selecting the optimal value.

I'll save you a ton of research and experimentation, so you'll have a clear understanding of how to fine-tune this critical hyper-parameter.

Read on to find out! 👇

Understanding Batch Size

Before we dive into the impact of batch size on learning curves, let's first understand what batch size means in the context of deep learning.

Batch size refers to the number of training examples used in one iteration of the training process.

It determines how many samples the model processes before updating its weights and biases.

The choice of batch size can have a profound effect on the model's learning dynamics and the shape of the learning curves.
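
To make this concrete, here's a minimal sketch in PyTorch (assuming a hypothetical toy dataset) of where batch size enters a typical training loop: the DataLoader groups the examples into batches, and the model's weights are updated once per batch.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy data: 1,000 examples, 20 features, 2 classes
X = torch.randn(1000, 20)
y = torch.randint(0, 2, (1000,))

loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)  # batch size set here

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in loader:       # each xb contains `batch_size` examples
        loss = loss_fn(model(xb), yb)
        optimizer.zero_grad()
        loss.backward()         # gradients averaged over the batch
        optimizer.step()        # weights updated once per batch
```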

Stochastic Gradient Descent (SGD) vs. Batch Gradient Descent (BGD)

Let's explore the two extreme cases: Stochastic Gradient Descent (SGD) with a batch size of 1 and Batch Gradient Descent (BGD) with a batch size equal to the entire training set.

Stochastic Gradient Descent (Batch Size = 1)

When using a batch size of 1, known as Stochastic Gradient Descent (SGD), the model updates its weights after each individual training example.

SGD has some distinct characteristics and impacts on the learning process:

  • Noisy Gradient Estimates: Introduces a high level of noise into the gradient estimates, leading to unstable and fluctuating learning curves. This noise helps the model escape local minima, promoting better generalization.

  • Faster Convergence in Terms of Epochs: SGD often leads to faster convergence in terms of the number of epochs required. Since the model updates its weights more frequently, it can adapt quickly to new patterns in the data.

  • Risk of Underfitting: Due to the noisy updates, SGD may struggle to converge to the optimal solution. The noise can prevent the model from settling into a stable state, leading to potential underfitting.

Batch Gradient Descent (Batch Size = Entire Training Set)

On the opposite end of the spectrum is Batch Gradient Descent (BGD), where the batch size is equal to the entire training set.

BGD has its own set of characteristics:

  • Stable and Deterministic Gradient Estimates: Updates weights after processing all training examples, resulting in very stable and smooth learning curves with minimal fluctuations.

  • Slower Convergence: BGD requires computing the gradients over the entire dataset before making an update, which can be computationally expensive. As a result, convergence is typically slower compared to SGD.

  • Risk of Overfitting: Since the model sees the entire dataset before making updates, it can potentially memorize noise and specific patterns, leading to overfitting. BGD is more prone to getting stuck in local minima and may require additional regularization techniques.

  • Memory Requirements: BGD requires significant memory resources to store the entire dataset during each iteration. This can be a limitation when dealing with large datasets or limited computational resources.
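
In code, the two extremes differ only in how the batch size is set. A minimal PyTorch sketch, again assuming a hypothetical toy dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical toy dataset of 1,000 examples
dataset = TensorDataset(torch.randn(1000, 20), torch.randint(0, 2, (1000,)))

# Stochastic Gradient Descent: one example per weight update
sgd_loader = DataLoader(dataset, batch_size=1, shuffle=True)

# Batch Gradient Descent: the entire training set per weight update
bgd_loader = DataLoader(dataset, batch_size=len(dataset))

# With 1,000 training examples:
#   SGD -> 1,000 weight updates per epoch (noisy, frequent)
#   BGD ->     1 weight update per epoch (stable, but expensive per step)
```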

Mini-Batch Stochastic Gradient Descent (Mini-Batch SGD)

In practice, the most commonly used approach is Mini-Batch Stochastic Gradient Descent (Mini-Batch SGD), where the batch size is set to a value between 1 and the entire training set.

Characteristics of Mini-Batch SGD

Mini-Batch SGD strikes a balance between the extremes of SGD and BGD by using intermediate batch sizes.

Common batch sizes include 24, 32, 128, and 512.

This approach combines the benefits of both methods, reducing the variance of updates compared to SGD while maintaining some stochasticity to escape local minima.

Mini-Batch SGD typically leads to faster and more stable convergence and requires less memory than BGD, making it a popular choice for training deep learning models.
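
As a rough illustration of this trade-off, here's how the batch size changes the number of weight updates per epoch for a hypothetical dataset of 50,000 examples:

```python
import math

n_examples = 50_000  # hypothetical training set size

for batch_size in (1, 32, 128, 512, n_examples):
    updates_per_epoch = math.ceil(n_examples / batch_size)
    print(f"batch_size={batch_size:>6} -> {updates_per_epoch:>6} updates per epoch")

# batch_size=     1 ->  50000 updates per epoch  (pure SGD)
# batch_size=    32 ->   1563 updates per epoch
# batch_size=   128 ->    391 updates per epoch
# batch_size=   512 ->     98 updates per epoch
# batch_size= 50000 ->      1 updates per epoch  (BGD)
```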

Impact of Batch Sizes on Learning Curves

Mini-Batch SGD strikes a balance between the characteristics of SGD and BGD:

  • Reduced Variance of Updates: Compared to pure SGD, Mini-Batch SGD reduces the variance of the gradient updates by computing them over a subset of the training examples. This leads to smoother learning curves and more stable convergence.

  • Faster Convergence: Mini-Batch SGD typically converges faster than pure SGD in terms of wall-clock time, because it can leverage the parallel processing capabilities of modern hardware (GPUs/TPUs) to process multiple examples simultaneously.

  • Balancing Underfitting and Overfitting: The choice of batch size in Mini-Batch SGD can influence the model's tendency to underfit or overfit:

    • Smaller batch sizes (e.g., 24, 32) introduce more noise into the gradient updates, helping to escape local minima and potentially leading to better generalization. However, they may require more iterations to converge.

    • Larger batch sizes (e.g., 128, 512) provide more stable updates and faster convergence per epoch but can increase the risk of overfitting. They may require additional regularization techniques to maintain generalization performance.

You can read more about the impact of batch size in the paper Practical recommendations for gradient-based training of deep architectures, which recommends 32 as a good default value.

Recommendations and Considerations

When choosing the batch size for your deep learning model, consider the following recommendations:

  • Experiment and Tune: The optimal batch size depends on the specific problem, dataset, and available computational resources. Experiment with different batch sizes and monitor the learning curves to find the sweet spot that balances convergence speed, generalization, and computational efficiency (see the sketch after this list).

  • Default Value: As a starting point, a batch size of 32 is often recommended as a good default value. This recommendation comes from the paper "Practical recommendations for gradient-based training of deep architectures" by Yoshua Bengio.

  • Regularization Techniques: When using larger batch sizes, it's crucial to employ regularization techniques such as dropout, weight decay, or early stopping to prevent overfitting and maintain generalization performance.

  • Hardware Considerations: Consider the available computational resources and memory limitations when selecting the batch size. Larger batch sizes can benefit from parallel processing but require more memory. Strike a balance based on your hardware setup.
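
Here's a minimal tuning sketch along those lines; `train_model`, `evaluate`, `dataset`, and `val_dataset` are hypothetical stand-ins for your own training and validation code:

```python
# A minimal tuning loop: try a few batch sizes and keep the one with the
# best validation accuracy. `train_model` and `evaluate` are hypothetical
# helpers standing in for your own training and validation code.
results = {}

for batch_size in (16, 32, 64, 128, 256):
    # One simple assumption: add weight decay as extra regularization
    # once the batch size gets large and updates become more stable.
    weight_decay = 1e-4 if batch_size >= 128 else 0.0

    model = train_model(dataset, batch_size=batch_size, weight_decay=weight_decay)
    results[batch_size] = evaluate(model, val_dataset)  # validation accuracy

best_batch_size = max(results, key=results.get)
print(f"Best batch size on the validation set: {best_batch_size}")
```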

Conclusion

The batch size plays a critical role in shaping the learning curve, convergence behavior, and generalization ability of deep learning models.

Understanding the relationship between batch size and the learning curves (accuracy and loss) is essential for optimizing the training process and achieving the desired generalization ability.

Online SGD (batch_size=1) is less likely to overfit but may underfit, while batch gradient descent (batch_size=entire training set) is more prone to overfitting.

Mini-Batch SGD with appropriate batch sizes can strike a balance, helping to achieve good generalization performance.

The optimal batch size depends on the specific problem, dataset, and available computational resources.

Experimentation and hyperparameter tuning are essential to find the best batch size, minimizing underfitting and overfitting while ensuring efficient training.

Regularization techniques should be employed, especially with larger batch sizes, to maintain generalization performance.

By carefully considering the batch size and its impact on learning curves, you can optimize your deep learning models and unlock their full potential in solving complex problems and making accurate predictions.

If you like this article, share it with others ♻️

It would help a lot ❤️

And feel free to follow me for more articles like this.
