Reduce Batch Sizes
Introducing batches of work
Work is always executed in batches. A software release, for instance, is a batch of features delivered to the customer. But how big should a batch be?
Let’s try to answer that question. But first, let’s explore why batch size matters so much.
In doing this, we will use a very helpful tool known as a U-curve optimization. We will put batch size on the X axis, and on the Y axis the cost associated with that batch size. There are two main types of cost:
Transaction cost
This is the cost associated with completing a batch. For a batch of features, the transaction cost may include things like user testing, performance testing, deployment, and so on. The transaction cost curve falls as the batch gets bigger, which is easy to understand: one user testing session for ten features is far less costly than a separate session for each of the ten.
Holding cost
This is the cost of not completing a batch. If we have built a bunch of features but, instead of delivering them, hold on to them while working on additional ones, planning to deliver everything later, then we pay the cost of the missed economic opportunity of having those features in the customer’s hands. Moreover, we lose the ability to learn fast and adjust our course of action, which means a significant part of the functionality may end up not benefiting the customer at all.
U-curve: the total cost
The total cost is the sum of the transaction and holding costs, and that sum is our U-shaped curve. What we want is the sweet spot on the curve where the total cost is lowest. We don’t want batches that are too small, but we don’t want them too big either.
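The sweet spot can be made concrete with a small sketch. Assume an illustrative toy model (the numbers and cost functions are made up, not from the text): a fixed transaction cost F paid once per batch and amortized over its features, plus a holding cost that grows linearly with batch size at rate H. The per-feature total is then F/b + H·b, and we scan batch sizes for the minimum:

```python
def total_cost_per_feature(batch_size, transaction_cost=100.0, holding_rate=1.0):
    """Per-feature total cost under the toy model:
    amortized transaction cost (F/b) + linear holding cost (H*b)."""
    return transaction_cost / batch_size + holding_rate * batch_size

# Scan candidate batch sizes and pick the sweet spot (lowest total cost).
costs = {b: total_cost_per_feature(b) for b in range(1, 101)}
sweet_spot = min(costs, key=costs.get)
print(sweet_spot)  # 10 -- for F=100, H=1 the curve 100/b + b bottoms out at b=10
```

Small batches are punished by the transaction term, large ones by the holding term; the U shape emerges from their sum.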
Finding the optimum batch size in your case
So, what would be the sweet spot in your case? The optimum batch size is highly contextual. Different companies find it at different levels, and different groups within your company may easily have different sweet spots. For example, a quarterly release may be acceptable for some but too slow for others. You may discover what works best for you by trial and error; in complex environments, that’s often the most reliable approach.
Here’s one important thing to keep in mind: you should not treat your U-curve as a given. Your U-curve may change over time, and with it the sweet spot, for better or for worse. Every smart organization tries to purposefully shift the curve in a favorable direction. Here’s one way that may happen: you reduce your transaction cost by adopting practices like continuous integration, automated testing and deployment, and, more broadly, continuous delivery and DevOps. Delivering a smaller batch then becomes less and less expensive, and the sweet spot moves down and to the left, meaning you will operate more efficiently with a smaller batch size once the transaction cost has been lowered.
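The shift of the sweet spot can be sketched with a toy model (an illustrative assumption, not from the text): if the per-feature total cost is F/b + H·b, with fixed transaction cost F per batch and holding rate H, then calculus gives the optimum at b* = sqrt(F/H). Lowering F moves the optimum down and to the left:

```python
import math

def optimal_batch_size(transaction_cost, holding_rate=1.0):
    """Minimizer of the toy per-feature cost F/b + H*b, i.e. sqrt(F/H)."""
    return math.sqrt(transaction_cost / holding_rate)

before = optimal_batch_size(100.0)  # hypothetical manual release process
after = optimal_batch_size(25.0)    # hypothetical automated pipeline, 4x cheaper
print(before, after)  # 10.0 5.0 -- cheaper transactions favor smaller batches
```

Note the square root: cutting the transaction cost by a factor of four only halves the optimal batch size, so large reductions in transaction cost are needed for dramatically smaller batches.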
And you may have multiple different batches in play at the same time. For instance, you may operate with monthly or quarterly releases while running one- or two-week iterations underneath. The U-curve for the release batch will clearly differ from the U-curve for the iteration: the transaction cost of releasing may include quite a few costly components that are not present at the iteration level.
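Under the same toy model (per-feature cost F/b + H·b, optimum at sqrt(F/H), with hypothetical numbers), the two nested batch levels get two different sweet spots because their transaction costs differ:

```python
import math

# Hypothetical costs: a release carries extra transaction components
# (user testing, deployment sign-off) that an iteration does not.
release_transaction_cost = 400.0   # assumed: heavyweight release process
iteration_transaction_cost = 16.0  # assumed: mostly automated iteration close-out

# Sweet spot of the toy curve F/b + H*b with holding rate H = 1: sqrt(F).
release_sweet_spot = math.sqrt(release_transaction_cost)
iteration_sweet_spot = math.sqrt(iteration_transaction_cost)
print(release_sweet_spot, iteration_sweet_spot)  # 20.0 4.0
```

So the same organization can rationally run large release batches over small iteration batches: each level sits at the bottom of its own U-curve.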
As an action item, look at the actual batches your teams are operating with today, and ask whether their size is truly optimal in your environment.
Donald Reinertsen, The Principles of Product Development Flow: Second Generation Lean Product Development, Celeritas Publishing; 1st edition (January 1, 2009).