In product development we often use iterations to increase the quality and robustness of our products. Why does this work?
To begin, let me clarify my terminology. By “iteration” I mean covering the same ground twice. I do not use the term iteration to mean breaking a larger task into several smaller pieces; I call that batch size reduction. I must mention this because many people in the agile software community use the term iteration to refer to breaking a project into a number of smaller pieces. That is a superb technique, but I consider it confusing to label it iteration.
To me, a reference point for thinking clearly about iteration is Newton’s Method, a numerical analysis technique for finding the root of an equation. In it, a calculation is repeated multiple times, and the answer from each iteration is used as the basis for the next calculation. The answer gets better after each iteration. (I am ignoring, for simplicity, the issue of convergence.) Newton’s Method captures the essential mechanism of iteration: we repeat substantially the same activity in order to improve our result. However, it is important to recognize that, even in this case, while the form of the calculation is repeated, it is not precisely the same calculation. Each iteration uses different, and better, data.
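As a concrete sketch of this mechanism (my illustration, using f(x) = x² − 2 as an assumed example, whose root is the square root of 2), Newton’s Method repeats the same form of calculation while feeding each answer into the next step:

```python
# Newton's Method: x_{n+1} = x_n - f(x_n) / f'(x_n).
# The same calculation is repeated, but each iteration starts from
# better data -- the previous estimate -- so the answer improves.

def newton_sqrt2(x0: float, steps: int) -> list[float]:
    """Return the sequence of estimates for sqrt(2), starting from x0."""
    f = lambda x: x * x - 2.0   # function whose root we seek
    df = lambda x: 2.0 * x      # its derivative
    estimates = [x0]
    for _ in range(steps):
        x0 = x0 - f(x0) / df(x0)
        estimates.append(x0)
    return estimates

print(newton_sqrt2(1.0, 5))
```

Each printed estimate is closer to the true root than the last, which is exactly the sense in which iteration adds value here.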
Now let’s look at the difference between iteration and batch size using a physical analogy. Suppose I am going to paint a wall in my house with an imperfect paint roller. I can quickly apply a single coat and then iterate by applying a second coat. Let’s say the roller, which has a small bare spot, fails to cover 2 percent of the surface area. Then, by the time I have done the second coat, the missed area will be 0.02 × 0.02 = 0.0004, or 0.04 percent of the surface. This quality-improving effect is a very common reason to use iteration.
Why did quality improve when we iterated? The key lies in the independence of each iteration. Because the probability of a defect for each coat was independent, the defect probabilities multiplied. If the defects in each coat were not independent, then iterating would not improve quality. For example, if there were a 3 cm concave pit in the wall, the second coat of paint would not have solved this problem — nor would the third, or the fourth.
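The multiplication of independent defect probabilities can be checked with a small Monte Carlo sketch (my illustration; the 2 percent miss rate is the figure from the painting example above):

```python
import random

def defect_rate(coats: int, miss_prob: float,
                trials: int = 100_000, seed: int = 1) -> float:
    """Fraction of sampled points still uncovered after `coats` coats.

    Each coat misses a given point independently with probability
    `miss_prob`, so the expected uncovered fraction is miss_prob ** coats.
    (A pit in the wall would break this independence: every coat would
    miss the same point, and extra coats would not help.)
    """
    rng = random.Random(seed)
    uncovered = sum(
        all(rng.random() < miss_prob for _ in range(coats))
        for _ in range(trials)
    )
    return uncovered / trials

print(defect_rate(1, 0.02))  # roughly 0.02
print(defect_rate(2, 0.02))  # roughly 0.0004 -- the probabilities multiply
```

The second coat only helps because its misses land in different places than the first coat’s misses.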
We can contrast this iterative solution with one that uses batch size reduction. Suppose that I apply the paint in two smaller batches, painting the left half of the wall completely before I paint the right half. This will not improve my defect rate. I will still have a 2 percent defect rate over the entire surface. The quality improvement due to smaller batch size arises from a different source: feedback. For example, before you paint your entire house shocking pink, you may want to show your spouse what the color looks like on a single wall. Thus, the mechanism by which small batch size improves quality is different from that of iteration.
What is centrally important to the power of iteration is the value added by each iteration. If Newton’s Method simply repeated the same calculation with the same data, quality would not improve. If you simply run exactly the same test on exactly the same product, test outcomes will not be independent, and the iteration will produce no increase in quality. The power of iteration comes from how much new information is generated by each iteration.
This new information generally comes from two sources. First, when the second iteration covers something different from the first, it will generate new information. This difference may arise from a change in the test or a change in what is being tested. A second, more subtle source of information arises when the performance of the system we are testing is stochastic, rather than deterministic. In such cases we can repeat exactly the same test on exactly the same product and still derive information from it. What information? We illuminate the probability function associated with performance. For example, suppose we wanted to determine whether a coin was fair or biased. Could we do this with one flip of the coin? Certainly not. Repetition is critical for understanding random behavior.
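A minimal simulation sketch (my illustration; the 0.6 heads probability is an assumed bias) shows why repetition reveals random behavior where a single trial cannot:

```python
import random

def estimate_bias(p_heads: float, flips: int, seed: int = 7) -> float:
    """Estimate a coin's heads probability by repeating the same 'test'.

    Each flip is an identical trial on an identical coin, yet every
    repetition sharpens our picture of the underlying distribution.
    """
    rng = random.Random(seed)
    heads = sum(rng.random() < p_heads for _ in range(flips))
    return heads / flips

# One flip tells us almost nothing; many repetitions reveal the bias.
for n in (1, 100, 10_000):
    print(n, estimate_bias(0.6, n))
```

With one flip the estimate can only be 0.0 or 1.0; with ten thousand flips it settles close to the true bias.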
In the end, our development activities must add value to our products. It is not iteration per se that improves quality; what matters is how efficiently each iteration generates useful information.