I’ve posted a couple of times on reducing work in process, size of batches, and overall cycle time.

While I've witnessed first-hand a number of positive effects of faster iterations, the data I've collected is only anecdotal evidence. I've always regretted not tracking more data over time to draw more scientific conclusions and illustrate my point.

So I’ve done the next best thing: a simulation.

Simulating the product development lifecycle

The idea is to simulate the development lifecycle of a product: develop the features that will satisfy the customer’s needs.

The ideal development path assumes that what the customer needs is exactly understood. In this imaginary world, every feature created is exactly what the customer needs, work is optimal, and so it takes minimal time to get to the end. It is represented by the gray line between the “start” and “finish” circles1.

In real life though, work is not optimal. Uncertainty generates rework, so correcting the direction of product development requires iterating. This simulation aims to mimic the imperfect path taken by a development team to complete a project.

A simulation

What the hell is that?

When the simulation starts, a new project is launched with several competing development teams, each with its own cycle time.

At the end of each iteration, the team checks whether the direction it is taking is still correct (the scope is fine, features correspond to what the customer needs, features are generating the expected value, user engagement increased, etc.), and re-assesses the direction for the next iteration (details on methodology below).

The direction for the next iteration might be better or worse than it was. The team builds the next iteration in full and, when the iteration is over, decides to pivot or persevere. Rinse and repeat.

Once a team reaches the goal (the product is completed within a defined tolerance of scope variation), the simulation takes note in a tally. When all the teams are done, a new project starts.
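
To make this loop concrete, here is a minimal Python sketch of what one team goes through for one project. It is not the code behind the interactive simulation; the distance to the finish, the tolerance, and the spread of the angular error are assumptions of mine, chosen only to follow the description in this post.

    import math
    import random

    def simulate_project(cycle_days: float, understanding: float,
                         goal_days: float = 500.0, tolerance: float = 5.0) -> float:
        """Total elapsed days for one team to complete one project (illustrative sketch)."""
        x, y = 0.0, 0.0        # current position; the finish circle sits at (goal_days, 0)
        elapsed = 0.0
        while math.hypot(goal_days - x, y) > tolerance:
            # Ideal direction: straight at the finish from where the team currently stands.
            ideal = math.atan2(-y, goal_days - x)
            # Angular error: a Gaussian centred on the ideal direction, plus a uniform
            # random angle weighted by the missing share of customer understanding
            # (see the Methodology section below).
            error = (random.gauss(0.0, math.pi / 8)
                     + random.uniform(-math.pi, math.pi) * (1.0 - understanding))
            # The iteration is built in full, even when less ground remains; the step is
            # merely capped so the team does not wildly overshoot the finish.
            step = min(cycle_days, math.hypot(goal_days - x, y))
            x += step * math.cos(ideal + error)
            y += step * math.sin(ideal + error)
            elapsed += cycle_days
        return elapsed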

Observations

It might seem far-fetched to derive observations from a simulation such as this, but since it fits with my observations of the real world, let's take them as illustrations of a few points I'd like to make, which you can observe by playing with the simulation above.

  • Shorter iterations are not always faster, which is to be expected, because raw speed is not the point. Re-assessing the scope often incurs a lot of cosine loss (only the component of the work aligned with the goal counts as progress); compared to straight, longer iterations, this can make progress seem slower.
  • The shorter the iteration, the lower the standard deviation, which is precisely the point. Short iterations help reduce uncertainty by correcting the direction often. Longer iterations can be much faster, but also much longer: they are a sort of bet on understanding the customer's need, and betting is risky. They increase uncertainty through the tunnel effect2.
  • Shorter iterations remain predictable under uncertainty. With a 50% random factor, the modeled “1 day iteration” has a relatively consistent standard deviation of 5%, compared to “1 month” which has a standard deviation of 50% (meaning it may take 500 days, or 750, or 250; place your bet). Short iterations are safer in the sense that they reduce the risk of unpredictable development time (a sketch of how such figures can be reproduced follows this list).
  • Shorter iterations yield better intermediate results. Having frequent touch-points with the customer helps ensure that what is being built corresponds more closely to what is needed. This materializes as the short-iteration lines staying close to the gray line throughout the project; if a project were stopped at a given moment in time, it is more likely to be close to a correct scope with short iterations than with long ones.
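
The spread quoted in the list above can be reproduced (with different absolute figures, since my sketch is not the real simulation) by running the simulate_project sketch from earlier many times per cadence:

    import statistics

    # Reusing simulate_project() from the sketch above. The cadences, the 50%
    # understanding factor and the run count are arbitrary choices of mine.
    for cycle_days in (1, 5, 15, 30):
        runs = [simulate_project(cycle_days, understanding=0.5) for _ in range(1000)]
        print(f"{cycle_days:>2}-day iterations: "
              f"mean {statistics.mean(runs):6.1f} days, "
              f"stddev {statistics.stdev(runs):6.1f} days")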

Uncertainty

Assumptions are made to operate under uncertainty. The origin of this uncertainty can be separated into two scenarios:

  1. The customer is asking you directly to build something. What they're asking for might not be what they actually need. The context might change. Their understanding of the problem might improve with your work. You might misunderstand what the customer is asking for. All of this is unknown when the project starts, and is discovered while working on it; uncertainty mainly derives from communication and understanding issues.

  2. You are building a product that you hope the customer will buy3 (be it an ecommerce website, a SaaS application, or a COTS product), which means that you are trying to guess what the customer actually needs. You gather data from the product itself, from customer interviews and focus groups, from support, or from other people in direct contact with the customer4. In this case the uncertainty is greater: you are trying to address different people, with different needs, with a single product. You usually have several choices, and you're trying to make the correct one. Uncertainty mainly derives from customer diversity and lack of data, and typically incurs more uncertainty in product development.

This uncertainty means that mistakes will be made. Features will be created that don't correspond to the customer's actual needs. They might outright make you lose users or revenue. The scope itself might change. All of this incurs rework. The usual approach in this situation is to iterate, and to re-assess priority and direction at each iteration.

Methodology

A project has a start and a finish. The finish accepts imperfect results within a configurable tolerance, and is placed at a pre-determined distance from the start.

Each iteration advances the work by its length; i.e. 15 days of work yield 15 days' worth of features. The caveat is that these features might not align with what is actually needed by the customer, so they might only make cosine progress, or, much like in real life, go in the wrong direction.
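
To illustrate that cosine remark: only the projection of an iteration onto the right direction counts as useful progress. This tiny sketch of mine makes the arithmetic explicit:

    import math

    def useful_progress(iteration_days: float, error_angle: float) -> float:
        """Days of work that actually move the product towards the customer's need."""
        # Only the cosine component counts; past 90 degrees of error,
        # the iteration sets the project back instead of moving it forward.
        return iteration_days * math.cos(error_angle)

    print(useful_progress(15, math.radians(30)))   # ~13 days of real progress
    print(useful_progress(15, math.radians(120)))  # -7.5: going in the wrong direction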

Since we are talking about real life, I'm assuming a normal distribution of error. I'm using a random number generation function that gives results on a Gaussian curve, centered on the perfect direction. A result perfectly aligned with the customer's need is the most likely outcome, but less positive (even totally antagonistic) results are possible.

Uncertainty from the customer (coming from changes of mind, better understanding of the requirements, bad communication, lack of data, etc.) is simulated by altering the Gaussian random generation function with a pseudo-random factor. At 100% understanding of your customer, you get a pure normal distribution. At 50%, you get the Gaussian plus a random angle with a 0.5 weighting.
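
Isolating that direction-drawing step from the loop sketched earlier, it could look like this; the spread of the Gaussian is an assumption of mine, as the post only specifies that it is centred on the perfect direction:

    import math
    import random

    def error_angle(understanding: float, sigma: float = math.pi / 8) -> float:
        """Angular error of one iteration, relative to the perfect direction."""
        # At understanding = 1.0 the error is a pure Gaussian centred on zero;
        # the missing share of understanding weights an extra uniform random angle.
        gaussian = random.gauss(0.0, sigma)
        noise = random.uniform(-math.pi, math.pi) * (1.0 - understanding)
        return gaussian + noise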

Code is available on GitHub.

Overhead

A final note: having a 1-day iteration time can seem unfeasible because of the sheer cost of overhead. It is, however, entirely achievable, and in fact iterations can and should get even shorter. That's the last benefit I want to underline: the creation of pressure to eliminate overhead.

Getting to very short iterations cannot be achieved without a pretty much fully automated pipeline. This should be a goal, not only for the beauty of it, but also because being 100% confident in the pipeline is the only way to eliminate the risks related to deployment. The question teams should be asking themselves is: how can we do it?

Notes

  1. Note that throughout this post I'm referring to projects, with a start and a finish, to facilitate the discussion. All of this is perfectly applicable to projects with no start and no end as well. 

  2. Since you don’t validate what you’re building for a long time, maybe it’s completely wrong. 

  3. Same thing, with some shortcuts taken: you might buy the product, or the product might sell you something; in the end the method is similar. 

  4. Whatever you build, it's still always a good idea to have regular, direct contact with your customers yourself…