
Applied capacity planning

Tomas Rybing

In another addition to the #NoEstimates debate, Tomas Rybing looks at measuring the capacity of teams without effort estimation, using a process that is easy to remember. It also happens to be faster and more accurate than estimating.

Some of you might have followed my #NoEstimates journey; if not, you can find my previous blog posts here. Earlier I wrote one called “Poor man’s project forecasting”. It became popular, and this is a follow-up: another example of how we work with applied capacity planning for a project.

Introduction

When doing a project in the software development industry, there are usually constraints in time and cost (things need to be completed by a certain date and within a budget). Roughly two fundamentally different approaches can be taken to handle a project:

  • Waterfall – Everything is analyzed and designed up front. Effort estimates are performed that go into a project plan, which is used to manage the project towards the target release date. This works if you can know everything up front (i.e., before you start developing). However, the most common scenario in the world of software development is that things change (the customer changes their mind, the software changes, people may quit your company and be replaced with persons who need to learn your product, etc.).
  • Agile – The most agile way would be to find the function/epic/story of highest value to the customer and develop it from start to finish. Then continue with the second most important, and so on, until the target date arrives and you can see how much was completed in the given time frame. Using this approach you can be very flexible to change, but the question “What functionality will I receive from the project?” remains unanswered until the end.

Is there a way in between, one that lets you do project planning without doing effort estimation? If you measure the capacity of your team, as explained below, you can use the following steps:

  1. Analyze – You need to analyze your collected data to be able to use it.
  2. Collect metrics – You need to collect the metrics used for forecasting.
  3. Time – Time is usually a limiting factor in a project (when the project shall be finished).
  4. Forecast – Make a forecast using the collected data given the time constraint.
  5. Assumptions – Some assumptions need to be fulfilled to be able to plan like this.
  6. Similarities – Use the forecast and compare with other known references.
  7. Think – Given the gathered knowledge, think and judge if the project is feasible; if not, take proper actions.

Hmm, take the first letter of every step and it reads ACT FAST.

The graph for the team

[Graph: completed stickies per week for the team]

This graph shows completed stickies per week for one of our teams. Every Friday the captain (team leader) counts all the stickies in the “Done” column on the Kanban board and writes the total sum on the board. Each sticky on our Kanban board corresponds to a bug fix or a sub-task of a story/feature. Hence, the measuring is done on the lowest level of work that we track.

1. Analyze

To be able to use the data shown in the graph for further planning, it has to be analyzed, i.e., we must understand and be able to explain the tops and bottoms. You can see that the capacity (throughput of completed tasks) oscillates around an average/median value, with a few tops and bottoms:

  • Dips came in weeks 5-6 and 20-21. The explanation is that it is “hard” to get going again after a vacation (in Sweden almost everybody has holidays over Christmas and New Year) and after a product release (the last version of the product was released in week 19).
  • The first peak (31 completed stickies) was due to several weeks of work being counted in that week, and the second peak (30) came in the final week, which completed the release of the product/project. This is good information that we can use later on in the forecasting!
  • A negative trend (values going down from the past week) is “repaired” in all occurrences except two (22->17->14 and 30->14->7). I guess this is human behavior, and a positive side effect of measuring: the team wants to “cover up” for a bad week. It can also be a sign of normal fluctuation.
  • To me, the graph represents a fairly stable system. A system we can use to make predictions about the future.

Takeaway from this step: You must understand and be able to explain your collected data, otherwise it’s of no use.
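
As a quick sketch of this kind of analysis, one could flag the weeks that fall outside the “normal” band and therefore deserve an explanation. The weekly counts below are made up for illustration; substitute the numbers from your own board:

```python
from statistics import mean

# Hypothetical weekly throughput (completed stickies counted each Friday).
weekly_done = [14, 18, 22, 17, 9, 12, 18, 20, 16, 19, 21, 15]

avg = mean(weekly_done)
band = mean(abs(x - avg) for x in weekly_done)  # average deviation

# Weeks further from the average than the average deviation are the
# tops and bottoms you need to be able to explain.
for week, done in enumerate(weekly_done, start=1):
    if abs(done - avg) > band:
        kind = "peak" if done > avg else "dip"
        print(f"week {week}: {done} completed ({kind} worth explaining)")
```

Anything the script flags that you cannot explain is a sign that the system is not as stable as you think.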

2. Collect metrics

In the same spreadsheet program used for the graph, the following calculations can be made:

[Table: throughput metrics calculated from the weekly values]

Comments on the different metrics:

  • MAX – A peak week for the team. Very nice indeed, but we don’t use wishful thinking in our forecasts.
  • MIN – A rock bottom for the team. For the same reasons as above, we don’t use this value either.
  • AVERAGE – The average value considering all weeks.
  • MEDIAN – The median value considering all weeks.
  • AVG DEVIATION – The average deviation from the AVERAGE.
  • MAX (DEVIATION) – Is AVERAGE + AVG DEVIATION, to give a MAX value with deviation considered.
  • MIN (DEVIATION) – Is AVERAGE – AVG DEVIATION, to give a MIN value with deviation considered.

Some of these metrics are used for forecasting (see below).

Takeaway from this step: Keeping track of and updating the metrics should be fairly easy using a digital tool (like a spreadsheet program). Collecting them should require practically no effort.
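
The spreadsheet columns are simple enough to reproduce in a few lines of Python. The weekly counts here are made up for illustration; the metric names follow the table above:

```python
from statistics import mean, median

# Hypothetical weekly throughput (completed stickies per week).
weekly_done = [14, 18, 22, 17, 9, 12, 18, 20, 16, 19, 21, 15]

avg = mean(weekly_done)
avg_deviation = mean(abs(x - avg) for x in weekly_done)

metrics = {
    "MAX": max(weekly_done),
    "MIN": min(weekly_done),
    "AVERAGE": round(avg, 1),
    "MEDIAN": median(weekly_done),
    "AVG DEVIATION": round(avg_deviation, 1),
    "MAX (DEVIATION)": round(avg + avg_deviation, 1),
    "MIN (DEVIATION)": round(avg - avg_deviation, 1),
}

for name, value in metrics.items():
    print(f"{name:>16}: {value}")
```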

3. Time

The finish date is set in advance for a project (otherwise you can’t really call it a project). “We need to be ready by summer” or “this is the release for Q3”. In the time plan for our project we had a development phase of 17 weeks and a bug fixing/hardening phase of six weeks (I know this isn’t very “agile”; we are working to shorten and eliminate this phase). I separate the development phase into three sub-phases:

  • “Start up” – Four weeks. It is naturally hard to get going with a new release, and since it starts right after the vacation period, it takes some time to get up to speed again.
  • “Steady state” – Nine weeks. The team is up to speed and working in normal operation.
  • “Peak” – Four weeks. A final push to get the release ready; the capacity will be higher than normal.

Takeaway from this step: It’s easy to see time as something that doesn’t vary. But if you can see variations in time (like above), you can make better plans.

4. Forecast

Time to do the forecast. Considering the information from the analysis and time plan above:

  • “Start up” – 12 tasks/week, corresponds to the MIN (DEVIATION) value.
  • “Steady state” – 18 tasks/week, corresponds both to the AVERAGE and MEDIAN values.
  • “Peak” – 23 tasks/week, corresponds to the MAX (DEVIATION) value.

During the bug fixing/hardening phase we assume normal capacity, 18 tasks/week.

Tasks that can be done during the development phase = (4 × 12) + (9 × 18) + (4 × 23) = 48 + 162 + 92 = 302 tasks.

Bugs that can be solved during the bug fixing phase = 6 × 18 = 108 bugs.

In total = 302 + 108 = 410 tasks/bugs.
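
The forecast arithmetic can be written down as a small script, using the phase lengths and rates from this section:

```python
# Phase name -> (weeks, tasks per week), per the time plan and metrics.
phases = {
    "start up":     (4, 12),  # MIN (DEVIATION)
    "steady state": (9, 18),  # AVERAGE / MEDIAN
    "peak":         (4, 23),  # MAX (DEVIATION)
}

dev_tasks = sum(weeks * rate for weeks, rate in phases.values())
bugfix_tasks = 6 * 18  # six-week hardening phase at normal capacity

print(dev_tasks)                 # 302
print(bugfix_tasks)              # 108
print(dev_tasks + bugfix_tasks)  # 410
```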

Takeaway from this step: It’s better to be conservative in the forecast than aggressive (better safe than sorry, right?). After all, if you relied on wishful thinking and WAGs (Wild Ass Guesses), you wouldn’t be reading this, I presume :)

5. Assumptions

For the forecast to hold, the following assumptions are needed:

  1. The team stays fairly intact during the time of the project. If team members are moved out of the team, the capacity will be lower. If new team members are added, the capacity can increase, but not immediately, since the old team members need to educate the new ones.
  2. The challenges the team will be facing are roughly the same as in the previous project. For product development (which is what we do) this can be said to be true, since the next version of the product builds on the previous version. If radical changes are to be made in the project, this has to be taken into consideration when planning.
  3. A stable system. If you have 1) and 2), you should have a system that is predictable, i.e., forecasts can be made for the future using previously gathered data.

Takeaway from this step: The assumptions must hold during the whole project; if not, you need to plan the project in some other way.

6. Similarities

I can hear you thinking “This doesn’t really say anything!”, and that is completely true, of course. We need to compare the forecast with similar things we already know, like completed projects. A previous project had 537 tasks/bugs solved in our ticket control system.

This project could deliver 410 / 537 ≈ 76% of the scope of the last project.

Ok, maybe that doesn’t say much either. Let’s compare with some functions (some call these epics) and stories from the last project:

  • “Function A” (Core changes), 33 tasks in our ticket control system
  • “Function B” (New interface), 26 tasks
  • “Story C” (GUI improvements), 5 tasks

In this project we can manage:

  • 302 / 33 ≈ 9 epics ‘similar’ to “Function A”, or
  • 302 / 26 ≈ 11 epics ‘similar’ to “Function B”, or
  • 302 / 5 ≈ 60 stories ‘similar’ to “Story C”

Of course, in reality the project will consist of a mixture of the above. However, this information should be enough for the next step.
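
The comparison boils down to integer division of the forecast development capacity by each reference item’s size:

```python
dev_capacity = 302  # forecast tasks for the development phase

# Reference items from the previous project (task counts from the ticket system).
references = {
    "Function A (Core changes)": 33,
    "Function B (New interface)": 26,
    "Story C (GUI improvements)": 5,
}

for name, tasks in references.items():
    print(f"~{dev_capacity // tasks} items similar to {name}")
```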

Takeaway from this step: Just by looking back at things you already know, i.e., completed projects, you can see similarities with what you are trying to achieve in the new project.

7. Think

Now it is time to do some thinking and judge the project: is the scope we want feasible at all, given the constraints in time and staffing? I haven’t been in a situation where the scope is less than what we can handle, i.e., with room for more stories to be added. Usually you want to do more than you can achieve. How do you handle that? You need to consult The Iron Triangle:

  • Scope – Can the scope be reduced? Can some epics wait to the next project/release of the product? Maybe there are stories with lower priority (“nice to haves”) that can be skipped, etc.
  • Time – Can we extend the time plan? If we want all the functions in the project, perhaps we can delay the release (if that is possible, of course).
  • Cost – Can more members be added to the team? Maybe have several teams? For small companies this option may not be possible, but for larger companies that can rearrange, it may be an option.

For our project we called all stakeholders to a second planning meeting where we discussed:

  1. Priority between epics/functions (to know what to start working on)
  2. Priority within epics/functions (this story is a must, this story is not needed, this story can wait to next release etc.)

Takeaway from this step: You should now, before the project has even started, have a “feeling” for the outcome of the project. Are there margins? Will it be tight? If you think about this already now, you can be one step ahead during the whole project and act on the things that pop up, rather than react afterwards, which is the usual scenario. Of course, you need to work with The Iron Triangle continuously throughout the whole project. Good luck!

Summary

Why on earth should you use this method instead of making effort estimates of all the tasks (broken down from stories) that need to be done in the project? Because it’s much faster and more accurate for planning. The applied capacity planning described in this blog post took me only two hours to do. That is ACT FAST!

This blog post originally appeared on The Agileist, Tomas Rybing’s blog about Lean, Agile and Management.

Author
Tomas Rybing
Tomas lives in Stockholm, Sweden and has been working in IT since 1996, starting as a consultant and programmer. From 2007 his focus has switched to team leading, project leading, product management and development methods. He's a big fan of penguins and pyramids.
