Launch Tomorrow

Landing Pages for your Lean Startup


How to analyze the impact of velocity on your release date

July 5, 2019 by LaunchTomorrow

While “agilefall” has many well documented downsides, I’ve found a counterintuitive bright side to the following aspect of it: “We have a product backlog with priorities, but we start working on a release with a long list of features already committed to the business.”
In this case, we have a pretty detailed view of scope as well as developer estimates of complexity. There is a lot of data to crunch.
From a pure business perspective, the key variable that matters is the release date. Dates have significant implications for the rollout across the company and the existing client base. Here are a few business reasons why:
  • marketing and sales plans need to be made (since they’re separate “workstreams”)
  • overall costs and ROI of the program changes
  • coordination and resources need to be shared/reassigned with other internal teams
And most importantly, the date is completely non-technical. Anyone who graduated from primary school can understand it. So the nice thing is that the release dates help focus the discussion on what matters, while only needing to go into a minimal amount of technical detail.
Making the relatively liberal assumption that scope will not change, it’s possible to generate a pretty decent forecast of the date.
Moreover, it’s possible to model the impact of a higher or lower average future velocity on release dates. In other words, if there is a need for a plan, you can make the entire plan using the following approach:
  • dependent variables: incremental release dates
  • independent variable: hypothetical velocity
And then look at what happens to the release dates as you vary velocity.
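As a sketch of that approach, the whole model boils down to dividing remaining scope by an assumed velocity. All of the numbers below (remaining points, start date) are hypothetical, not taken from the article's data:

```python
from datetime import date, timedelta

def release_date(start: date, remaining_points: float, velocity: float) -> date:
    """Forecast a release date: weeks of work = remaining scope / velocity."""
    return start + timedelta(weeks=remaining_points / velocity)

# Hypothetical inputs: 540 story points remaining, starting 8 July 2019.
start = date(2019, 7, 8)
for velocity in (20, 27, 60):  # pessimistic, observed, optimistic
    print(velocity, release_date(start, 540, velocity))
```

Each hypothetical velocity pulls the forecast date in or pushes it out, which is exactly the accordion effect the plan exhibits.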
Photographer: Waldemar Brandt | Source: Unsplash
The entire plan feels like an accordion that can be pulled out or pushed in, depending on your assumptions about velocity. If velocity is high, the releases are close together. If it's low, they're quite spread apart.
Here’s an example of a quick model I did by exporting my data from Jira and using Excel’s scenario manager:
[Screenshot: velocity sensitivity analysis from Jira data in Excel]
By varying that one number and holding scope and team constant, the dates hit for each release change significantly. Let's get into how to do this yourself.

Case Study

We’ve defined, broken out, and even estimated scope for this program. It contains hundreds of stories (tasks, really) related to delivering a big piece of infrastructure. All of this is being tracked in a standard agile tool called Jira, with relevant details attached to each story.
Here’s the type of analysis we’ll need to do as an interim step:
[Screenshot: version-level scope analysis]
The unit of effort measurement here is the “Story Point”, an abstract measure of the relative complexity of a piece of work. This is quite common in agile circles: it gives the delivery team a safe way to discuss and size work without biasing estimates towards what stakeholders “want to hear”, and it helps the team “save face” if a complex task takes a long time (even when that complexity wouldn’t be obvious to a non-technical person).
The key “number behind the number” here is the velocity. It’s expressed as the story points completed per unit of time. In this case: one week. Velocity tracks increments of fully complete work that has been specified, developed, checked, and signed off.
If we estimate the story points for each task up front, it’s possible to observe velocity as the team works on the scope. Basically, you see performance as the change in that number over time.
On this project, we are tracking velocity in weekly increments. This gives us more data points and builds confidence in the numbers faster than calculating it monthly, for example. More observations mean higher confidence, statistically speaking.
In the example above, the actual team doing the work has been delivering at a rate of 27 story points per week. But what if the average future velocity was higher or lower than that?
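To make the velocity calculation concrete, here is a minimal sketch with invented weekly totals (the team in the article happens to average 27 points per week):

```python
# Hypothetical story points completed in each of the last five weeks,
# e.g. read off Jira's velocity or version reports.
weekly_completed = [24, 30, 27, 26, 28]

# Velocity = average story points completed per week.
velocity = sum(weekly_completed) / len(weekly_completed)
print(velocity)  # 27.0
```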

Step 1: Pull out the scope sizes from Jira

For me, the fastest way to get to the real numbers is to pull up reports. At the moment, most of the modelling we are doing is based on Jira versions (fixVersion). Previously, I’ve used Jira epics with different reports.
[Screenshot: Reports on the left sidebar in Jira]
Then choose the version or epic report:
[Screenshot: choosing the version or epic report from a drop-down]
And finally pull out the actual story points completed in the version:
[Screenshot: completed story points in the version]
And incomplete:
[Screenshot: incomplete story points]
And finally the number (count) of unestimated stories, as delivery teams tend to push back on spending too much time estimating rather than doing the work:
[Screenshot: count of unestimated issues]
Remember to exclude any known bugs as these will typically have 0 story points anyway, and use your judgement for other issues.
Type these into Excel (yay data entry!):
[Screenshot: typing the numbers from that report into Excel]
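If clicking through reports gets tedious, the same three numbers can also be pulled out of a Jira CSV export with a short script. This is only a sketch: the column names ("Status", "Story Points") and the statuses below are assumptions, and exports from your own instance may be laid out differently:

```python
import csv
import io

# Toy stand-in for a Jira CSV export; real exports have many more columns.
# An empty "Story Points" cell means the story is unestimated.
export = io.StringIO("""Key,Status,Story Points
PROJ-1,Done,5
PROJ-2,Done,3
PROJ-3,In Progress,8
PROJ-4,To Do,
PROJ-5,To Do,2
""")

completed = incomplete = unestimated = 0
for row in csv.DictReader(export):
    points = row["Story Points"]
    if points == "":
        unestimated += 1          # count of unsized stories
    elif row["Status"] == "Done":
        completed += int(points)  # points already delivered
    else:
        incomplete += int(points) # points still to deliver

print(completed, incomplete, unestimated)  # 8 10 1
```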

Step 2: Create a key assumptions section

Hah! The ‘A’ word.
In your spreadsheet, put in an assumed average size (in story points) for the unestimated stories, and the expected velocity for your delivery teams:
[Screenshot: key assumptions section]

Step 3: Fill out the remaining formulas to estimate your release date

Essentially, once we have the estimated total scope and velocity, it’s just a matter of dividing the former by the latter, to generate the number of weeks of work remaining:
[Screenshot: estimate for the first version]
Here are the actual excel formulas I used to generate the magic date:
[Screenshot: Excel formulas behind the first version estimate]

Step 4: Rinse and repeat for remaining versions on your backlog

Essentially, you are holding the team, scope, and velocity constant and using your estimates to figure out when you will be ready to release each version. This is, of course, based on what you know now, which is subject to change. 🙂
[Screenshot: estimates for the remaining versions]
The formulas are more or less the same throughout the versions. In this case, I’m assuming the team will finish version 1 and move immediately to working on version 2, so column I needed some adjustment.
[Screenshot: formulas for the remaining versions]
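In code, Step 4 is just the Step 3 calculation in a loop, with each version starting as soon as the previous one ships. The version scopes and velocity here are invented for illustration:

```python
from datetime import date, timedelta

versions = {"v1": 300, "v2": 180, "v3": 240}  # remaining points per version (hypothetical)
velocity = 27                                  # story points per week (hypothetical)

cursor = date(2019, 7, 8)                      # start of the first version
release_dates = {}
for name, points in versions.items():
    # Each version starts where the previous one left off.
    cursor += timedelta(weeks=points / velocity)
    release_dates[name] = cursor
    print(name, cursor)
```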

Step 5: Kick up the What If analysis using Excel’s scenario planner

First move your cursor to the variable you want to model, in this case your velocity assumption:
[Screenshot: cursor on the velocity assumption cell]
This is the “independent” variable, in statistical terms, that you want to vary in order to see how it affects the dependent variables you care about: the completion dates.
[Screenshot: What-If Analysis menu]
This is probably one of Excel’s most powerful hidden features. By varying individual values, you can extrapolate impacts far beyond what most people can work out intuitively.
[Screenshot: selecting the Scenario Manager]
From there, choose the Scenario Manager, and the following comes up:
[Screenshot: blank Scenario Manager]
Click Add, and then use the cell you selected previously as the main variable to vary:
[Screenshot: baseline velocity scenario]
You can call the scenario whatever you want that has business meaning to you and your company. Then enter a value for this scenario in the next prompt:
[Screenshot: entering a value for the scenario]
And then do this again for a few other scenarios. In this case, I created a higher-velocity scenario at 60 and a lower one at 20. The range is this wide simply because I have no data to go on, so I’d like to explore both the optimistic and the pessimistic scenarios.
When you have all of this done, and the scenario manager is correctly set up, click on summary:
[Screenshot: list of defined scenarios]
Based on that, Excel asks you which cells in the sheet you actually care about when you are varying scenarios (the dependent variables):
[Screenshot: selecting the dependent variable cells]
And after you hit OK, Excel hangs for a bit and returns with the following magical table:
[Screenshot: scenario summary table]
This gives you a sense of how different average velocities will affect your delivery dates and plan. By looking at the outer boundaries, it also gives you a qualitative feel for what would happen anywhere within your range. It’s enough to take to stakeholders and discuss what the velocity number actually means: what tradeoffs might need adjusting, what resources might need adding, and everything else that might need to change to influence velocity.
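The scenario summary can be reproduced outside Excel as a nested loop: sweep the velocity assumption across the 20 / 27 / 60 scenarios and recompute every version's date. The scope figures remain hypothetical:

```python
from datetime import date, timedelta

versions = {"v1": 300, "v2": 180, "v3": 240}          # points per version (hypothetical)
scenarios = {"pessimistic": 20, "baseline": 27, "optimistic": 60}

# A stand-in for Excel's Scenario Manager summary table.
summary = {}
for scenario, velocity in scenarios.items():
    cursor = date(2019, 7, 8)
    row = {}
    for name, points in versions.items():
        cursor += timedelta(weeks=points / velocity)  # versions ship back to back
        row[name] = cursor
    summary[scenario] = row

for scenario, row in summary.items():
    print(scenario, {name: str(d) for name, d in row.items()})
```

With a wide enough velocity range, even the last optimistic release can land before the first pessimistic one, which is the accordion effect in numbers.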
Of course, you’d want to make it a bit prettier and clarify what these things mean:
[Screenshot: formatted velocity sensitivity analysis]
Ok, now you’re talkin’! We are finally at the velocity accordion state. Depending on what velocity we end up getting out of the team, these are the dates we expect to hit for each incremental release. As you can see, even if all the details of what is delivered are exactly the same, done in exactly the same order by exactly the same people, the difference between doing the entire scope at a velocity of 20 versus 60 is more than an entire year in the final release date.
In other words, detailed planning this far into the future should not include commitments to dates, even if you do have your entire backlog defined and specified and you’re hoping it’s “accurate”.
Figuring out your velocity and how to increase it has much greater business value (bringing release dates in) than trying to specify and estimate everything up front in the utmost detail.
This is particularly true in a corporate context, with large budgets and a need to “save face” in case the product isn’t perfect before it’s even discussed with prospects or existing customers. It’s much better to commit to small increments of work tied to specific customer needs, as that is the shortest path to revenue in an enterprise sales environment. On top of that, focus on managing velocity rather than detailed planning, as that will get your product out there faster. Ultimately, your customers couldn’t care less how detailed your release plan is; they just want a product that addresses their needs.

Keep following the story

I’ve written a book on speeding up innovation. Check it out over here.

Filed Under: velocity

The top 10 “rules” of adaptive innovation management

June 21, 2019 by LaunchTomorrow

Adaptive innovation focuses on building and releasing new products under high levels of uncertainty, and on optimizing resources as you go, not just up front at the planning stage. Studies of high-growth Inc 500 startups have confirmed that firms adapting to market needs achieve growth more frequently than companies sticking to an original vision and plan. Established companies looking to innovate would be wise to allow for such flexibility as they expand their product portfolio. Corporate innovation shares the same market context with startups when launching new products.

Diving deeper into why high-growth startups were successful will help increase the chances of succeeding as innovators. In contrast to startups, larger firms have access to much greater resources, although they struggle with the complexity this causes. Moreover, their starting point is often one of being cost-efficient. This can lead to challenges at the early stage: evolving products requires multiple rounds of financing in order to minimize risk, rather than using traditional budgeting to help new products succeed.

The following observations are based on my innovation experience in a B2B software context. While this is a very particular type of environment, with high technology and quality constraints but no manufacturing costs, the principles can be cross-pollinated into other environments. It’s also worth noting that software can be packaged as products (MS Office) or as services (Google Suite). In short, the model and its insights can be applied to other types of new product development.

Photographer: Markus Spiske

1. In practice, the full cost of time is dictated by sales and external market conditions. This is because you are nearly 100% certain to miss a sales forecast if the scope that is relevant and sufficient for a customer does not exist yet. This holds regardless of your confidence in your sales forecasts and estimates.

2. Any planned internal costs will always be lower than the cost of lost sales. They have to be over the longer term; otherwise you wouldn’t have a viable business.

3. You can use linear approximations as an adaptive method to quantitatively arrive at a current “cost of time” to help drive optimal decision making. This is done by comparing the effect on sales if the planned scope were available immediately versus on the estimated delivery date. From this you can derive a cost of time. You probably won’t have the luxury of directly observable continuous functions. In effect, you’re starting to think in terms of calculus and deltas, as opposed to solving for absolute values in algebra: you move from “overall cost” to d(cost)/dt.
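As a toy numeric sketch of that linear approximation (all figures invented for illustration):

```python
# Compare forecast sales if the planned scope shipped today
# versus on the estimated delivery date, over the same horizon.
sales_if_available_now = 500_000   # $ (hypothetical)
sales_on_delivery_date = 380_000   # $ (hypothetical)
weeks_of_delay = 12

# Linear approximation of the cost of time: lost sales per week of delay.
cost_of_time_per_week = (sales_if_available_now - sales_on_delivery_date) / weeks_of_delay
print(cost_of_time_per_week)  # 10000.0
```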

 

4. Costs and revenues follow different probability distributions, and these distributions are skewed.

Costs:

  • inflexible, especially in the case of legacy, large, or infrastructure work
  • relatively certain once agreed (near 100%)
  • fixed in advance in accordance with high level strategy
  • once agreed, they tend to stick around for a while due to the sunk cost fallacy

Revenues:

  • Highly variable
  • Depend on how well the immediately available scope matches to the customer problem
  • A big $ sale can come from a small incremental addition of scope; even for a new product, roughly 20% of the functionality covers 80% of the value
[Image: comparison of mean, median, and mode]

5. The aggregate “work time” to build something large, e.g. infrastructure, will be unknowable up front, yet it is relatively fixed. The number of people you add is just a divisor. This fixed total will roughly correlate with the aggregate cash flows required to build something. Cash flow variation will come from who you hire, where you hire, and how desperate you are.


Photographer: Jared Murray | Source: Unsplash

6. Elapsed time to build something large can vary enormously, based on how the work is chopped up and distributed. The cost of this elapsed time is driven primarily by the business cost of time, not by the product development team’s pace and cumulative salaries.


Photographer: Sonja Langford | Source: Unsplash

7. The way in which the work can be distributed is significantly affected by the nature and structure of the work itself. For example, adding a fourth developer to work on a specific chunk of code will provide less output than adding the first one. They will step on each others’ toes and generally get frustrated. Best to have the developers self-organise based on the discovered architecture.


Photographer: chuttersnap | Source: Unsplash

8. There is also a natural organisational upper bound to adding people, as per the old software industry aphorism that “9 mothers are unable to deliver a baby in 1 month”.

9. Onboarding new team members will vary based on both the quality of the people and the ability of your organisation to onboard them: from soft factors like cultural fit, to harder factors like the existence of accurate and up-to-date documentation, to the raw curiosity the person has in your business and product.

10. As useful as they are, numerical optimizations are only part of the whole picture. You still need to create teams, delegate work properly, and set healthy boundaries for accountability. The numbers don’t absolve you of good sense managerial practices.


Photographer: Pascal Swier | Source: Unsplash

There you have it. These form a baseline of the thought process required to truly optimize delivery of new products, particularly for established companies, using adaptive innovation.

Keep up with me by signing up over here for updates about my upcoming book on adaptive innovation.


Filed Under: metrics, velocity Tagged With: adaptive innovation management, quantitative product management

Why delivery velocity is a broken metric

June 14, 2019 by LaunchTomorrow

“For by the ultimate velocity is meant that, with which the body is moved, neither before it arrives at its last place, when the motion ceases nor after but at the very instant when it arrives… the ultimate ratio of evanescent quantities is to be understood, the ratio of quantities not before they vanish, not after, but with which they vanish” –Principia Mathematica by Isaac Newton

I’m not a mathematician, but I’m not afraid of numbers. I’ve drifted through the standard high school and college math requirements, admittedly with curiosity. And I’ve hung around the hedge fund industry, software engineers, and actually trained mathematicians for long enough that math’s kind of rubbed off on me. Yet my skillset is probably best described as a mix of product and, more recently, delivery management with an academic foundation in accounting.

Over time, I’ve noticed that cost accounting and traditional/waterfall project management operate primarily on absolute values assuming they shouldn’t change. Especially with respect to budgets and time. In fact, variance or change is almost a dirty word in that context. Today’s markets, especially in the context of new product development, are different from the heyday of cost accounting in the 1950s. Back then, you could easily assume stable incremental growth in demand in many markets for 15-20 years. And actually be right. Usually you wanted to establish a budget to control unnecessary costs, because everything else was likely to stay the same.

[Image: finite difference method]

Most choices in a big company don’t have the same implied cost of time. The value of time is assumed to be incalculable, more philosophical than practical. And it is, if you are stuck using “absolute” values.

Yet you can value time and derive implications for velocity if you use finite difference methods. Thinking in relative terms lets you consider relative profitability increases in the moment, rather than always measuring against an annual or quarterly budget yardstick that was determined much earlier, when you knew less, and that is often out of date by now.
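A minimal sketch of that finite-difference mindset, using an invented profit model rather than anything from a real budget:

```python
# Approximate d(profit)/dt from two nearby release-date scenarios,
# instead of comparing absolutes against an annual budget.
def profit(release_week: int) -> float:
    # Toy model (invented): each week of delay erodes first-year profit.
    return 400_000 - 9_000 * release_week

dt = 1  # one week
d_profit_dt = (profit(13) - profit(12)) / dt
print(d_profit_dt)  # marginal profit impact of one more week of delay
```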

An invitation

I am just discovering the outer edges of this at the moment. Please let me know if you have any suggestions or feedback, and sign up below if you’d like to follow me as I share more on this topic.

Keep up with me by signing up over here for updates about my upcoming book on adaptive innovation.


Filed Under: velocity

Luke Szyrmer is an innovation and remote work expert, and the author of the #1 bestseller Launch Tomorrow. He mentors early-stage tech founders and innovators in established companies.

Copyright © 2021