“There were only an estimated two to five thousand humans alive in Africa sixty thousand years ago. We were literally a species on the brink of extinction! And some scientists believe (from studies of carbon-dated cave art, archaeological sites, and human skeletons) that the group that crossed the Red Sea to begin the great migration was a mere one hundred fifty. Only the most innovative survived, carrying with them problem-solving traits that would eventually give rise to our incredible imagination and creativity.” –Mark Sisson
Imagination fed into our uniquely human ability to cooperate flexibly in large numbers. So fast forward to today. Our most valuable and exciting work, particularly in the context of innovation, still relies on our ability to imagine what needs to be done, start, and continuously course correct.
In this case, we’re using imagination to structure and agree on how work needs to happen, and to map that to a subjective estimate of effort.
First, we imagine what needs to be built, why it needs to be built, and how it needs to work. Then, we subdivide the big overall vision into lots of little pieces, and divvy it up among a group of people who go execute on the vision. Before they do that, though, these people imagine, analyze, and discuss doing the work involved on each specific piece of the overall vision. They all need to agree how much effort it will take to complete that task. If there are differences of opinion, they should be ironed out up front.
If done successfully, this generates full buy-in and alignment from everyone involved. Even if the end product isn’t a physical thing, this approach works. The benefits of trusting people and harnessing all their energy and imagination far outweigh the inherent risks. It’s already done by tens of thousands of teams around the world in various digital industries, including software.
Relative Cognitive Effort is what we’re imagining.
The key number used for tracking this is a measure of how much “cognitive effort” was completed over a predetermined unit of time. Agile and scrum use the concept of a story instead of tasks, to help describe complex needs in narrative form where needed. Usually this includes elements such as: the user problem, what’s missing, and acceptance criteria for the required solution. The unit of measure for the cognitive effort expected to complete a story is therefore called a story point.
Imagining size
Each story is sized in terms of story points. Story points themselves are quite abstract. They relate to the relative complexity of each item. If this task is more complex than that one, then it should have more story points. Story points primarily refer to how difficult the team expects a specific task to be.
For example, it’s more precise to track the number of story points completed in the last 2 weeks than the raw number of stories completed, as stories can be of different sizes.
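To make the difference concrete, here is a minimal sketch of tallying both measures for one iteration. The story names and point values below are invented for illustration:

```python
# Hypothetical completed stories for the last two-week iteration;
# point values are the team's own relative estimates.
completed = {
    "X101 login form": 3,
    "X102 password reset": 5,
    "X103 audit logging": 8,
    "X104 copy tweaks": 1,
}

story_count = len(completed)        # raw count: 4 stories
velocity = sum(completed.values())  # cognitive effort: 17 story points

print(f"{story_count} stories, {velocity} story points completed")
```

Two iterations with the same story count can represent very different amounts of effort, which is why the point total is the more useful number to track.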
Now it’s time for a few disclaimers…
1. Story points are not measures of developer time required.
Cognitive complexity isn’t necessarily the same thing as how time-consuming a task will be. For example, a complex story may require a lot of thought to get right, but once you figure out how to do it, it can be a few minor changes in the codebase. Or it could be a relatively simple change that needs to be made many times over, which in and of itself increases potential complexity and the risk of side effects.
The main purpose of story points is to help communicate–up front–how much effort a given task will require. To have meaning for the team, the estimate should be generated by the team who will actually be doing the work. These estimates can then be used by non-technical decision-makers to prioritize, order, and plan work accordingly. They can then take the amount of effort into account and trade it off against the expected business value of each particular story.
2. Story points related to time are a lagging indicator for team performance.
The key, though, is that story points shouldn’t be derived as “1 story point = half a day, so this item will be 3 story points because I expect it will take 1.5 days.” That type of analysis can only be done after the fact, across entire timeboxes like a 2 week period. Instead, the team should be comparing the story they are estimating to other stories already estimated on the backlog:
Do you think it will be bigger than story X123? Or smaller?
What about X124?
The team needs to get together regularly and estimate the relative size of each story, compared to every other story.
This generates a lot of discussion. It takes time. And therefore estimation itself has a very real cost. Some technical people view it as a distraction from “doing the work”. Rightly so.
3. Story Points assume you fix all bugs & address problems as you discover them.
Only new functionality has a story point value associated with it. This means that you are incentivized to create new functionality. While discovering and fixing problems takes up time, it doesn’t contribute to the final feature set upon release, or to the value a user will get from the product.
Anything that is a bug or a problem with existing code needs to be logged and addressed as soon as possible, ideally before any new functionality is started, to be certain that anything “done” (where the story points have been credited) is actually done. If you don’t do this, then you will have a lot of story points completed, but you won’t be able to release the product because of the number of bugs you know about. What’s worse, bugfixing can drag on and on for months if you delay it until the end. It’s highly unpredictable how long it will take a team to fix all bugs, as each bug can take a few minutes or a few weeks. If you fix bugs immediately, you have a much higher chance of fixing them quickly, as the work is still fresh in the team’s collective memory.
Fixing bugs as soon as they’re discovered is a pretty high bar in terms of team discipline. And a lot will depend on the organizational context where the work happens. Is it really OK to invest 40% more time to deliver stories with all unit testing embedded, and deliver fewer features that we’re more confident in? Or is the release date more important?
4. One team’s trash is another team’s treasure.
Finally, it’s worth noting that story points themselves will always be team-specific. In other words, a “3” in one team won’t necessarily be equal to a “3” in another team. Each team has its own relative strengths, levels of experience with different technologies, and levels of understanding of how to approach a particular technical problem.
Moreover, there are lots of factors which can affect both estimates and comparability. It wouldn’t make sense to compare the story point estimates of a team working on an established legacy code base with a team who is building an initial prototype for a totally new product. Those have very different technical ramifications and “cognitive loads”.
Conversely, you can compare story points over time within one team, as it was the same team who provided the estimates. So you can reason about how long it took to deliver a 3 story point story now vs. six months ago–by the same team only.
Wait, can’t Story Point estimation be gamed?
As a system, story points gamify completing the work. Keen observers sarcastically claim they will just do a task to help the team “score a few points”.
But then again, that’s the idea behind the approach of measuring story points. To draw everyone’s attention to what matters the most: fully specifying, developing, and testing new features as an interdependent delivery team.
Moreover, all of this discussion focuses on capacity and allocation. The key measure of progress (in an agile context) is working software. Or new product features in a non-software context. Not story points completed. If you start to make goals using story points, for example for velocity, you introduce trade-offs usually around quality:
Should we make it faster or should we make it better?
Why not accumulate some Technical Debt to increase our Velocity?
Story points completed are only a proxy for completed features. They come in handy in scenarios where you don’t have a clear user interface to see a feature in action. For example, on an infrastructure project with a lot of back-end services, you might not be able to demo much until you have the core in place.
Example: Adding technical scope to an already tight schedule
On a client project, I had a really good architect propose and start a major restructuring of the code base. It was kicked off by his frustration with trying to introduce a small change. A fellow developer tried to add something that should have taken an hour or two, but it took a few days. The architect decided the structure of the project was at fault.
Yet this refactoring stretched into a few weeks, and the majority of the team was blocked from working on anything important. He was working on an important part of the final deliverable, and the work he was doing was necessary. But it would have been good to figure out the elapsed-time impact on the overall deliverable, so that it could be coordinated with everyone interested.
As the sprint ended, I proposed we define the work explicitly on the backlog, and estimate it as a team. This way, the architectural work would be a bit more “on the radar”. There were around nine tasks left. The team said the remaining work was comparable across all of them, and collectively decided it was about a 5 story point size per item. So we had added roughly 45 story points of effort.
Knowing that the team was averaging around 20 story points per elapsed week, it became clear we had suddenly added 2 weeks’ worth of work–without explicitly acknowledging what this might do to the final delivery date. While the architect was quite productive, and claimed he could do it faster, there was still an opportunity cost: he wasn’t working on something else that was important.
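The arithmetic behind that schedule impact can be made explicit, using the numbers from the story above:

```python
# Figures from the client story: nine remaining tasks,
# each sized by the team at roughly 5 story points.
remaining_tasks = 9
points_per_task = 5
added_points = remaining_tasks * points_per_task   # 45 story points of new work

weekly_velocity = 20                               # the team's observed average
added_weeks = added_points / weekly_velocity       # elapsed-time impact

print(f"Added {added_points} points ≈ {added_weeks:.2f} extra weeks")
```

So the "invisible" refactoring translated into roughly two and a quarter weeks of elapsed time, a number senior stakeholders could actually act on.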
In this case, story points helped come up with a realistic impact to schedule that senior stakeholders and sponsors needed to know about. The impact on the initial launch date was material. So the estimation with story points helped provide an “elapsed time” estimate of an otherwise invisible change.
While not perfect, Story Points are primarily a tool for capacity planning, not whip cracking.
So to step back, you can see that story points are a useful abstraction which gets at the core of what everyone cares about: new product features. While subjective, for the same task–as long as it’s well defined–most of the members of a team usually come up with pretty close estimates. It’s kind of surprising at first, but eventually you get used to it. And you look forward to differences, because a difference means there may be something that needs to be discussed or agreed first. That is the primary purpose of story points. As a side effect, they can help you get to grips with a much larger backlog, and plan roughly how many teams need to be involved.
However, this approach only works within relatively strict parameters and disclaimers if you want the numbers to mean anything. It is at a level of resolution that proxies progress, but makes external micromanagement difficult. If you want the team to self-manage and self-organize, this is a feature not a bug of the whole approach. Ultimately the core goal is correctly functioning new features. Best not to lose sight of that.
In my last post, I explored the implication of a shift in importance and value of resources. Given increasingly shorter time frames for product life cycles, I think time is an increasingly undervalued resource. Zooming in to a sub-micro level, I think we’re also looking at a paradigm shift with resource allocation within high technology companies too.
Regardless of technology background, all stakeholders usually negotiate around schedule. Time is the least common denominator, from an accountability perspective.
In a traditional project approach, the team would figure out and agree the scope up front. These requirements would be fixed, once they are translated into cost and time estimates. Dates would also be agreed up front. In this case, there is a lot of analysis and scrambling up front, to try to learn and decide everything before knowing it all. In practice, this front-loaded exploration takes time. Regardless of whether the product delivery team is actually working on the product, this elapsed time on the “fuzzy” front end is added to the final delivery date. It takes a lot of time to define and estimate all of the work needed to deliver the scope. And in practice, this backlog will only help us figure out when the project or product is “done”, which in and of itself, has no meaning to clients or salespeople. It is easy to overlook this full-time cost of trying to fix and define all work up front, particularly since the people doing this work can usually get away with not “counting” this time as part of delivery.
standard approach: agile in a waterfall wrapper
And since scope is fixed, and something needs to act as a pressure release valve, typically one of the bottom three triangles on the left suffers: time, quality, or cost. Then, spending months tracking project progress with limited client interaction (because it’s not “done” yet) is yet another waste of elapsed time.
There is a way to significantly reduce this waste, by bringing in the client early and maximizing learning in a highly disciplined structure. In an Agile approach, the exact opposite approach is taken. We don’t try to fix scope up front; we fix the rules of engagement up front to allow both business and technical team members to prioritize scope as they go.
Instead, we strictly define business and technical criteria for a project up front, without fully agreeing what the scope is. So, we agree that we will spend up to $185k, that quality is ensured with automated testing, and that we have 3 months to deliver something sellable. We may only deliver 1 feature, but if it’s a valuable feature then clients will pay for it. If all of these are unambiguous, then the product team itself can prioritize scope operationally based on what it learns from clients. For all types of products, ultimately the clients and the market will decide whether to buy whatever is being built.
start work sooner, ship more, and incorporate client feedback sooner
What’s fundamentally different here? Scope is defined by a series of operational or tactical decisions made by the product team, not strategic ones defined externally to them. Senior business stakeholders shouldn’t need to follow the technical details of what’s in a product and what part of the project is “done”. That gets down into too much detail, and it communicates a lack of trust in the judgement of a highly paid team of technical experts they meticulously recruit and train. It also undermines the team’s sense of outcome ownership, because everything about their work is defined exogenously and just dropped on them.
What is the total cost of having a waterfall wrapper around agile teams?
Clearly efforts need to be coordinated across an organisation. Trying to use detailed waterfall-style up front planning will cost you elapsed time and may cost you the market opportunity you’ve identified. It’s better to have shared access to backlogs and agile’s drive to deliver potentially shippable software on a short cadence. Because you know you can use anything that is done by another team. And you can estimate or prioritize based on an open discussion among teams.
Last week, I pulled out the critical thing that Steve Jobs did upon returning to Apple Computer when its stock price was sagging. He went after a major systemic factor that was holding back release dates: too many priorities. If you are really going after top performance, you need to look at all factors, including the global ones.
When standing on the top of a hill, every way you look is down. That’s the definition of a maximum. The thing is, though, you might not be standing on the highest possible hill or mountain in the area. And it’s difficult to know that. You typically won’t see Mount Everest unless you are in the Himalayas.
maximizing velocity: every way you look is down
The same is true when you are maximizing velocity and looking for the top of the parabola. There is a lot you can try in order to improve a team’s velocity. But it’s worth checking whether the team is being held back by systemic factors that you can’t see from where you are, or whether it really is just a team-specific challenge.
It’s quite likely you don’t even see the potential global maximum, because of pre-existing company culture and procedures, particularly in a larger stable company. It’s worth trying to improve velocity locally, at the level of one team. But also keep in mind that you might be climbing the wrong hill. The higher hill usually starts with improving the context in which the team operates. A low velocity may simply be a reflection of a difficult culture and context.
local vs global maximums: where is your product development really?
Three Examples of Systemic Factors
Before we deep dive into improving team-specific output within each team, here are three examples of systemic factors I’ve seen at client sites that slow down team velocity:
1. Resource Thrashing
2. Too many chiefs
3. Communication overhead
Let’s start at the top, shall we?
1. Resource Thrashing
If your true priorities aren’t clear, this impacts you on many levels. Before getting to the somewhat obvious operational impact, consider the revenue impact first. Yes, revenue!
source: Booz, Allen, Hamilton, Harvard Business Review
According to “Stop Chasing Too Many Priorities” by Paul Leinwand and Cesare Mainardi in the Harvard Business Review, having too many operational priorities undermines a company’s ability to outperform its peers in terms of revenue. Having too many “#1 priorities” will reduce revenue potential. It will be harder to communicate the vision to both customers and employees, as Jobs noted above. And harder to execute.
For example, one common trap is just to list everything you are doing as a priority. Yes, every cost and effort needs to have a goal and be justified. But this approach muddles what exactly needs to change. Because everything we are working on is a priority. It is easier to sell to existing employees and shareholders, but it doesn’t really lay the groundwork for anything to change.
Operationally, Steve Jobs’ focus on the 4 quadrants solved what I call the “denominator problem”:
people per product = total people ÷ number of products or projects
The larger the number of “buckets” you need to fill, the higher the denominator. The higher the denominator, the fewer resources you have to succeed with each product, new or existing.
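The denominator problem is easy to demonstrate with a toy calculation. The headcount and product counts here are invented for illustration (though 15 product platforms echoes the Apple example later in this piece):

```python
def people_per_product(total_people: int, product_count: int) -> float:
    """The 'denominator problem': average headcount left per product."""
    return total_people / product_count

# A hypothetical org of 60 engineers, spread across 15 vs. 4 product lines:
spread_thin = people_per_product(60, 15)   # 4.0 people per product
focused = people_per_product(60, 4)        # 15.0 people per product

print(f"15 products: {spread_thin} people each; 4 products: {focused} people each")
```

Cutting the product count from 15 to 4 nearly quadruples the staffing available to each remaining product, without hiring anyone.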
This is the true cost of lack of focus. Because if you are under-resourcing your product teams, you are effectively setting them up for failure and yourself up for disappointment as a decision-maker.
Signs of this would include:
Heavy reliance on traditional project management (waterfall): as there are lots of resource conflicts, you need a caste of professionals to manage this, each with their own specialty. Which adds more cost. Note that they don’t actually contribute to the work that needs to be done directly.
partial employee allocations: For example, allocating 5 people at 20% will mean that all 5 people will spend most of their time in status meetings and unable to actually deliver anything. But you have the slot fully allocated, right?
shifting team structures: If needs are constantly changing due to thrashing, then there is constant complexity around what needs to be done, who needs to do it, and by when. So you’re spending a lot of time figuring out how to achieve the new goal with the limited resources you have, and whom you can nab from elsewhere.
2. Too many chiefs
For anyone who remembers high school physics, there was one important distinction between velocity and speed–as my friend Andy Wilkin recently pointed out. Velocity had an implied direction. One direction.
Velocity measures speed in a specific direction
If each manager pulls in their own direction, they collectively turn velocity into speed. A low speed. And effectively no clear direction.
There is value in having specialized managers who look after shared company concerns–ones which cut across multiple products, for example a function like DevOps or a shared database infrastructure. But then you also have functional managers, like QA. And project or delivery managers who are on the hook for getting the team to ship. And geographic managers who keep close tabs on staff in remote offices, possibly because they can charge out the time. And you end up in a situation with a lot of managers who mean well, who want to stay informed, but who don’t really contribute to the work which needs to be done.
This is a systemic problem of unclear goals, covered up by having layers of people who are responsible for slices of what needs to be delivered. Here are a few metrics you can track:
cost of going in the wrong direction: Often a side effect of having too many stakeholders, you can end up building things which sound good to internal decision-makers or committees, but which customers couldn’t care less about. In other words, the “voice of the senior manager” booms louder than the voice of the customer. (total cost = number of man-months × monthly burn rate)
cost of oversight crowds out budget for people actually doing the work: a side effect of having so many managers is that you don’t have enough money left over to hire competent people to do the work. Managers are expensive and, ultimately, they aren’t really required to accomplish the goal. Just to structure the work and monitor progress. Some of that is necessary, but ideally it’s done by people doing the work with a high level of trust and a minimum level of oversight.
ratio of stakeholders to team members at meetings: This originated as a snarky observation on my part. Doesn’t make it less true. For operational meetings, such as a development team standup, if most of the people attending say they are “just listening today”, that means they aren’t adding value and they are taking up others’ time. To be followed up by more meetings with other stakeholders and managers to discuss what is happening at the standup. In practice, there is a high financial cost to all of this, and this ratio is a good proxy for that cost, even if employees don’t know who makes how much.
3. Communication overhead
One of the older pearls of wisdom that has been floating around the software industry is: “9 mothers won’t be able to deliver one baby in 1 month.”
Your ability to add people to a project will typically be constrained by the structure of what you’re building, but also by sheer communication issues. Before anyone can contribute to a project, they need to have enough context to be able to do so. That is harder than it seems. For one, they need to know how their bit contributes to the overall vision, so they have to understand enough of the big picture to locate where their addition should go. For another, they need to know who knows what–who to ask specific questions.
Much of this accumulates based on natural interaction patterns in groups of people. Each person will need to relate to every other person, even the junior people to the senior people. So every important message needs to go out to everyone. And it also needs to be understood by everyone.
Common patterns to make this “work”:
hub and spoke pattern, where the manager is the source for all decisions and communication. Typically, this results in the manager’s time being a bottleneck, and most of the team sitting around and waiting for instructions. And overall progress is slow.
peer-to-peer pattern, or the self-organizing one. This is much more effective and immediate, particularly if there are differences of experience, skills, or knowledge among team members. But this is often difficult to scale up. Each additional node added to a network of peers adds increasingly more communication overhead. A flat network of this type has n(n−1)/2 connections, so 25 peers share 300 distinct person-to-person connections.
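A quick sanity check of that communication overhead, using the standard pairwise-connection formula for a fully connected group of n peers:

```python
def pairwise_connections(peers: int) -> int:
    """Distinct person-to-person links in a flat network: n * (n - 1) / 2."""
    return peers * (peers - 1) // 2

for n in (5, 10, 25):
    print(f"{n} peers -> {pairwise_connections(n)} connections")
# 5 -> 10, 10 -> 45, 25 -> 300
```

Notice how quintupling the team from 5 to 25 people multiplies the number of communication paths by thirty, which is why flat self-organizing teams are usually kept small.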
In practice these sheer numbers are combined with either inexperienced or offshored staffers or partial allocations of busy people who can’t be 100% spared. For the first, it’s clear why they struggle to contribute (at least without direct access to managers and mentors who are too overstretched to help). For the second, if you have a senior BA allocated at 20% to a project, most of that time will be attending status and planning meetings just so that she has context. So there is almost no time available to do any work.
So in short, for discovery-intensive work like new product development, you are also setting yourself up for difficulty if you put too many people on the product.
Wait, so how do you get a globally optimal outcome if you can’t add either staffers or managers?
Beyond a certain point, it doesn’t make sense to add more. You will see it in your output metrics. And ideally that output goes in front of customers or prospects frequently (with commercial intent of course).
Small, highly experienced teams who know what they are doing. And with direct access to customers, to ensure they are building something customers care about. These people are probably on your team already. Get rid of everyone else, and give these team members the space to do their thing.
If you want to increase learning and speed, add more independent teams with the same characteristics. But don’t make one team too large in the hopes that the software factory will output more web screens and database tables per month. Because either it won’t or it will, and in both cases, you can end up disappointed.
Note that this is the exact opposite of what large, efficient and established companies are used to doing. Being deliberate, effective and thoughtful will lead you down a much better path.
After a nasty battle with Apple shareholders, founder Steve Jobs was ousted. He went on to create NeXT (later acquired by Apple) and to build up Pixar (later acquired by Disney). In the meantime, Apple drifted as a company. It proliferated product lines. Lost focus. And the share price entered a death spiral. A few years later, in 1997, he was recruited back to save the company from very poor public share price performance.
“Saint Steve” at Macworld 1998
When we got to the company a year ago, there were a lot of products. There were 15 product platforms and a zillion variants of each one. I couldn’t even figure this out myself after about three weeks. I said, “how are we gonna explain this to others, when we don’t even know which products to recommend to our friends?”
In this keynote at 1998 Macworld, he announced early successes, such as a 3rd consecutive profitable quarter. In my opinion, one of the most powerful parts of his talk was the following grid:
The 4 apple quadrants, source: Apple
In effect, after a strategic review of all 15+ product lines, Jobs decided that these four were the only products worth focusing on. Consumer was aimed at consumers and education. Pro was aimed at publishing and design. Everything else was shut down. Here was his rationale:
As a matter of fact, if we only get four, we could put the A-Team on every single one of them. And if we only have four, we could turn them all every nine months instead of every 18 months. And if we only had four, we could be working on the next generation or two of each one, as we’re introducing the first generation. So that’s what we decided to do: to focus on four great products. And the first one that we introduced of course was the Desktop Pro product.
Notice that the main practical reason Steve Jobs cited for this change was the reduced product release time, or cycle time–in agile terms, an improvement in velocity. He could free up a lot of resources, focus them just on these 4 great products, and get out of the bureaucratic quagmire that was holding the company back.
This was the kind of “zero-based thinking” decision that a hired gun CEO would be afraid to make, but a founding CEO could find the courage to do. To implement a company redesign and rethinking from first principles, rather than just tinkering around the edges. Steve Jobs knew what life before Apple was like, because he was there. And he already had a successful “go” at building the company. So it didn’t take much to identify that the company needed to be slimmed down and focussed in order to become stronger.
Of course, there was uproar amongst developers with vested interests in existing product lines. To a lesser extent, among clients of the cancelled products. But this marked the beginning of Apple’s long climb to a corporate icon and stock market darling.
The contrast between ideal and real is powerful. Makes you focus.
The promise
what actually happened
By not paying enough attention to the gap between the ideal and the real, you may end up with a complete blowout like that of Fyre Festival founder Billy McFarland. I heard of it only long after it happened, by watching the Netflix documentary. Clearly I’m not the brightest in terms of social media usage.
The biggest takeaway I had from the Netflix documentary was McFarland’s unwillingness to face the operational realities of what he was promising, which ultimately led to his downfall. He kept pushing back on anyone who brought him bad news, telling them to “stop being negative” or just firing them. If he had taken in the feedback earlier, he and his team could have avoided the whole blowup.
I’ve found a simple tool to help straddle the vision with operational reality.
I first heard of it from an older book called Lean Thinking by Womack and Jones. Basically, it should be possible to measure what the average output rate needs to be in order to meet customer (or executive sponsor) expectations. They call this takt time. The term originated in Germany in the 1930s, where it was used as an output “pulse” benchmark.
The example they used was that of a bicycle factory. In short, “takt time synchronizes the rate of production to the rate of sales to customers”, according to Womack and Jones.
While initially bicycles were built in larger batches based on the orders coming in, this usually caused internal operational conflicts when multiple customers were expecting delivery at the same time. Moreover, it usually took weeks to deliver anything. And the orders were usually late.
Instead, by figuring out that the average expected output rate was one bicycle every 10 minutes, everything became much simpler. In effect, if the factory could guarantee a “pulse” of output at that rate, it would be certain to meet typical customer expectations. This required a significant change in the floor layout of the factory, upskilling employees to be more flexible, and changes to production processes.
Here is a smaller scale example of the change in mindset of what needs to happen at the factory, in order to deliver to a takt time:
Same amount of time, different workflow, different results
But ultimately they invested enough time and effort to reach the required level, which meant their customers always got the right number of bikes on time. That was a big change from before.
How does this work?
Let's say you need to deliver 59 orders averaging 192 bikes each over a year.
So effectively, to meet this level of demand, the factory needs to deliver one bike every 10 minutes. This is a fairly objective measure, and one that can be observed every 10 minutes, so there is a high frequency of potential observations. You'll know pretty much in real time whether that pace is being achieved, just by looking at a well-placed stopwatch.
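The arithmetic behind that "one bike every 10 minutes" figure is just available time divided by demand. A minimal sketch, assuming a hypothetical working calendar of 240 days at 8 hours per day (the book's exact calendar isn't given here):

```python
# Takt time: synchronize the output "pulse" to the rate of demand.
# Demand figures come from the example above; the working calendar
# (240 days x 8 hours) is an illustrative assumption.
annual_bikes = 59 * 192                # 11,328 bikes per year
available_minutes = 240 * 8 * 60       # 115,200 working minutes per year

takt_minutes = available_minutes / annual_bikes
print(round(takt_minutes, 1))          # ~10.2 minutes per bike
```

With those assumptions the takt comes out at roughly 10 minutes per bike, matching the factory's benchmark.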
“Obviously the aggregate volume of orders may increase or decrease over time, and takt time will need to be adjusted so that production is always synchronized with demand. The production slots created by the takt time calculation are clearly posted so everyone can see where production stands at every moment. This can be done with a simple whiteboard in the product team area at the final assembler, but will also involve electronic displays in the assembler firm and electronic transmission for display in the supplier and customer facilities as well.” –Womack and Jones, Lean Thinking
Once this number is calculated, you have a clear operational benchmark to hit. If you are used to batching production in runs of approximately 200 units, you'll most likely need to change how you work in order to fully complete one bicycle every 10 minutes. But from a monitoring perspective, it becomes a powerful and objective way to know what is happening on the factory floor.
An even simpler example
Let’s say you’re running a hamburger stand for 4 hours a day.
What is the takt time for 50 customers? 75 customers?
Time available is 4 hours (240 minutes)
50 customers – takt time is 240 / 50 = 4.8 min
75 customers – takt time is 240 / 75 = 3.2 min
From a management perspective, you need to organize the process, systems, and people to deliver a hamburger within that time frame if you want to keep up with demand as it comes in. In practice, these are averages: customers might clump up and arrive all at once, particularly if you are doing a good job. Having long queues can even be a marketing strategy. But from an operational standpoint, you have a pretty clear benchmark for how much time you have to serve each customer.
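The hamburger-stand numbers above can be reproduced in a couple of lines, which also makes it easy to re-run the calculation as demand estimates change:

```python
# Takt time for the hamburger stand: available service time / expected demand.
available_minutes = 4 * 60  # a 4-hour service window

def takt(customers):
    """Average minutes available per customer if demand arrives evenly."""
    return available_minutes / customers

print(takt(50))  # 4.8 minutes per customer
print(takt(75))  # 3.2 minutes per customer
```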
The new technology case: B2B enterprise software example
In fact, with new technical products the same approach makes it relatively easy to see whether your operational realities are misaligned. Break down the work, then contrast the expected velocity of a new product team with the rate at which they are actually going (all assuming they're building the right thing, of course).
After a few years of significant deliberation, an executive team at a client finally had a view on what needed to be built. There was a decent technical hypothesis for how it could work. On the market side, they decided that current clients were the best initial adopters, as they were likely to pay. But that meant the new product had to "hold water". So we looked at de-risking the technical and delivery risk, since that was the biggest concern.
At this stage, for the purposes of takt time, we treated the project sponsors as one client, even though the product ultimately needed to be sold to a wider market. Together with around 10 experienced people, I was thrown in to figure out how to deliver on the big product vision by the given date. We started digging deeper into exactly what software needed to be built.
What actually needed to be built to fulfill the vision.
As frequently happens in these scenarios, the number of features needed to deliver on the vision was surprising. It even surprised me how much had to be built before we got anywhere useful.
There was a gap between the wished for ideal and what was really possible:
Gap between the vision and reality
We broke down the product vision into stories, and started estimating roughly how much work this would be.
The key variables here are:
Story points: ye olde measure of complexity and "cognitive load"
Elapsed Time: both for values you expect and how long you are taking so far (if you’ve already started)
Start and Release Dates: Start was obvious. Release date was given by the executive board.
Sizing the scope by numeric gut feel
At that point, we had hashed out all of the bits and pieces and the important edge cases, so coming up with a range was a workable outcome.
Gut feel after a workshop like that is a good estimate, even if you don't have all of the details handy. Without a workshop, sure, I can come up with numbers and estimates, but they probably won't be that useful.
Start Date in this case was the next business day after workshop.
Release date was given from upstairs.
In terms of story points, to avoid giving the impression that a value is precise just because it's a number, I like to provide a range. The range consists of a pessimistic (higher) and an optimistic (lower) value. If I feel confident, the range will be narrower; if not, wider.
Once you have the range, figure out the total number of minutes in between the start and end dates.
Finally, calculate the implied pace per story point. In other words: for each story point, how many minutes do you have to finish it on average (assuming the planned release date is fixed)? This is called takt time in lean circles.
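The steps above can be sketched as a small calculation. The numbers below are hypothetical stand-ins, not the case-study figures; note that a bigger scope estimate implies a tighter (faster) takt:

```python
# Implied takt time per story point, given a fixed release date.
# All figures here are illustrative assumptions.
minutes_available = 90 * 8 * 60  # e.g. 90 working days of 8 hours each

# Estimate range: optimistic (lower) and pessimistic (higher) story points.
points_optimistic, points_pessimistic = 400, 540

takt_slow = minutes_available / points_optimistic   # if scope is small
takt_fast = minutes_available / points_pessimistic  # if scope is large

print(takt_fast, takt_slow)  # 80.0 108.0 minutes per story point
```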
Case study
Basic breakdown
In the above case, we've got an expected takt time range of 85-109 minutes per story point. This is just an expression of market or senior executive expectations. The analysis gives you something to contrast with what you actually observe on a continuous basis.
Behind the curtain
Simple stuff right?
Well, we were already a few sprints in by the point I'd done this analysis. It turned out we were clocking in at a cycle time of 174 minutes per story point, on average.
This type of contrast is enough to trigger a discussion. Certainly that’s what it did at the time. Ultimately, it became clear that a lot had to change if we really wanted to hit the desired dates. Or that they were just plain unrealistic. Which is fine as an outcome up front. It’s better to know that earlier to at least have the option of digging in a bit more.
In this case, we needed to improve our velocity by 2.2-2.8x, given that scope, date, and team composition were all fixed and we were already significantly beyond that initial start date. And that's assuming the complexity estimate range in story points was accurate.
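The required speedup is just the observed cycle time divided by the takt time implied by the time remaining. A minimal sketch, using the observed 174 minutes per point from above but hypothetical remaining-time and remaining-scope figures (the actual case numbers aren't reproduced here):

```python
# Required velocity multiplier to hit a fixed release date.
actual_cycle = 174.0  # observed minutes per story point (from sprint data)

# Hypothetical remaining budget and scope range, for illustration only:
remaining_minutes = 30_000.0
remaining_points_lo, remaining_points_hi = 380.0, 480.0

takt_if_small_scope = remaining_minutes / remaining_points_lo
takt_if_large_scope = remaining_minutes / remaining_points_hi

speedup_lo = actual_cycle / takt_if_small_scope  # optimistic scope
speedup_hi = actual_cycle / takt_if_large_scope  # pessimistic scope
print(round(speedup_lo, 1), round(speedup_hi, 1))  # 2.2 2.8
```

With those stand-in figures the required improvement lands in the 2.2-2.8x range; the point is that the calculation makes the gap explicit rather than leaving it to gut feel.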
Takeaways
Of course this opened many exploratory discussion threads: what to change in the company and workflow, and how to change it. Primarily, though, it framed the required scope as the output of a larger delivery system. That system was clearly inefficient. More importantly, it wasn't up to senior stakeholder expectations.
Since we were able to highlight this pretty early, it gave us a much better work environment. Accountability became palpable, because we were open about the unrealistic expectations before they crippled us, and we objectively communicated what our current resourcing and capabilities would mean for business outcomes.
What I liked most about takt time was that it gave a clear performance benchmark for the entire team to work towards, and for me to help them get there. It helped prevent miscommunication, blame, guilt, and lots of other nonsense that would happen without this kind of objective measure. And having a healthy and productive workplace is an ideal worth pursuing in my eyes.