At one client site, the CTO from the head office wanted to introduce metrics to help him monitor what was going on across all of the new product initiatives. Like the other delivery managers at the time, I understood his need. But when I opened the first spreadsheet defining the template I was supposed to report against, my jaw dropped.
It seemed like most of the numbers measured compliance with the centralized directives, particularly with respect to timeliness, with very little focus on whether what we were doing would actually make the company any money. Even if we hit every one of those timelines, we wouldn’t necessarily make anything until the delivery dates themselves arrived.
Months and years away.
Not everyone likes numbers. I get that. Once you grow a program beyond just one delivery team, though, the firehose of information coming at you becomes overwhelming. So it’s perfectly natural to start looking to numbers to help figure out what needs attention.
As a leader, just like this CTO, you have a very simple concern: anything that goes wrong will land in your lap. So the more proactive you are, the fewer crises you deal with. And your calendar stays in your hands.
The flip side, though, is that if you try to manage this with a “control” and “risk avoidance” mindset, you’ll stifle your team’s ability to respond and learn. Even more so if you’re only focusing on schedule risks. If your schedule is at risk, you likely have too much scope and not enough client interaction driving product development.
The insights below are the ones I’ve found most useful, whether read somewhere or discovered myself, when choosing or constructing operating metrics around new product development.
1. Information and data is everywhere. Any given data point has low value. In contrast, the ability to contextualize data has enormous value.
The amount of potential data expands exponentially, making it increasingly valuable to find the signal in all that noise. In other words, interpreting data requires structure. That structure requires context. And interpreting context requires wisdom.
- Just because there is a lot of data doesn’t mean you can pull up a meaningful table with exactly the summary numbers you need.
- And even if you have that table, you need to be able to interpret it subjectively.
- Not every number exists to be added, aggregated, maximized, or minimized; you need to know the real-world meaning of every number.
- And even if you do know the real-world meaning, you need discernment and wisdom to interpret it usefully.
For example, I had this problem before I took accounting classes during my Master’s in Finance. At the time, I was poring over the financial statements of hundreds of dotcom Internet companies. The truth was there were lots of numbers to look at, but I was mostly guessing what they meant. I could focus on just net profits and work back from there. Comparing net profit numbers across companies and across time would give me structured data as a table. Maybe useful. It depended on what was behind the number, and the real-world meaning varied significantly across those hundreds of dotcoms. And finally, at the time I didn’t have enough wisdom and discernment to know that looking at net profits for early-stage companies was simply premature. What actually mattered was learning and operational progress.
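To make that concrete, here is a minimal sketch in Python with invented company names and figures. The same structured table, ranked two ways, tells two different stories; only the context (early-stage vs. mature) tells you which ranking means anything:

```python
# Hypothetical illustration: all names and figures below are invented.
companies = [
    {"name": "MatureCo", "net_profit_m": 2.0,  "stage": "mature", "validated_learnings": 0},
    {"name": "DotcomA",  "net_profit_m": -5.0, "stage": "early",  "validated_learnings": 8},
    {"name": "DotcomB",  "net_profit_m": -0.5, "stage": "early",  "validated_learnings": 1},
]

# Structure without context: a simple ranking by net profit.
by_profit = sorted(companies, key=lambda c: c["net_profit_m"], reverse=True)
print([c["name"] for c in by_profit])  # DotcomB "beats" DotcomA on profit alone

# With context: for early-stage companies, learning and operational
# progress matter more than the current bottom line.
early = [c for c in companies if c["stage"] == "early"]
by_learning = sorted(early, key=lambda c: c["validated_learnings"], reverse=True)
print([c["name"] for c in by_learning])  # DotcomA leads despite the bigger loss
```

The numbers didn’t change between the two rankings; only the question asked of them did.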
For new products, the key context is whether you are learning something useful, as that will shorten your path to profitability (even if that learning process is initially a net loss in pure cash flow terms). In my opinion nowadays, “will the project deliverables arrive on time?” is the wrong question to be asking. They have to arrive (or at least be defined) early enough for a prospect to want to pay for them. Yes, dates are important; let’s just keep them in the right context.
This goes much deeper. My friend Perry Marshall has said that the #1 success skill in the 21st century (one that almost nobody talks about) is discernment: identifying the “vital few” factors that actually matter. The ability to discern is based on wisdom. Applying timeless principles to your specific situation. Building to last, not just hitting next quarter’s earnings targets.
2. The high bar for metrics: what decision would this metric help me make? And under what circumstances?
I interned in IT at GMAC Mortgage, General Motors’ home mortgage subsidiary, while still in college in the mid-1990s. My boss at the time had me look into setting up the company’s first-ever webpage and intranet. One late night, we discussed whether or not the site should display a “hit counter”: a pre-built component shown directly on the page, which went up by one each time somebody browsed to it.
The higher the hit counter was, the more popular that page or the company’s website looked. But in and of itself, having a higher number of hits didn’t mean anything. Certainly not to us. It wasn’t a number that helped make a decision or indicate a change of tactics was needed. What mattered was who was coming to the page, why, and whether they were achieving what they wanted.
This nugget of wisdom came from Avinash Kaushik’s book Web Analytics 2.0. The example he used was website hits. Yet his point goes much deeper than interpreting website traffic. If you are tracking an operating metric that doesn’t help you or your organization make a decision of some kind, then it’s just not useful.
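Here is a small sketch of that decision test, using the hit-counter example. The visit records, segments, and the 50% threshold are all hypothetical; the point is that raw hits answer no question, while a segmented goal-completion rate plus a decision rule suggests a concrete action:

```python
# Hypothetical visit records; segments and the signup goal are invented for the sketch.
visits = [
    {"segment": "prospect", "completed_signup": True},
    {"segment": "prospect", "completed_signup": False},
    {"segment": "prospect", "completed_signup": False},
    {"segment": "existing_customer", "completed_signup": True},
]

# The hit counter: goes up with every visit, but prompts no decision.
hits = len(visits)

# A decision-oriented metric: of the people we want to convert, how many did?
prospects = [v for v in visits if v["segment"] == "prospect"]
signup_rate = sum(v["completed_signup"] for v in prospects) / len(prospects)

# The decision rule is what makes the metric useful: if prospects aren't
# converting, the page (not the traffic volume) needs attention.
if signup_rate < 0.5:  # illustrative threshold
    action = "revise landing page"
else:
    action = "keep current page, grow traffic"
```

The same underlying data produced both numbers; only the second one tells you what to do next.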
There are lots of numbers you could be tracking. Just log into your company’s various systems to get a taste: sales pipelines, ticket tracking, financial metrics. But remember that you will get the most value by identifying the vital few that actually matter most to your company. Which, by definition, are specific to your company. They will require some deeper discussion to get right, and effort to align everyone in the company around them.
3. The GPA effect…
In the mad rush to the real world, most of my fellow students competed for the most prestigious jobs based almost solely on their grade point average (GPA). The conventional wisdom applied by investment banks and consulting firms was that a high GPA meant that you were smart and hard working. You stayed up and studied, so that you earned good grades, and therefore you’d be reliable.
Ultimately, for the most popular gigs, GPA became a first sorting mechanism. As recent grads, we were a commodity on the job market. If hundreds of students applied for the same investment banking analyst position, then the banks only interviewed the people with the highest GPAs.
In practice, though, this was a classic example of a heuristic with a lot of potential faults:
- Did someone with a 3.94 GPA show more promise than someone with a 3.92 GPA in a role that largely involved the ability to crunch spreadsheets for 50 hours straight?
- What about relative strengths and preferences? Would someone gregarious and social with a slightly lower GPA really be a worse salesperson at a bank?
- Didn’t this skew even further towards people already coming from privileged and wealthy backgrounds who had benefited from and could continue to afford nearly infinite tutoring? Didn’t this reduce the diversity of new hires?
- Wouldn’t this skew towards people with a perfectionist streak? And reward them for high compliance? Always looking for the “right” answer in an academic context doesn’t translate that well to every business context (such as new product development).
In contrast, there were guys on the crew team who clearly had only crew and beer on their minds. A low GPA was a lagging indicator of their relative lack of maturity more than anything else. In that case, it was probably a useful metric for hiring purposes, particularly as a way to screen out bad choices.
So GPA was useful as a metric to help prevent a complete blowup in a hiring scenario, but downright dangerous as the only data point that mattered.
I suspect the same is true of many operating metrics in a larger business. Use them to highlight problems or potential problems, but beware of optimizing for them to the exclusion of all else.
For example: optimizing for velocity at all costs. You can cut corners with quality. You can run your team into the ground. You can create lots of badly documented features. But you went fast, right?
Sadly, the most common reason for new product failure isn’t missing a timing window. It’s building something the market doesn’t want or care about. In which case, pursuing high velocity for its own sake is a fool’s errand.
That said, very low velocity isn’t good either. It means it’s time to dig in and figure out why the teams are struggling, and that something needs to change or be fixed.
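One way to frame this: use velocity as a problem detector rather than a target. The sketch below, with invented thresholds and story-point numbers, flags a collapse in velocity for investigation but deliberately stays silent when velocity is merely average:

```python
# A guardrail sketch: velocity as a warning indicator, not an optimization target.
# The thresholds and numbers are illustrative assumptions, not recommendations.
def velocity_flag(recent_velocities, low=10, drop_ratio=0.5):
    """Warn when the latest velocity collapses versus the baseline of prior sprints."""
    latest = recent_velocities[-1]
    baseline = sum(recent_velocities[:-1]) / (len(recent_velocities) - 1)
    if latest < low or latest < baseline * drop_ratio:
        return "investigate: team may be blocked or overloaded"
    return "no action from this metric alone"

print(velocity_flag([24, 26, 22, 9]))   # collapse: worth digging into
print(velocity_flag([24, 26, 22, 23]))  # normal variation: no decision needed
```

Note the asymmetry: the metric can trigger an investigation, but a “good” reading never triggers a celebration, because going fast on its own proves nothing.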
4. When there is still too much data, give the most weight to data around your first order priorities.
If there is too much data, revisit your goals and priorities to figure out which metrics to pay attention to. Quite often, operating metrics are unclear or not useful simply because the stated goals are too nebulous or inaccurate to be useful. Or each stakeholder has their own goals to achieve, making it difficult to agree on a common set of metrics.