Launch Tomorrow

Landing Pages for your Lean Startup

The Magic of Modularity

March 26, 2014 by LaunchTomorrow

If you’re thinking of creating a new product, you don’t want to wait a week for test results. Especially in the early days, when you still need to make up your mind about what the product is.

The fastest way to speed up testing? Delegate tests and run them in parallel. The financial value of small, fine-grained tests which you can run quickly is immense: you can benchmark product ideas against each other, and you can complete hypothesis tests much faster.

When organizing your own workflow, modularity increases your ability to scale testing. Total elapsed time is much lower, because you can do many tasks in parallel and then combine the results at the end. This includes testing activity.
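
To make that concrete, here’s a minimal sketch in Python, assuming each idea can be scored independently (say, by a conversion rate pulled from your analytics tool). The function name and numbers are hypothetical placeholders, not a real API:

```python
# Minimal sketch: evaluate several product hypotheses in parallel,
# then combine the results in one final step.
from concurrent.futures import ThreadPoolExecutor

def fetch_conversion_rate(idea: str) -> float:
    # Placeholder: in practice, query your analytics tool here.
    return {"idea-a": 0.031, "idea-b": 0.012, "idea-c": 0.044}[idea]

ideas = ["idea-a", "idea-b", "idea-c"]

# Run the independent tests concurrently.
with ThreadPoolExecutor() as pool:
    rates = list(pool.map(fetch_conversion_rate, ideas))

# Combine at the end: benchmark the ideas against each other.
best = max(zip(ideas, rates), key=lambda pair: pair[1])
print(f"Strongest signal so far: {best[0]} at {best[1]:.1%}")
```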

You iterate out of that initial fuzzy zone, and into working on a product that sells based on a clear promise. Your product doesn’t need to be finished. Instead, the product idea must be clear, easy to understand, and attractive to your target audience.

For example, for years big banks and trading firms tried to create software for algorithmic trading. Traders used computers to execute simple strategies much faster than a person could. A computer can snatch and resell a product in milliseconds, if it identifies a price difference across two markets.

The principle is the same as using an eBay sniper. If you could have computers
1. identify opportunities
2. patiently wait for the appropriate moment,
3. buy and sell at exactly the best time,
the computers would execute transactions on better terms. A computer’s response time is much shorter than a human’s: electrical impulses sent from eye, to brain, to finger take ages, relatively speaking. That speed made a big difference.

A few years ago, banks started moving away from big cloud arrays of central processing units (CPUs) running standard code. Instead, they started using hardware from high-performance computing. Graphics hardware familiar to gamers, like graphics processing units (GPUs), found a new purpose.

The main difference between CPUs and GPUs? GPUs enforced the use of modularity and parallelism at a low level. Data is passed in, and the whole chip can be used simultaneously to process it. There were no serial bottlenecks of the kind that often arose because CPUs were general purpose.

The difference in performance was staggering. JP Morgan claimed that they had reduced the time required for overnight risk calculations down to one minute, with the same level of accuracy. While creating the FPGA arrays requires more effort up front, the benefit is clear.

Competitively, JP Morgan had much better information to act on: its calculations refresh on an interval of one minute, while its competitors only reconcile their overall risk once a day. Everyone else just “whips their horses” harder.

Low-level hardware modularity meant that JPM executed the same instructions much faster. It created the right environment: geeks run hundreds of similar calculations in parallel, and then combine the results as one final step.

Academics have replicated these results in studies. Compared to a standard CPU, the performance for simple arithmetic was 147x faster on modular hardware. It just so happens that the arithmetic calculates the price of a financial option. You pass in standardized data. Any part of the chip can run the calculation. Nothing changes while it all runs. At the end, the results are all summed up, and only then is the data updated.
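
Here’s a rough sketch of that same pass-in, fan-out, sum-at-the-end pattern in plain Python: pricing an option by Monte Carlo across independent chunks. The parameters are illustrative, and real GPU/FPGA pipelines do this at the hardware level rather than with processes:

```python
# Sketch of the map/sum pattern: independent chunks of option-pricing
# arithmetic run in parallel; results are combined only at the end.
import math
import random
from concurrent.futures import ProcessPoolExecutor

S0, K, r, sigma, T = 100.0, 105.0, 0.05, 0.2, 1.0  # illustrative inputs

def chunk_payoff_sum(n_paths: int) -> float:
    """Sum of discounted European call payoffs over n_paths paths."""
    total = 0.0
    for _ in range(n_paths):
        z = random.gauss(0.0, 1.0)
        s_t = S0 * math.exp((r - 0.5 * sigma**2) * T
                            + sigma * math.sqrt(T) * z)
        total += math.exp(-r * T) * max(s_t - K, 0.0)
    return total

if __name__ == "__main__":
    chunks = [25_000] * 8  # eight independent chunks, no shared state
    with ProcessPoolExecutor() as pool:
        partial_sums = list(pool.map(chunk_payoff_sum, chunks))
    price = sum(partial_sums) / sum(chunks)  # combine only at the end
    print(f"Estimated option price: {price:.2f}")
```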

Imagine being able to do the same for your business or startup.

You have a bunch of ideas, but don’t know where to start. If you can figure out how to sort out the best ones quickly and test many ideas in parallel, you can get clarity very, very, very fast.

You can use the same approach for individual benefits or even features. Hey, drill down as much as you want. The same logic applies. As the Beastie Boys say, “If ya got the skills, you pay the bills.” Easy.

If you want to find out more about how to do this, take a look at Launch Tomorrow.

Filed Under: experiments, marketing, modularity

How does Automated Testing make anyone more money?

January 31, 2014 by LaunchTomorrow

When developers start talking about test-driven development and automated testing, most product managers get antsy. The developers want to invest lots of time, without actually creating any new features. Oodles of cash disappearing, with seemingly nothing to show for it. Sounds like a terrible business idea.

Or is it?

There are three main business reasons why automated acceptance testing will make you more money, all rooted in getting rid of waste in your product development. Not only does acceptance test driven development (ATDD) help reduce really important structural costs and risks, automated tests bestow an existing product with a lot of sales and marketing mojo. A decent automated test suite helps the developers execute really fast on new ideas. Magic.

Enforcing well-thought-out specifications

First of all, a test can be used to enforce a specification. These specifications exist to be sure that the functionality isn’t changing, once it’s understood. Conversely, missing tests highlight missing understanding at the initial stage.

Historically, specifications have been written by reams of business analysts (BAs), negotiated, and nailed down as the ideal solution to a particular problem worth solving.

The spec would be tossed over the wall. Developers then try to make sense of it all. They turn it into code. Then testers use those same specs to confirm (manually) that the specification has been met. Sounds great in theory.

In practice, you play Chinese Whispers. What the client said isn’t quite what the BAs heard, which isn’t quite what the developers heard, which then is not what the testers heard.

This is hard work, particularly if you aren’t really clear about your big-picture business goals. If it’s not clear what the main goal was, then it’s very difficult to justify having the specification handed over in this way.

In contrast, if those initial specifications are turned into acceptance tests, you are forced to break this bigger vision down into specific functions in the code. Once the developers write code which passes those tests, the tests essentially enforce the original scope as specified. By converting the specification into code, the original spec becomes as immortal as the code itself. Acceptance tests exercise the code from the point of view of the client or end user, typically running all the way through the system. Unlike unit tests, they aren’t testing individual methods or classes.
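
To make this concrete, here’s a minimal sketch of what an acceptance test might look like in Python, with a hypothetical SignupService standing in for your real application. It drives the system through its public entry points, the way a user would, rather than poking at individual methods:

```python
# Sketch of acceptance tests (runnable with pytest). SignupService is
# a hypothetical stand-in for the real system under test.

class SignupService:
    """Stand-in for the real application."""
    def __init__(self):
        self._users = set()

    def register(self, email: str) -> str:
        if email in self._users:
            return "already registered"
        self._users.add(email)
        return "welcome"

    def is_registered(self, email: str) -> bool:
        return email in self._users

def test_new_customer_can_sign_up():
    app = SignupService()                       # Given a fresh system
    message = app.register("ada@example.com")   # When a customer registers
    assert message == "welcome"                 # Then they are welcomed
    assert app.is_registered("ada@example.com") # ...and actually on file

def test_duplicate_signup_is_rejected():
    app = SignupService()
    app.register("ada@example.com")
    assert app.register("ada@example.com") == "already registered"
```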

Whatever it is the customer wanted, automated acceptance tests confirm that that’s what they get, each time the tests are run. There could be edge cases which were missed in the original specification, but if there wasn’t a test for something, then it wasn’t initially specified.

Alternatively, the acceptance tests can be modified or expanded, as you discover new requirements, but then that’s a very conscious decision. Chris Matts has previously pointed out that writing your specifications as automated acceptance tests increases the amount of communication on a team, which maximizes learning and the embedded real options in your development process. By minimizing the Chinese Whispers between BAs and the manual testers, you are much more likely to deliver what you expect, and everyone is clearer on what needs to be done.

When you use acceptance test driven development, you get a high level of certainty in the moment that you’re delivering what you expect. If you manage to build out your acceptance test suite, you genuinely verify the relevant edge cases. It’s much more likely that you’ll catch side effects of changes you didn’t consider earlier.

In the end, you also deliver a much higher quality product.

Tests reduce the cost of release

Second of all, releasing software, particularly when regression tests take a long time, can be quite problematic. Not only is the release itself quite a headache, it costs some extra time. But that’s not the real problem.

The real problem lies with unreleased features. If you have a big backlog of unreleased features, it’s like having a pile of boots without soles in a shoe factory. You can’t sell them, yet you’ve already bought the raw materials and started the work. Those boots have near-zero value, because they aren’t done. The same thing happens with unreleased features. Any feature which is dev-done but not released is like a sole-less boot.

Typically, the cost of release is a big reason why companies don’t release half-finished software. What’s the fastest way to reduce that cost? Automated acceptance tests, and an automated release process. While it will take a lot of time and aspirin for the related headaches, having this in place can put an entire software business on a completely different footing (You got sole, baby!).

Moreover, with a regression test suite, you can have much greater certainty that none of the changes in a release candidate have broken pre-existing functionality. With automated unit and acceptance testing, you can completely eliminate the regression test component of a release. As soon as the manual testing is done for a given new feature, and/or any new test for that feature, you’re ready to release. Your cost of release goes down significantly in calendar time, developer time, and everything else. And your turnaround time is much, much faster.
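
Here’s a rough sketch of what that release gate might look like in practice, assuming a pytest-based suite; the deploy script is a hypothetical placeholder for your own release process:

```python
# Minimal release-gate sketch: run the automated regression suite and
# only proceed with the release step if everything passes.
import subprocess
import sys

def main() -> int:
    # Run the full automated regression suite.
    tests = subprocess.run(["pytest", "-q"])
    if tests.returncode != 0:
        print("Regression suite failed; release blocked.")
        return tests.returncode
    # Safe to release: no manual regression pass needed.
    deploy = subprocess.run(["./deploy.sh"])  # hypothetical release script
    return deploy.returncode

if __name__ == "__main__":
    sys.exit(main())
```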

Lowered lead times

Third of all, this big business benefit ties the above together. Typically, customers expect to wait a long time after requesting new software or a new feature. In many cases, they may be willing to pay for it. Even if not, it’s clear that this particular request would have a lot of value in their eyes.

There’s an easy way to measure and quantify this. Lead time is the amount of time between a customer’s request and when that customer actually gets what he wants. While a lot of effort goes into producing this, ultimately this is all the customer really cares about. They want to know what’s in it for them.

Yet, it’s inevitable that your lead time will grow as your code base gets larger. The more parts you have, the more interconnections you have: with 10 parts there are up to 45 pairwise interactions, and with 100 parts, 4,950. That burden balloons as the system grows, unless each part has its own set of tests.

Tests keep your lead time linear. They’re an insurance policy. If customers do request a particular feature, you can give them what they want with a relatively high level of confidence, without breaking anything else. That new feature is contained behind a test suite, so you don’t need to worry your pretty little face about that.

Imagine a big company being as nimble as a startup. Most of the real-life examples have famously large test suites. In addition to massive test suites, Google even wrote and open-sourced its own testing framework.

If you can minimize lead time, that tends to put a lot of upward pressure on revenues (that’s a good thing). At this point, we aren’t just talking about containing costs, we’re talking about earning more revenue. Generating new business. To be blunt, this is what all of your costs are subtracted from when calculating profits.

The investment that pays for itself

As with any business decision, deciding to build an automated test suite is an investment. Depending on the details, it will take your geeks some time to build this automated test thingy. There is a lot more to automated testing than well-intentioned “software craftsmanship”. At the same time, you can be sure to reap significant bottom line benefits in an existing software product business.

Even if users are clamoring for new features, dedicating some time to creating automated test suites will eventually make your product more profitable. The trick is to focus your testing efforts on areas where you expect to get high pay-offs. Prioritize these efforts rigorously, to make sure that this test suite pays for itself each time you release.

Filed Under: manage risks, modularity, software Tagged With: automated testing, test driven development

Product Customer Fit Only Requires Must-Have Features

November 30, 2013 by LaunchTomorrow

When releasing a new product, the first step is to find product customer fit quickly. The minimum viable product (MVP) encompasses the essence of the Lean Startup ethos. An MVP helps you go through one cycle of the Build-Measure-Learn loop. Lean Startup author Eric Ries warns “Customers don’t care how long something takes to build. They only care if it serves their needs.”

<whistling wind>

You also need to choose a specific customer, in order to address a customer’s specific need. The main goal of an MVP is to learn about the customer and the market. You want to validate or reject your hypotheses. Once you know what your market wants, you are in a completely different position.

In order to do this, you need to:
1. decide on an ideal client
2. get access to such a client through marketing activity
3. identify their most pressing need or problem
4. get paid to solve it

At an early stage, you can solve the problem in any number of ways. It’s often easiest to provide a service. That way, you learn more about your clients. You gather useful information about how to deliver a solution, which can then be automated via software or other type of product.

Let’s say you want to build a software company helping people learn foreign languages. Entrepreneur Derek Sivers points out that you can get started by just scheduling a language teaching session. It’s very manual. It’s not automated at all.

At the same time, it’s an extremely high-bandwidth way to learn about your customers’ needs. Most importantly, it’s useful for them. Once you have some experience delivering this type of service, you have a much better chance of successfully prototyping a solution which addresses that customer’s need.

You identify your customer’s primary need. You learn what the customer thinks about it, and how they dream they could overcome the problem. You hear them vent about their frustration. You dig deep into specific aspects. You seek out what you can address.

You find out how your customer thinks about the problem. This is direct marketing gold. It helps you identify where to focus your efforts, so that you address what your customer finds the most vexing.

By focusing on the must-have features only, you release a product or a service that addresses that particular need. It’s rudimentary. Yet it works. You might not even require a line of code to prove your concept. Must-have features are essentially all related to specific changes you want to induce. Your target customer will not consider the product valuable otherwise.

In order to be successful, you really need to dig deep into one particular problem. You want to know the logical reasons why it’s attractive for your customer. You want to know why they would benefit from the product. You even want to find out more about any emotional benefits they might get from the product.

This approach corresponds to Ken Schwaber’s scrum value burndown charts. Develop the highest-value features first. If the product ends up being successful, then in fact these are the extremely valuable core product features. They are must-have features for this particular problem. Without them, you can’t claim you have a workable solution to that specific problem. Each one potentially generates massive incremental value in users’ eyes.

Often a few features that work reliably, packaged together well, are enough to interest the early adopters in a market. While they realize they may not be getting a complete solution to the problem, they like being first, having access to the inventors, and contributing feedback.

These must have features define the product. Often, just a handful of features suffice for your product type to be attractive, such as copy and paste in word processors, compared to the typewriter alternative. That’s the kind of gap you want to find.

If your minimum viable product is not attractive to your target segment, move on. Next. Another niche. Another need. Another customer type. The faster you discard the unattractive options, the faster you’ll achieve product-customer fit.

Filed Under: agile, experiments, marketing, modularity, software, startup Tagged With: faster time to market, identifying needs, minimum viable product, product market fit

The Power of the Right Process

November 25, 2013 by LaunchTomorrow

In this video, you will find out

  • How it’s possible to build a 100 mpg car in 3 months
  • How iterative product development with short cycles is much more powerful
  • Why agile, lean, and scrum work outside of software development

When developing software, nailing down the right process is a hard problem to solve. In order to help you out, I’ve recently published an article over at InfoQ on process debt. Enjoy!

Filed Under: agile, experiments, modularity

Dependency Inversion in the Business Of Software

October 3, 2013 by LaunchTomorrow

Assumptions can be expensive, particularly if they lead you to the wrong conclusions. One popular assumption is particularly expensive: that dependencies are minimal, or inconsequential. Interestingly enough, the business world is only starting to discover what we software developers have known for decades. Loosely coupled systems, whether in code or in business units, are much more effective than tightly coupled ones. Dependency inversion superpowers your organization to distribute work effectively.

John Hagel, the ex-McKinsey technology consultant, summarizes loose coupling for the business folk:

“Loosely coupled is an attribute of systems, referring to an approach to designing interfaces across modules to reduce the interdependencies across modules or components – in particular, reducing the risk that changes within one module will create unanticipated changes within other modules. This approach specifically seeks to increase flexibility in adding modules, replacing modules and changing operations within individual modules.”

He claims that loose coupling will reshape the business world over the coming decades.

What it means for businesses

The advantages of loose coupling, whether within technology or software companies, are multi-fold.

  • enables scaling, as it doesn’t require detailed specification and monitoring
  • can accommodate a larger number of specialized participants easily
  • allows individual parts to retain their integrity/autonomy
  • flexibility: quickly move in specialized operators based on customer needs (pull not push)
  • makes it easier to experiment strategically, in order to learn faster
  • create custom combinations of components, i.e. personalization, to maximize customer value

To realize all of these benefits, you need to be really careful about how you implement loose coupling in business systems. The key is having well-designed interfaces, i.e. decision rules that enable the people doing the work to make tradeoffs without requiring signoff from someone higher up. Everyone above them resolves paradoxes and inconsistencies, with each level of management working on inconsistencies at a higher level of abstraction.
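
The same principle is easiest to see in code. Here’s a minimal Python sketch of dependency inversion, with illustrative names: the high-level checkout policy depends only on an interface, so implementations can be swapped without touching the caller:

```python
# Dependency inversion sketch: high-level policy depends on an
# abstraction, never on a concrete module. Names are illustrative.
from typing import Protocol

class PaymentGateway(Protocol):
    def charge(self, amount_cents: int) -> bool: ...

class CardGateway:
    def charge(self, amount_cents: int) -> bool:
        # Real network call omitted; assume success for the sketch.
        return True

class InvoiceGateway:
    def charge(self, amount_cents: int) -> bool:
        print(f"Invoice issued for {amount_cents / 100:.2f}")
        return True

def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    # High-level policy: knows the interface, not the implementation.
    return "paid" if gateway.charge(amount_cents) else "failed"

print(checkout(CardGateway(), 4999))     # swap freely:
print(checkout(InvoiceGateway(), 4999))  # no change to checkout()
```

Notice that `checkout` never has to change when a new gateway appears. That is the “well designed interface” doing its job.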

IBM & the computer industry

In our industry, loose coupling has already reshaped our world, many times over. Going back as far as the IBM System/360 in the 1960s, it was clear that a loosely coupled technology spawned a loosely coupled ecosystem of third-party component developers. IBM then achieved dominance over the PC market with a loosely coupled system of hardware and software.

Apple’s Macintosh, IBM’s primary competitor in the PC space, went the other way: Apple tried to make everything proprietary, so that they owned everything. This shut down feedback loops and made them a closed-moat company.

Building on IBM’s success, Microsoft became a major player, at least financially, because their software had clearly defined interfaces, and because they welcomed cooperation with developers while charging users where it was appropriate.

Android as another example

More recently, Google’s Android system has slowly become more popular than Apple’s iOS, quite possibly because it was more open and loosely coupled.

Even though this openness has caused problems with reliability, viruses and enforcement of standards across specific hardware configurations, users prefer the benefits of using a loosely coupled product. Note that most users have no idea what the term “loose coupling” means.

Mergers introduce dependencies and complexity

What if you don’t want to grow a new product line, but just acquire it? Do you know how many mergers are typically financially successful?  According to Marina Apaydin, “Double-digit growth of merger and acquisition (M&A) deals in the 1980’s and 1990’s had fuelled extensive research, which produced a surprising result coined as ‘the success paradox’: that M&A popularity persisted in spite of the overwhelming rates of failure (60%-90% depending on the industry and the time period).”

Having modelled takeovers in Excel in a previous life as a financial analyst, I know how easy it is to assume that there will be “cost synergies”, without being aware of the operational implications of those synergies. Of course, there are many factors required to pull off a merger successfully, but making sure both sides of the company stay operational is pretty important. And yet, the simple fact of combining or taking apart components of companies assumes they are completely modular, self-contained, and loosely coupled.

Are they? Really?

Key takeaways

The ideal you’re aiming for is a truly self-managing, learning, and proactive organization at all levels. Within a broader business context, you get a lot of flexibility. You get the ability to change on a dime.

Filed Under: marketing, modularity
