How to create an actionable client profile
August 28, 2016 by LaunchTomorrow 5 Comments

Daniel Day-Lewis, method acting maestro

He's the first man ever to win three Best Actor Oscars. Daniel Day-Lewis, that is. For the entire filming of My Left Foot, he didn't leave his wheelchair, speak clearly, or even feed himself. For Last of the Mohicans he became a survivalist. He lived off the land. For In the Name of the Father, he lived in a prison cell. He starved himself. He asked the cast to insult and abuse him. When playing Abraham Lincoln, he even signed his texts "A." Daniel's style of acting, called method acting, expects him to become his character. To live in their skin. Which is a potent skill to have as a marketer. Why? [Read more…]

Dropbox explainer video? You're missing most of their Lean Startup story…
March 21, 2016 by LaunchTomorrow 1 Comment

"Not launching can be painful, but not learning can be fatal." – Drew Houston, founding CEO of Dropbox

That's how Drew Houston, the founding CEO of Dropbox, describes his team's approach. Dropbox, if you haven't heard of them, does file-sharing in the cloud. And they're famous for the Dropbox explainer video, which got them lots of eyeballs on Hacker News as the basis for explosive growth later. They give users access to the same files on all devices, regardless of where those files are used. After interviewing their geeky friends in MIT dorm rooms, the Dropbox founders wanted "write once, read anywhere" file management to apply to all files, not just a software developer's source code. They started working on a prototype to solve their own problem. When Dropbox launched, a number of cloud storage competitors with deep pockets already existed or soon followed: Google Drive, iCloud, AWS, Carbonite, to name a few. To get their product off the ground, Dropbox had to be different. And simple to understand.

Hypothesis: Latent demand for product concept X makes product development worthwhile
Test Type: Value hypothesis
Success Criteria: Able to get signups based on a description; simplicity captures interest
Traffic Source: Social media and online communities

If we build it, how will we acquire traffic? Who exactly will be interested? Are they interested enough to establish a relationship with us? What kind of conversion rates do we get?

Result: Pass. The team was able to acquire traffic that converted based on a description on a landing page. At the same time, they were building prototypes to assess the technical viability of the product idea. As this initial market test proved that some pent-up demand existed, the team dug further.

Step Back: This test established baselines, which could then be used to explore the product presentation further. It also helped them reach out and establish contact with their market independently of their immediate personal network, thus providing a slightly less biased signal. Using such a page also potentially allowed them to test a path to market, to locate their most rabid fans.

Hypothesis: Variation X meaningfully positions the product against customer alternatives

As part of their application to Y Combinator, Dropbox really wanted to get Paul Graham's attention. So they created a video aimed at attracting early adopters. The goal was to explain the product concept as a story. The core differentiator (the big idea, as they call it) was "write once, read anywhere".
Make changes to any copy, and all copies are updated with the same changes.

Test Type: Value or Growth hypothesis
Success Criteria: Signups > X or conversion rate > X% (established previously) — more accurately, in Dropbox's case it was to get accepted into the Y Combinator accelerator program.
Traffic Source: Hacker News, link to a landing page with a demo video targeted at early adopters

Result: Pass. Built a list of around 5,000 interested prospects. The problem description resonated strongly with their targeted early adopter audience.

Step Back: Moving to a video format helped get across the product's story more effectively. This made it more accessible to everyone. In and of itself, an explainer video is just a change in format–similar to how a bestselling book is treated as a candidate to become a blockbuster movie. Ultimately, a landing page that tests demand must get across the story well. The medium for telling that story is secondary to the relevance and quality of that story to the target audience. As Dropbox's initial target audience was quite technical, this explanation mapped to a number of tools and behaviors they already knew from software development. And they were more than happy to run with the "write once, read anywhere" concept. Arguably, this niche perceived other cloud storage as defective because it didn't support this kind of functionality. As a result, they needed little convincing and persuasion to hop on board.

Hypothesis: Paid search can be a profitable engine of growth for Dropbox
Test Type: Growth hypothesis
Success Criteria: Customer value > customer acquisition cost
Traffic Source: Paid search engine marketing

Dropbox experimented with using paid acquisition on a landing page. This is not an early-stage landing page MVP test. It's an attempt to figure out what will grow the company, not whether the product idea is attractive. They hired an experienced search engine marketer, who went out and made landing pages. On those pages, they hid the free option, replacing it with a free time-limited trial.

The AdWords interface showed incorrect campaign conversion numbers; the difference is computed here. [image:Dropbox] Total ad spend: approximately $3,000 in the image above.

In their words, here were the problems they faced:
- The most obvious keywords were expensive.
- Long-tail keywords had low volume.
- Hiding the free option was shady, confusing, and buggy.
- The conversion numbers on Google's dashboard were inaccurate.

Result: Fail. As their cost of acquisition at the time was at least $233 for a $99 product, the experiment to test paid acquisition as a profitable traffic source failed. Based on the economics of paid search, pay-per-click didn't look like a viable growth strategy for Dropbox.

Step Back: Even though PPC as a source of traffic didn't work for them, it solidified their confidence in their ability to retain customers. If people bought, their subscription retention rate was over 75%. In short, they had a great product their community loved, and they had product-market fit. In Drew Houston's words, this meant that "product-market fit cures the many sins of management." After this idea failed, Dropbox created the famous explainer video which went viral, thus proving that a viral engine of growth was better for them.
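To make the verdict on that paid search experiment concrete, here is a minimal back-of-the-envelope sketch in Python. The approximate $3,000 spend, the $233+ acquisition cost, and the $99 price come from the post; the paying-customer count is a hypothetical placeholder chosen to land near the reported figures.

```python
# Back-of-the-envelope check of a paid acquisition experiment.
# The ~$3,000 spend, the $233+ CAC, and the $99 price come from the post;
# the paying-customer count is a hypothetical placeholder chosen to match.

def acquisition_cost(ad_spend: float, paying_customers: int) -> float:
    """Customer acquisition cost: total spend divided by customers won."""
    return ad_spend / paying_customers

ad_spend = 3_000.00      # approximate total ad spend reported above
paying_customers = 12    # hypothetical: implies a CAC near the reported figure
annual_price = 99.00     # Dropbox's paid plan at the time

cac = acquisition_cost(ad_spend, paying_customers)
payback_years = cac / annual_price

print(f"CAC: ${cac:.2f} for a ${annual_price:.2f}/year product")
print(f"Years of retention needed just to recoup ad spend: {payback_years:.1f}")
# A CAC of ~$250 against $99/year means roughly 2.5 years to break even on
# the ad spend alone, before serving costs -- consistent with the Fail verdict.
```

The three-line arithmetic is the point: divide total spend by customers won, compare that to what a customer pays you, and the channel's viability is no longer a matter of opinion.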
Hypothesis: A simple value proposition resonates more with the target audience than a complex one
Test Type: Value hypothesis
Success Criteria: Conversion rate > initial conversion rate
Traffic Source: Paid search engine marketing

The simpler, the better!

Result: Pass. Simple and concise converts better, as does having a clear call-to-action for the next step.

Step Back: This test type is taken out of the traditional toolbox of conversion rate optimization (CRO). The idea is clear to the founders. They want to communicate it as concisely and effectively as possible to their target audience. Even if they move to a different traffic source later (as Dropbox did), a clear and powerful value proposition ensures a high conversion rate for all further marketing efforts, including free traffic sources. Moving into this kind of testing too early can be a type of premature optimization.

The Takeaways

The key tactic Dropbox used was to test both market and technical viability simultaneously. In addition, they ran a number of smaller tests. Each test checked a much smaller piece of the bigger puzzle. This required them to break down the overall vision into discrete tests which they built and ran.

Build -> Measure -> Learn

By running a series of experiments, Dropbox stayed with the ethos of MVP = experiment. Each cycle around the Build-Measure-Learn loop gave them greater insight. Each step they took tested something new about their target market and their product. As a result the product evolved very quickly, because the team gathered actionable yet counter-intuitive data. This helped them build a strong USP (unique selling proposition) in a crowded marketplace using technology that was theoretically possible but unproven. There is a lot more to a minimum viable product than just a beta software release. This post aims to make that clearer.

Empty pocket testing by Buffer
March 2, 2016 by LaunchTomorrow Leave a Comment

Tweets or posts on social media at a pre-defined schedule

The following are a number of Lean Startup validation case studies. Some will already be well known; some will be completely unknown. A lot of landing page testing has happened since The Lean Startup was being pieced together by Eric Ries. These are retrospective reconstructions of what happened, using landing pages as vehicles for minimum viable products. For example, Buffer did empty pocket testing with a landing page before building their product. Just for context, Buffer is a social media sharing tool, allowing you to publish tweets or social media posts on a pre-defined schedule. While you may have heard of some of the lean landing page case studies before, there is a lot of nuance in exactly what each test actually tested. They are typically not "traditional" A/B split tests, where a new variation of an ad or landing page is tested against the old one. To make this more explicit, I've reformatted the experiments to be lightweight. Lean Startup experiments are generally not about testing the landing page or the product, but the business ideas they represent.

Hypothesis: The target audience wants this product
Test Type: Value hypothesis; confirm the problem exists and people want a "hands off" way to tweet
Success Criteria: Emails gathered > 0
Traffic Source: Social media, word of mouth

[image:Buffer]

Result: Pass. A few people used it to give founder Joel Gascoigne their email.
He used these emails to get some useful feedback and initiate conversations with prospects.

Step Back: Potential users had left their email address at a random web page promising them help with this particular problem. This meant the idea itself was valuable, and there was potentially unmet demand for it. I would be careful about using the number of emails gathered as the only primary metric in all cases. For a consumer-facing product, it is probably good enough, assuming you have enough traffic. It would be better to also include some kind of target number of sessions, to make sure that you have enough "attempts to convert" for your metric to be meaningful.

Hypothesis: The target audience is willing to pay for the product
Test Type: Value hypothesis; confirm declared willingness to pay for "a way to automate their tweeting"
Success Criteria: People would click through the additional pricing page, and still leave their email.
Traffic Source: Social media, word of mouth

Pricing packages of this social media tool [image:Buffer]

Result: Pass. People were still clicking through this additional step. Joel was able to gather useful information about the suggested pricing plans, in order to figure out pricing.

Step Back: Potential users weren't put off by the blatant pitch, and still kept leaving an email address at the far end. What Joel hadn't tested was whether people would actually buy; however, he was able to complete a functional prototype within seven weeks, and he tested that hypothesis with a working system. He actually got his first paying customer four days after the "rough-around-the-edges" product launch. I just want to thank Joel for contributing this fantastic case study to the Lean Startup community. It's quite a well-known one, and I really wanted to cover it as an example of a line of thinking that's worth following.

Empty Pocket Testing

At its core, the Buffer landing page MVP test was meant to address a major question for founders: will "they" buy it, if I create it? Before getting caught up in theoretical debates about what is and isn't an MVP, Buffer just ran an experiment. It just happened to use a landing page to address a major risk factor for a new product business. In this particular case, checking whether early adopters had the budget and were willing to spend it significantly de-risked spending more time and money on the solution. Even though prospects were only asked hypothetically, this helped validate the founders' sense that their target early adopters would be willing to pay.

People are willing to pay out of their pockets for this tool. [image:Dan Moyle]

If you'd like to see a number of case studies like the above, grab Launch Tomorrow. I'm updating it in an upcoming version with a lot of in-depth experiments that have been run.

Lean Startup 101
January 21, 2016 by LaunchTomorrow Leave a Comment

Focused learning is the fastest way to validate a product idea.

Lean Startup is based on the scientific process, albeit more business-focused: learning fast when you know little. It's the fastest known way to validate a product idea. Consider this a Lean Startup 101 introduction to how to use it in your business. The first step of choosing a hypothesis to test is to map out your business with canvases.
Use the Business Model Canvas for the big picture, and any of the following three to figure out problem-solution fit:
- Lean Canvas for the product
- Javelin Validation Board
- Value Proposition Canvas

After you've settled on a vision you're happy to start testing, you prioritize what will give you the strongest boost in confidence. You try to get data confirming it's true. Let's say you've now settled on a specific assumption you'd like to test. Which means you need to map a fuzzy concept down to one metric.

Choose a Metric

Choosing this one metric is the next step (and one which is largely driven by intuition). Which number represents what you want to change? Or captures what you want to monitor? This requires some analysis, but also some subjective skill. For example, do you want to increase sales or profits? There aren't really clear answers, because it depends on where you are and what you want to achieve. A venture-funded startup interested in growth at all costs is happy to be unprofitable, as long as its growth is up and to the right. A bootstrapper will be monitoring cash flows and profits like a hawk. If you are just exploring an idea and a new market, gather data to confirm that the external environment matches your assumptions. For example, one startup I worked with was testing an idea about lending and borrowing DIY tools between strangers. They wanted to build a peer-to-peer lending platform for these tools. They found that consumers were much happier to lend out tools to strangers than to ask to borrow them. Once you have problem-solution fit, focus on metrics that you can influence with your actions. Choose one that's actionable. Dave McClure's pirate metrics are useful here as a high-level starting point: Acquisition, Activation, Retention, Revenue, Referral.

Choose a success signpost

For this particular metric, figure out what "success" means to you–before you run the experiment. This is a common point where founders trip up. What's "meaningful"? Conversion rate hurdles are a good example. If I choose a conversion rate that's too high, and the experiment doesn't pass, I'll have to give up on my idea. If I set it too low, I'll be building a business around a bad idea and end up with a zombie product. In this case, a meaningful conversion rate is one which gives you a customer acquisition cost that is less than the expected long-term value of a customer (see the sketch at the end of this post). Here's an analogous situation I've seen offline. One team I worked with was gung-ho about an idea for a fitness app. They wanted 75% of the people they spoke with to confirm that the problem they wanted to solve exists. After speaking with 20 people, it turned out 30% confirmed that they had this problem. In this particular case, they'd been too aggressive in setting the success metric, as they would still be able to build a good business around 30% of a consumer market. Choosing a success metric is very much an art, one that requires a good sense of context and an awareness of other data driving the success of your startup. A great way to do this, by the way, is using the Grasshopper Herder experiment template. This is also available as a bonus for Launch Tomorrow buyers.

Intuition + imagination: the secret sauce

So even the scientific process requires intuition–to choose and formulate hypotheses with practical implications. What's more, these come from a worthwhile vision, based on what's possible for you and your founding team.
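As promised above, here is a minimal sketch of how such a conversion rate hurdle can be derived. Every input number (cost per visit, expected customer value, the ten-conversion rule of thumb) is a hypothetical illustration; only the decision rule itself, acquisition cost below long-term customer value, comes from this post.

```python
# Derive a "meaningful" conversion rate hurdle from unit economics.
# All input numbers are hypothetical; only the decision rule -- customer
# acquisition cost (CAC) below long-term value (LTV) -- is from the post.

cost_per_visit = 1.50    # hypothetical cost of one landing page session
expected_ltv = 120.00    # hypothetical long-term value of one customer

# CAC = cost_per_visit / conversion_rate, so CAC < LTV implies:
break_even_rate = cost_per_visit / expected_ltv
print(f"Conversion rate must exceed {break_even_rate:.2%} to be viable")

# Sanity check on sample size: make sure you have enough "attempts to
# convert". A crude rule of thumb (not a substitute for a proper power
# calculation): wait for enough sessions to expect ~10 conversions.
min_conversions = 10
min_sessions = min_conversions / break_even_rate
print(f"Collect at least {min_sessions:.0f} sessions before judging the test")
```

The same arithmetic also covers the "target number of sessions" caveat from the Buffer case study: a hurdle rate only means something once you've had enough attempts to convert.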
The 80/20 of Lean Startup
January 14, 2016 by LaunchTomorrow 1 Comment

80% of results from 20% of the resources

Remember the 80/20 rule? You get 80% of the results from 20% of the inputs. Well, in my opinion, 80% of the results from using Lean Startup come from running experiments. Meticulously. First, focus on the product; prove your value hypotheses. Then, focus on growth; prove your growth hypotheses. If you aren't running experiments, you're not learning. You're doing what Ashley Aitken from Australia calls "Faux Lean Startup". For example, think back to how tech startups launched in the dotcom days:

Old Traditional Way              Faux Lean Startup
Business Plan                    Lean Canvas
Market Research                  Validate solution
Waterfall Project Management     "Agile"
Alpha -> Beta -> v1.0            Alpha -> MVP -> v1.0
Private Beta testing             Early Adopters
Launch Product                   Launch Product
Get market feedback              Get market feedback

Faux Lean Startup means you replaced some of the dotcom-era tools with "lean approved" tools. If so, you're missing the point of Lean Startup. Lean Startup isn't only about customer development, personas, and interviewing customers. UXers had been doing that for decades before Lean Startup existed. In my mind, the classic example of this is using "MVP" as a synonym for a beta product. Beta software means that you still expect to have missed some bugs in the software. Your unknowns are technical. An MVP is a completely different animal than a beta product. It's created to test one business assumption at a time. With MVPs, you test unknowns on the business side—not technical ones. It's easy to fall back into the old pattern of avoiding what you don't want to hear. It's only human. You need to be deliberate about learning, in order to really get valuable feedback. A true Lean Startup implementation requires you to run a long series of experiments (at least one per week). Minimize your own biases. Maximize learning when you know the least. See things how they are, based on data you gather from the market. That's the 80/20 of Lean Startup.