An important part of Lean Startup is running experiments. Experiments turn assumptions into learnings.
Learn and Confirm experiments
There are two types of experiments: Learn and Confirm.
Learn experiments, also called Research or Generative Experiments, are used to learn more about a certain topic. They create new assumptions.
Their counterpart is the Confirm experiment. With Confirm experiments, you check whether an existing assumption is valid or invalid.
Experiments are successful when they create new learnings (in the case of a Learn experiment), or when they validate or invalidate an existing assumption (in the case of a Confirm experiment).
Yes, invalidating an assumption is also a success! You have learned a valuable lesson when you understand what is not true. You only fail if you did not learn anything (Learn experiments) or if the result of your Confirm experiment is inconclusive. You either ran the wrong experiment or ran the experiment wrong, and you wasted valuable time and resources in doing so.
Stage Relevant Experiments
With every experiment that you run, it is important to ask: what is the riskiest assumption to (in)validate, or what do I need to learn first?
Usually you start by learning more about a certain subject and then run one or more Confirm experiments to check whether the resulting assumptions are valid or not.
We divided these focus areas into four stages, so that in each stage you understand what to focus on. We call these four stages: Problem, Solution, Revenue and Scale, after the thing you have to validate before moving on to the next stage.
In the Problem stage you focus on the customer segment and their problems.
In the Solution stage you try to understand what your customer segment is trying to get done (the JTBD) and what solution would solve their problem or help them fulfill their JTBD.
In the Revenue stage you focus on the reasons to buy your solution (the Value Proposition) and how you can earn money, or generate revenue.
Last but not least, you focus on reaching your customers and the growth engine in the Scale stage.
Based on these stages and focus areas, we created the NEXT Canvas, so you can group your stage-relevant assumptions and always understand what to focus on next.
But although the theory is rather simple, it is quite hard to come up with good experiments that fit each of these stages.
Experiment design is hard, especially since most of us are entrepreneurs and not trained scientists. So instead of talking too much about what good experiment design is, let’s dive into some concrete examples:
Experiments to validate the problem
The main question every (corporate) startup needs to answer first is: Are you really solving a real problem for actual people?
We often tell founders that you want to find the customer segment with the biggest pain first: one that is aware of its problem and is searching for a solution.
As an example, we tell the story that you want to find the person who fell off her bike (we are from Amsterdam, remember) and has broken her arm. Her bone is sticking out and she is screaming in pain. She will do almost anything and pay almost anything to get her ‘problem’ solved.
You want to build a painkiller, not a vitamin. But how do you validate that?
The customer interview is almost always the best way to start. It is THE tool to learn more about certain topics and an easy way to validate assumptions. If we could only recommend one experiment, it would always be to get out of the building and talk to your customers. Your customers have the knowledge, you only have assumptions.
Customer interviews are not only helpful when you are validating your problem, but also a great experiment to test your business idea, no matter what stage your startup is in.
We previously wrote about how to run good customer interviews, including how to set them up and how to analyze them. To do a real deep dive into customer interviews, we recommend reading The Mom Test by Rob Fitzpatrick.
Desk research gathers and interprets available information from external sources. In other words, you are using search engines, libraries, or other sources of information to discover what others already learned and what is out there relating to your idea.
This experiment is also used to get a general overview of the competitors and alternatives that exist related to your idea, of customer behavior towards a problem, and of technologies that can make your idea possible (or impossible).
Picnic in the Graveyard
With the picnic in the graveyard experiment you interview founders or team members from failed startups, to learn what they learned on their journey, what they did wrong, and why they decided to stop. The most valuable lessons come from people who have tried before and failed. Do they have any lessons that you can use to prevent failure?
Experiments to validate your solution
Once you have validated that there is an actual problem, it is wise to first test the interest in a solution before running off and building one. We usually do another round of customer interviews first, but after that, there are plenty of alternatives:
A landing page is ideal for testing your value proposition and validating interest. There are hundreds of ready-made templates available, and plenty of services that make it easy to create a landing page if even a template is too technical for you. What is important when using a landing page is that you ask for some kind of currency. An email address or signup is enough, but people have to ‘pay’ you something to really show their interest. Visitors and pageviews are not enough.
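To make that ‘currency’ concrete, here is a minimal sketch of how a landing page could record email signups instead of just counting visitors. It is only an illustration; the form id and the /api/signups endpoint are made up for this example, not part of any particular template or service.

```typescript
// Minimal landing-page signup handler (illustrative; "#signup-form" and "/api/signups" are hypothetical).
// The goal is to collect a small "payment" of interest (an email address), not just a pageview.
const form = document.querySelector<HTMLFormElement>("#signup-form");

if (form) {
  form.addEventListener("submit", async (event) => {
    event.preventDefault();
    const email = (form.elements.namedItem("email") as HTMLInputElement).value;

    // Store the signup somewhere you can count later.
    await fetch("/api/signups", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ email, source: "landing-page-v1" }),
    });

    // Thank the visitor: you now have a measurable signal of interest.
    form.innerHTML = "<p>Thanks! We will let you know when we launch.</p>";
  });
}
```

Comparing signups against visitors gives you a conversion rate, which says far more about real interest than pageviews alone.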
Dropbox got famous with their explainer video MVP. When your product is too complex to easily explain or show with a prototype, an explainer video might be ideal. It was hard for the Dropbox team to show the magic of Dropbox before Dropbox existed. With an explainer video, again asking for some kind of currency, they were able to validate the interest in Dropbox. They got overwhelmed with beta signups.
The most important thing when using Lean Startup experiments to validate your solution is to find out whether your solution solves the customer’s problem and whether they are actually looking for a solution.
A couple of examples to test your business idea are:
Competitor Usability Testing
Competitor usability testing is observing your customers while they use a competitor’s products or services, to gain insights into the mindset of the user, common issues, and potential improvements for your own product.
It does not have to be a competitor, but could also be an alternative solution. If your customer is currently solving their problem with Excel, ask her to walk you through her way of working so you can learn what she is already doing.
The paper prototype is a way of quickly testing your solution by using paper drawings, flyers or even PowerPoint presentations. By mimicking the user interaction or the process of the solution, you learn whether this is what the customer is looking for. Does it solve their problem?
When you ask a developer to build your first version, it often takes months to finish. The right frameworks need to be used and the right choices made so you don’t have to refactor in the future. But when you ask a developer for a prototype or proof-of-concept, it can often be done in two weeks. We love hacking something together, especially in the early stages. I’m 100% sure that the code you write in the first few years will be completely rewritten later on, even if you put a lot of time and effort into your lines.
Get that prototype hacked together with as few resources and in as little time as possible to test your solution. Only when your customers and users try your product do you really learn what they want. If it doesn’t scale, it is often a good experiment. Make use of the Concierge model or Wizard of Oz (explained later in this blogpost) to remove hard-to-build components, use web services to outsource everything that is not your core business, and get that prototype out in two weeks.
Fake button / Smoke screens
Instead of building a feature and then seeing if people are actually interested, why not add a button to your product and see how many people click? When we first wanted to validate whether the users of our portfolio company Study Credits were interested in different skins for their school agenda, we added a simple link saying ‘Change skin’. We linked the event to Mixpanel and, after a click on the link, displayed a page explaining that we were testing a new feature. One week later we had our answer: a lot of teens wanted to change their skin. Time for the next experiment: see if and how much they were willing to pay.
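If you are wondering what wiring up such a fake button looks like, here is a rough sketch using the mixpanel-browser library. The link id, event name, redirect URL and token are placeholders for illustration; this is not the actual Study Credits code.

```typescript
import mixpanel from "mixpanel-browser";

// Placeholder token; use your own Mixpanel project token.
mixpanel.init("YOUR_MIXPANEL_TOKEN");

// The 'Change skin' link exists, the feature behind it does not (yet).
const changeSkinLink = document.querySelector<HTMLAnchorElement>("#change-skin-link");

if (changeSkinLink) {
  changeSkinLink.addEventListener("click", (event) => {
    event.preventDefault();

    // Count the click: this is all the experiment needs to measure.
    mixpanel.track("change_skin_clicked");

    // Be honest with the user and explain that the feature is still being tested.
    window.location.href = "/testing-new-feature";
  });
}
```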
Last week I spoke with a startup that is building the next generation newsreader. They put six weeks into developing a ‘breaking news’ section, from design and UX to real-time push notifications, only to discover that the feeds they were serving never offered any breaking news and their users didn’t want breaking news from this newsreader. Their users would rather use CNN or its equivalents for that. They could have prevented six weeks of work by adding a fake opt-in, a Wizard of Oz model sending push notifications, or by first interviewing their users.
So what is that Concierge model I mentioned in the prototype example? It is doing by hand the things that an engineer might automate, in order to learn more about a possible solution. When Peerby wanted to test their rental model called Peerby Go, they did not build a whole new marketplace and rental system. It was a simple landing page where you could type in what you wanted to rent and request it. That request was emailed to one of the employees, and that employee would pick up the phone, find the item either in their own peer-to-peer Peerby system or somewhere in a rental shop, negotiate a price, drive to the location to pick the item up, and bring it to the customer. It’s like having your own… concierge!
Note that because the customer knows someone is taking personal care of the job, the value proposition is significantly higher than that of the automated product you will probably build to scale.
The Concierge model experiment is the Learn experiment variant of the Wizard of Oz experiment, which is used to confirm the lessons that you learned running a Concierge model. By manually operating your service together with your customer, you learn a lot about what works and what doesn’t. If you then turn your solution into a Wizard of Oz fake automation experiment, you can confirm whether your assumptions still hold when you have “automated” your solution.
Wizard of Oz
The Wizard of Oz model is comparable to the Concierge model. You use manual labor (your own hands, an intern, Mechanical Turk) to ‘automate’ tasks in your backend that are for now too costly to build (in time or resources). For the customer, it seems to happen automatically, but you are actually doing it by hand. The difference with the Concierge model is that the customer does not know your Wizard of Oz test is done manually, while with the Concierge model the customer is aware of the fact that someone is personally taking care of their needs.
The Wizard of Oz test tries to simulate the real-world implementation of the product as much as possible and has, therefore, the same value proposition. A Wizard of Oz test is meant more to validate a solution than to come up with the best way to implement a solution, something you can better test with a Concierge model.
At one of our portfolio companies we are currently testing a chatbot that is not actually a chatbot, but one of the interns sending the messages. It would have been too costly to build a chatbot and then see how effective the feature would be. Instead, one of the interns responds to the messages by hand. But don’t tell anyone! Just like the real Wizard of Oz, this is a secret.
Read more about the difference between the Concierge and a Wizard of Oz experiment.
The Wizard of Oz experiment is usually used to confirm assumptions you have about your solution before building the real deal. You can generate these assumptions via a Concierge model or by doing Solution Interviews for example.
Experiments to validate payment
Kickstarter is the ideal example of the pre-order. Most used for hardware, books, and music, the pre-order is perfect when the product is too hard to build as a simple version. When writing a book, it is easy to start with a landing page and give away one or two chapters after the visitor pays with their email address. After that, a pre-order works great to see how easily you can sell a certain number of books.
In the fake button example, we wrote about the Study Credits test. We wanted to see how many people were interested in another skin for their school agenda. After validating the demand for that feature, we wanted to know how much they were willing to pay. We created a simple screen with five screenshots of different skins, and below each screenshot was a buy button with a price. Instead of building the whole payment system, we again showed a message explaining that we were testing a feature after the user clicked the buy button. We added three different prices for different skins to see if the price mattered. We tracked the clicks with Mixpanel, and two weeks later we had learned that probably not enough people were willing to buy skins to make it a viable revenue model. We could have spent a month building it only to discover that it would not generate enough revenue; now we learned the same after a day’s work!
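A rough sketch of how such a price test could be wired up, again with mixpanel-browser. The class name, data attributes, prices and redirect URL are placeholders we made up for this illustration, not the original Study Credits implementation.

```typescript
import mixpanel from "mixpanel-browser";

mixpanel.init("YOUR_MIXPANEL_TOKEN");

// Each fake buy button carries the skin and the price variant being tested, e.g.
// <button class="buy-skin" data-skin="dark" data-price="0.99">Buy for €0.99</button>
document.querySelectorAll<HTMLButtonElement>(".buy-skin").forEach((button) => {
  button.addEventListener("click", () => {
    // Record which skin and which price point the user tried to buy.
    mixpanel.track("skin_buy_clicked", {
      skin: button.dataset.skin,
      price: Number(button.dataset.price),
    });

    // There is no payment system yet; show the "we are testing this feature" message instead.
    window.location.href = "/testing-new-feature";
  });
});
```

Comparing click-through per price point tells you whether price sensitivity matters, long before you build any payment flow.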
Your experiment can be useless if you don’t set it up right. We wrote about running a good experiment in our Continuous Experimenting blogpost.
PS. Yes, you counted right, this blogpost has more than 10 examples. We originally wrote it with 10 examples in mind, but expanded the post a bit to include the Concierge and Wizard of Oz examples and to talk about customer interviews. Good catch!