With every experiment that you run, it is important to ask: what is the riskiest assumption, and what do I want to learn? But although the theory is rather simple, it is quite hard to come up with good experiments. Experiment design is hard, especially since most of us are entrepreneurs, not trained scientists. So instead of talking too much about what good experiment design is, let’s dive into some concrete examples:
To validate the problem
Are you really trying to solve a real problem for actual people? We often tell entrepreneurs that you want to find the person who fell off his bike (we are from Amsterdam, remember) and has broken his arm. His bone is sticking out and he is screaming in pain. That guy will do anything and pay anything to get his problem solved. You want to build a painkiller, not a vitamin. But how do you validate that?
The customer interview is almost always the best way to find out if you are solving a real problem. We previously wrote about how to run effective customer interviews, so we won’t get into that again.
To validate interest
Once you have validated that there is an actual problem, it is wise to test interest in a solution before running off and building one. We usually do another round of customer interviews first, but after that, there are plenty of alternatives:
A landing page is ideal for testing your value proposition and validating interest. There are hundreds of ready-made templates available, and plenty of services that let you create a landing page easily if even a template is too technical for you. What is important when using a landing page is that you ask for some kind of currency. An email address or signup is enough, but people have to ‘pay’ you something to really show their interest. Visitors and pageviews are not enough.
Dropbox got famous with its explainer video MVP. When your product is too complex to easily explain or show as a prototype, an explainer video might be ideal. It was hard for the Dropbox founders to show the magic of Dropbox before Dropbox existed. With an explainer video, and again asking for some kind of currency, they were able to validate the interest in Dropbox. They were overwhelmed with beta signups.
Kickstarter is the best-known example of pre-orders. Most used for hardware, books, and music, the pre-order is perfect when the product is too hard to build as a simple version. When writing a book, it is easy to start with a landing page and give away one or two chapters after the visitor pays with their email address. After that, a pre-order works great to see how easily you can sell a certain number of books.
To validate your solution
When you ask a developer to build your first version, it often takes months to finish. The right frameworks need to be chosen and the right decisions made so you don’t have to refactor in the future. But when you ask a developer for a prototype or proof of concept, it can often be done in two weeks. We love hacking something together, especially in the early stages. I’m 100% sure that the code you write in the first few years will be completely rewritten later on, even if you put a lot of time and effort into your lines.
Get that prototype hacked together with as few resources and in as little time as possible to test your solution. Only when your customers and users try your product do you really learn what they want. If it doesn’t scale, it is often a good experiment. Make use of the concierge model or Wizard of Oz (both explained later in this blogpost) to remove hard-to-build components, use web services to outsource everything that is not your core business, and get that prototype out in two weeks.
Fake button / Smoke screens
Instead of building a feature and then seeing if people are actually interested, why not add a button to your product and see how many people click? When we first wanted to validate whether the users of our portfolio company Study Credits were interested in different skins for their school agenda, we added a simple link saying ‘Change skin’. We linked the event to Mixpanel and, after a click on the link, displayed a page explaining that we were testing a new feature. One week later we had our answer: a lot of teens wanted to change their skin. Time for the next experiment: see if and how much they were willing to pay.
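The analysis behind a fake-button test is just counting: how many users saw the link versus how many clicked it. A minimal sketch of that calculation (the event names and numbers here are hypothetical, standing in for the events a tool like Mixpanel would record — not actual Study Credits data):

```python
# Hypothetical event log: (user_id, event_name) pairs, standing in for
# what an analytics tool like Mixpanel would record for you.
events = [
    ("u1", "agenda_viewed"), ("u1", "change_skin_clicked"),
    ("u2", "agenda_viewed"),
    ("u3", "agenda_viewed"), ("u3", "change_skin_clicked"),
    ("u4", "agenda_viewed"),
]

def click_through_rate(events, shown_event, click_event):
    """Fraction of users who saw the fake button and then clicked it."""
    shown = {user for user, name in events if name == shown_event}
    clicked = {user for user, name in events if name == click_event}
    return len(clicked & shown) / len(shown)

rate = click_through_rate(events, "agenda_viewed", "change_skin_clicked")
print(f"Click-through rate: {rate:.0%}")  # 2 of 4 users clicked: 50%
```

Counting unique users rather than raw clicks matters here: one enthusiastic teen clicking ten times tells you much less than ten different teens clicking once.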
Last week I spoke with a startup that is building the next-generation newsreader. They put six weeks into developing a ‘breaking news’ section, from design and UX to real-time push notifications, only to discover that the feeds they were serving never offered any breaking news and their users didn’t want breaking news from this newsreader. Their users preferred CNN or its equivalents for that. They could have prevented six weeks of work with a fake opt-in, a Wizard of Oz model sending push notifications, or by first interviewing their users.
Concierge
So what is that concierge model I mentioned in the prototype example? It is doing things by hand that an engineer might automate, in order to learn more about a possible solution. When Peerby wanted to test their rental model, Peerby Go, they did not build a whole new marketplace and rental system. It was a simple landing page where you could type in what you wanted to rent and request it. That request was emailed to one of the employees, who would pick up the phone, find the item either in their own peer-to-peer Peerby system or in a rental shop, negotiate a price, drive to the location to pick the item up, and bring it to the customer. It’s like having your own… concierge!
Note that because the customer knows someone is taking personal care of the job, the value proposition is significantly higher than that of the automated product you will probably build to scale.
Wizard of Oz
The Wizard of Oz model is comparable to the Concierge model. You use manual labor (your own hands, an intern, Mechanical Turk) to ‘automate’ tasks in your backend that are for now too costly to build (in time or resources). To the customer it seems to happen automatically, but you are actually doing it by hand. The difference from the concierge model is that the customer does not know your Wizard of Oz test is done manually, while with the concierge model the customer is aware that someone is personally taking care of their needs.
The Wizard of Oz test tries to simulate the real-world implementation of the product as closely as possible and therefore has the same value proposition. A Wizard of Oz test is meant more to validate a solution than to find the best way to implement one, something you can better test with a Concierge model.
At one of our portfolio companies we are currently testing a chatbot that is not actually a chatbot, but one of the interns sending the messages. It would have been too costly to build a chatbot and then see how effective the feature would be. Instead, one of the interns responds to the messages by hand. But don’t tell anyone! Just like the real Wizard of Oz, this is a secret.
To validate payment
In the fake button example, we wrote about the Study Credits test. We wanted to see how many people were interested in another skin for their school agenda. After validating the demand for that feature, we wanted to know how much they were willing to pay. We created a simple screen with five screenshots of different skins, and below each screenshot was a buy button with a price. Instead of building the whole payment system, we again showed a message explaining that we were testing a feature after the user clicked the buy button. We used three different prices across the skins to see whether price mattered. We tracked the clicks with Mixpanel, and two weeks later we had learned that probably not enough people were willing to buy skins to make it a viable revenue model. We could have spent a month building it only to discover that it would not generate enough revenue; instead, we learned the same after a day’s work!
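The pricing version of the test is the same counting exercise, done per price point: compare the conversion rate and the expected revenue per visitor at each price. A sketch of that comparison (the prices and click counts below are invented for illustration, not the real Study Credits numbers):

```python
# Hypothetical results of a fake-buy-button pricing test: for each price
# point, how many users saw the screen and how many clicked "buy".
# These figures are made up for illustration, not real Study Credits data.
price_test = {
    0.99: {"shown": 400, "clicked": 28},
    1.99: {"shown": 400, "clicked": 11},
    2.99: {"shown": 400, "clicked": 4},
}

def conversion(shown, clicked):
    """Fraction of users shown the price who clicked the buy button."""
    return clicked / shown

for price, counts in sorted(price_test.items()):
    rate = conversion(counts["shown"], counts["clicked"])
    revenue_per_visitor = rate * price
    print(f"EUR {price:.2f}: {rate:.1%} conversion, "
          f"EUR {revenue_per_visitor:.3f} expected revenue per visitor")
```

Multiplying conversion by price is the key step: a lower price often wins on clicks but loses on revenue per visitor, and this is exactly the kind of conclusion a day of fake-button testing can give you before any payment system exists.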
Your experiment can be useless if you don’t set it up right. We wrote about running a good experiment in our Continuous Experimenting blogpost.
GroundControl helps you set up great experiments and focus on what is really the riskiest assumption. Our content database includes these 10 experiments and more, and the platform acts as your coach. Why don’t you give it a try?