A while ago I started reading the book The Lean Startup – How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses by Eric Ries. Some of my colleagues had already shared some of the insights from the Lean Startup, and I first heard about it back in November 2010 while attending a workshop with Kent Beck (see a write-up here).

I wasn’t aware that I had already read about some of the book’s contents from a different perspective. Back in 2009, Michael Bolton and James Bach reported on testing an internet application which, from their perspective, was more than buggy. Even continuous deployment didn’t help much there. That company, IMVU, is mentioned throughout Eric’s book. So I became curious about the connection between Lean Startups and testing. Despite a chapter with “Test” in its name, I wasn’t surprised that I had to make that connection myself.

Since Phil Kirmham asked for my perspective when he saw I was reading the book, I wanted to share the insights I got from it, and my vision for testing in a Lean Startup.

Build, Measure, Learn

The main focus of the Lean Startup method lies in the Build, Measure, Learn loop. The approach to building your product is based on the concept of validated learning, which also drives the measure and learn parts. Once the first version is released, we can decide whether or not to change our strategy based on the hypotheses we formulated before writing a single line of code. If the actual numbers don’t match our projections, we have to find a new direction for our technology; otherwise we may continue our current approach.

Validated Learning

At the core of Ries’ approach is validated learning. Instead of hoping that you hit the right market, he challenges the reader to formulate the right hypotheses – a growth hypothesis for your future market, and a value hypothesis. The value hypothesis puts to numbers what you expect to happen once you release the first version to the market. How much are customers going to pay? How much value can we generate with this product – within the first month, say? The growth hypothesis puts to numbers the growth expectation within our market. Are we going to get a thousand new sign-ups with our next set of features? Or are we going for a million new downloads?

Of course, putting down our hypotheses before we build a single line of code is just one part of the mix. We need to come up with actionable metrics so that we can track what actually happens, and whether or not the market we picked first is the right one to grow in over the long term.

By actionable metrics Ries means metrics that lead you directly to the actions you need to take. Instead of tracking the number of downloads of your products, or the number of accounts that you have on your system, you should track the percentages. Dividing your customer base into so-called cohorts helps you take action when you compare your expectations against the actual numbers you get from your tracking data.

At a client I am currently involved in a startup product. We came up with actionable metrics for our customer base. It’s a mobile phone application. The things we want to track are how many downloads the app has, how many users install it and start it for the first time, how many people actually create an account in the service, and how many people connect to a third-party network with our application. This customer segmentation will tell us whether or not our growth theory holds. If only 1% of the people sign up for the service, we are truly on the wrong path.
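The funnel described above could be computed as in the following minimal sketch. The step names mirror the events we want to track; the counts are invented for illustration, since the real numbers would come from the app’s analytics backend.

```python
# Hypothetical funnel numbers for the mobile app described above;
# the step names match our tracked events, the counts are invented.
funnel = [
    ("downloads", 20000),
    ("installed and started", 14000),
    ("created an account", 2800),
    ("connected third-party network", 900),
]

def conversion_rates(funnel):
    """Percentage of top-of-funnel users reaching each step."""
    top = funnel[0][1]
    return {step: round(100.0 * count / top, 1) for step, count in funnel}

for step, pct in conversion_rates(funnel).items():
    print(f"{step:32s} {pct:5.1f}%")
```

Note that the output is percentages, not raw counts: 2,800 accounts sounds fine on its own, but seeing it as 14% of downloads is what makes the metric actionable.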

Pivot or persevere

Both hypotheses form the foundation for validated learning. By noting down our expectations before releasing or building anything, and by tracking the numbers after the release, we can decide whether or not we are on the right track. Once our expected numbers and our actual numbers match, we should try to raise the next question, and keep our current strategy. If the numbers disagree with our previous expectation, we know that we are at a dead end, and need to change direction. The first decision is called to persevere, the second to pivot.
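The decision rule above can be sketched as a toy function. The tolerance threshold and the hypothesis numbers are my own assumptions for illustration – the book does not prescribe a formula, and in practice this judgment involves far more context than a single comparison.

```python
# A toy pivot-or-persevere check: persevere if every tracked metric
# reaches at least (1 - tolerance) of its hypothesized value.
# The 20% tolerance is an arbitrary assumption for illustration.
def pivot_or_persevere(expected, actual, tolerance=0.2):
    for metric, target in expected.items():
        if actual.get(metric, 0) < (1 - tolerance) * target:
            return "pivot"
    return "persevere"

growth_hypothesis = {"new_signups": 1000, "downloads": 20000}
tracked = {"new_signups": 950, "downloads": 19000}
print(pivot_or_persevere(growth_hypothesis, tracked))  # prints "persevere"
```

The point is not the arithmetic but the discipline: the expected numbers are written down before the release, so the comparison afterwards cannot be rationalized away.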

For a pivot we want to preserve what we have built so far to some degree, but we also want to change our course. Ries describes several such changes in strategy for the product that you are building. You might want to change to a different customer base, or you might want to concentrate on one particular segment. You may want to change direction based on the feedback that you received.

Ries cites several products that initially focused on a different market segment, but made several such pivots in order to become really awesome. Once you have really found out what your customers need, you can follow this path to conquer your market.

Testers in a Lean Startup

Now, what would it mean to work as a tester in a Lean Startup? What would I pursue in order to serve our company?

First of all, I am convinced that a checking tester would not help the situation much. There are no test scripts that can help you find out more about the market or validate the hypotheses about your future business. The same goes for fake-testing, but then I don’t think that fake-testing serves any purpose at all.

A real tester would help to identify underlying assumptions. These assumptions guide your validated learning. A good tester is trained to challenge assumptions well. He or she knows the right questions to ask in order to validate the assumptions. Based on these questions, you can drive forward the motivation for the guiding metrics as well as build the growth and value hypotheses.

An example would be helpful right now. At the same client as before, we sat together in a meeting to discuss the course of our product backlog. We started to challenge different assumptions. One thing was to challenge the need for a login functionality. So far we had followed the path of building a mandatory login and registration function for the service. But challenging the underlying assumption – whether such a login function was really needed in the first version – led to the conclusion that we might lose necessary feedback in our customer cohorts: many customers would probably not sign up for the service at all. With the function, we would end up with biased metrics, and we could draw the wrong conclusions from them, leading us to the wrong decision to persevere or to the wrong pivots in the future.

Another point where testers can bring great value to Lean Startups lies in our testing abilities. We need to take into account the current target of our product, and be aware of the degree to which we need to test the product. It’s actually OK to test for less in a release of the minimum viable product, since we have several questions to answer before our next decision point about persevering or pivoting. We should also be aware of functions that do not yet exist, but are on the road map for future versions.


I think the rise of methods like Kanban and the Lean Startup shows once again that the time of fake-testing and checking testers is coming to an end. I am convinced that more and more actual testers will spread the word about challenging assumptions, and ask the right questions about the future of our markets.

Still, we will have to be patient while testing in a Lean Startup. We need to know the degree of completeness expected of this version of our product, and what the right questions to ask are. I think we are on a good path here, though we will probably have to convince the startups out there of the value that we can bring them. James Bach’s and Michael Bolton’s critique of IMVU went in the direction of a useless product. I think that proper exploration of early prototypes also helps you make the right decisions for the future. Startups need to be aware of the liabilities of the product in the long run. Test automation does not reveal much information about this, but exploration of the product may guide future versions of the product as well.
