I’ve been reading two interesting books recently.
“The Lean Startup” by Eric Ries is a great book about product management for a new product. In it, he describes the Build, Measure, Learn development cycle.
- Build, Measure, Learn. A new product’s goal is to repeat the Build, Measure, Learn cycle as quickly as possible. Build a new feature; measure the user response to that feature: did they want it? Did they use it in the way you expected them to?
If you measured the right things, you’ll know more about what users want. For each feature you can now improve it, keep it as is, or discard it. As you learn more about what users really want from your product, you become better able to predict which new features to build.
The quicker you successfully complete a Build, Measure, Learn cycle, the closer you are to having a successful product.
- Your assumptions about what the audience wants are probably wrong. What a relief! Seriously, it becomes easier to reach agreement between stakeholders if you are able to admit that nobody knows what the right feature to build is. We can hopefully make an educated guess, but that is all. Until it’s built and you see users using it, no-one knows for sure.
I know from my own experience that a design always changes following user testing. You think you’ve delivered a great design, yet there’s always something that needs to improve, because users don’t want it or don’t know how to use it.
Eric Ries recommends a book called ‘The Other Side Of Innovation’ (written by Vijay Govindarajan and Chris Trimble), which is a great book for those of us trying to improve innovation within an existing (large) organisation.
In ‘The Other Side Of Innovation’ the authors convinced me that, in order to complete the measure and learn parts of the cycle, you need to create ‘hypotheses of record’.
School science projects taught us about hypotheses, so this is quite an easy concept to understand.
Creating hypotheses of record for a product
A hypothesis for product development needs to focus on assumptions about user behavior, and it needs to be derived from the core business goals. For many businesses the chain is straightforward: provide a product or service that users want, sell more of it as a result, and thereby increase revenue and profit.
In the BBC’s case, in my opinion, the business goal is to increase audience reach and engagement, so that when a user is asked to purchase a TV licence (which funds the BBC) they feel it represents excellent value for money. There are of course additional aims, notably to educate, inform and entertain, but without achieving reach and engagement, the organisation cannot hope to achieve these subsequent aims.
An example hypothesis for the new BBC homepage might be:
Hypothesis: Given the user experience of the new BBC homepage, we expect most users to use the carousel buttons at the top of the page.
What does success look like? If, over time, a growing proportion of visitors to www.bbc.co.uk use the carousel buttons, then the design works as well as we hoped. As a subsequent success we would expect to see more clicks through to items within the carousel; each click through to an item in the carousel represents an increase in engagement. If we see a low or decreasing proportion of users engage with the carousel over time, then we think the carousel design needs to change.
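The success check above can be sketched in a few lines of code. This is a minimal illustration, not BBC analytics: the weekly visitor and carousel-use counts below are invented, and the only point is that we judge the hypothesis on the trend in the proportion over time, not on a single snapshot.

```python
# Sketch: testing the carousel hypothesis against (hypothetical) weekly analytics.
# All numbers below are invented for illustration.

weekly_stats = [
    # (total_visitors, visitors_who_used_carousel)
    (1_000_000, 50_000),
    (1_050_000, 63_000),
    (980_000, 68_600),
    (1_100_000, 88_000),
]

# The metric is a proportion, so growth in raw traffic doesn't skew it.
proportions = [used / total for total, used in weekly_stats]

# We care about the trend over time: is the proportion of visitors
# using the carousel rising week on week?
rising = all(later > earlier for earlier, later in zip(proportions, proportions[1:]))

print([round(p, 3) for p in proportions])  # [0.05, 0.06, 0.07, 0.08]
print("hypothesis supported so far" if rising else "carousel design needs a rethink")
```

A real version would pull these counts from the site’s analytics and probably use a more robust trend test than strict week-on-week increase, but the shape of the check is the same.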
Other parts of a product’s design can be considered in the same way: for a particular feature, what is the user behavior we expect if that feature works? Which of our goals is this feature going to improve? If we can’t agree with a stakeholder about which business goal a feature will improve, perhaps that feature isn’t as needed as we thought it was!
Writing down a set of hypotheses, and how each hypothesis will be tested, forms a “Hypotheses of Record”. Once a hypothesis is tested and the metrics obtained, we can check our records to see whether the hypothesis was correct.
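Written down, a hypotheses of record is just a structured list. The sketch below shows one way to capture it; the field names and the example entry are my own illustration, not a format from either book.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One entry in a hypotheses of record (field names are illustrative)."""
    statement: str           # the assumption about user behavior
    business_goal: str       # which core business goal this feature serves
    metric: str              # what we will measure to test the assumption
    success_looks_like: str  # the trend that would confirm the hypothesis
    results: list = field(default_factory=list)  # measurements gathered over time

# The record is simply the written-down set of hypotheses and their tests.
record = [
    Hypothesis(
        statement="Most users will use the carousel buttons at the top of the homepage",
        business_goal="Increase audience reach and engagement",
        metric="Proportion of visitors to www.bbc.co.uk who use the carousel",
        success_looks_like="A rising proportion over time, plus more click-throughs "
                           "to items within the carousel",
    ),
]

print(record[0].metric)
```

The value isn’t in the code, of course; it’s that every hypothesis is forced to name its metric and its success criterion before it is tested.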
Prioritisation of hypotheses
Some of our assumptions are high risk and high cost (i.e. we don’t know whether the audience wants a new feature, but delivering it would cost a lot of money). These high-risk, high-cost features are the hypotheses we should focus on testing first: the sooner we test, the less money we will have spent on development if the hypothesis turns out to be wrong.
[Testing a feature can be as simple as interviewing users with a paper illustration of the feature: is this something they would use? Would it make them likely to use the product more often?]
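The prioritisation rule above can be made mechanical. Here is a small sketch that orders hypotheses by risk multiplied by cost; the feature names and scores are invented, and in practice risk and cost would be rough estimates agreed with stakeholders rather than precise numbers.

```python
# Sketch: ordering hypotheses so high-risk, high-cost assumptions are tested first.
# Names and 1-5 scores below are invented for illustration.

hypotheses = [
    {"name": "carousel navigation",    "risk": 3, "cost": 5},
    {"name": "personalised homepage",  "risk": 5, "cost": 5},
    {"name": "colour tweak",           "risk": 1, "cost": 1},
]

# Test the riskiest, most expensive assumptions first: if one is wrong,
# finding out early wastes the least development money.
test_order = sorted(hypotheses, key=lambda h: h["risk"] * h["cost"], reverse=True)

print([h["name"] for h in test_order])
# ['personalised homepage', 'carousel navigation', 'colour tweak']
```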
Hypotheses of record in my experience
Recently I’ve been trying to apply this in my work (building the new bbc.co.uk/radio).
The combination of the ideas above has been very interesting.
In my experience, the conversation between stakeholders and the development team can get stressful, particularly when it comes to agreeing requirements.
Typically person A insists that they need this feature, and person B insists that they don’t need that feature, and there is very little data with which to make either case.
Either person might persuade the other that their case is valid, and therefore agree the new functionality required.
However, the conversation and persuasion can be hard, complex and slow.
A “Hypotheses of Record” allows us to complete the measure and learn parts of the Build, Measure, Learn cycle. We can:
* deduce what we need to measure for the hypothesis we are testing;
* agree what success or failure results look like (we’re not going to base our decisions on a snapshot of results from our testing; we are looking for a trend in results over time);
* learn from the results by proving or adjusting our hypotheses.
In the BBC homepage example, we might see only a few users navigating with the carousel when it launches, but if the number of users navigating the carousel is increasing significantly with time, we might say that the new carousel feature succeeds as a design. If we see the opposite trend, we might say that the carousel design is not working.
Each time we prove or disprove a hypothesis we have more accurate information, which helps us decide what features users want, and therefore what to build next.
This framework focuses stakeholders and product management on the assumptions in the current approach, and the desired user behavior if the product is a success.
These are very powerful points to focus on. In my experience they shift the conversation from a sometimes delicate negotiation to a more easily agreed route forward, where testing and evidence from the audience will be used to determine improvements to the product. When stakeholders and product management disagree about a feature, we can still agree the test and the user behavior we want to achieve, and therefore find a positive way forward.
It’s still early days in applying this approach, but trying to “Build, Measure and Learn”, and creating a “Hypotheses of Record”, is proving very valuable in focusing effort and gaining agreement between product management and stakeholders.