Notes for "Designing Up the Confidence Curve"

Presenter(s):  Yohanes Frezgi

Date-Time:  August 16, 2019 @ 11:30 AM

Key Takeaways

Figure out the problem you want to solve and decide if the Confidence Curve is useful for your situation.


In this talk, Yohanes Frezgi covers what the Confidence Curve is and how to get organizational traction for moving your design to production, then walks through an Audible case study.

The Confidence Curve begins with answering questions about your design ("Is this going to work?") confidently and with supporting evidence. Provide internal validation, such as evidence that the design will reduce CS tickets by 35% and lower customer acquisition cost on iOS from $69 to $44. Show stakeholder support by bringing marketing and customer support together and demonstrating that they are aligned with the solution, which validates your findings. External validation is also important, for example showing that customers were 44% more likely to sign up for a subscription compared to historic averages.

The Confidence Curve starts with the problem at hand. To climb the Curve, begin with internal validation, then move to customer research (don't bother with a sketch until you've identified and validated a problem), continue to prototyping (excellent ammunition for investment support), then to development and refinement, then live testing, and finally the full launch.

The Confidence Curve takes a lot of time to do right and may not always be the best tool. But in cases of organizational paralysis, varied team alignment, multiple stakeholders, or a lack of executive directive, or simply in medium-to-large organizations where these situations occur, the Curve is a great solution. If used, the Curve means that by the time you are ready to present the design, you have already built up support with stakeholders. It also makes it less likely that you will waste time designing something that will never be built, and more likely that your work will be approved, because you build deep confidence in your solution before asking for resources.

In phase 1 of the Confidence Curve, the starting problem, the problem has to be quantifiable. It has to address a critical customer pain point or business need, and it should tie to a KPI or matter to someone beyond yourself.

Phase 2 is internal validation: validate your problem with behavioral data or a business metric that is frequently monitored, and gain support from stakeholders outside the design realm; the more senior, the better. If your problem isn't tied to someone's bonus, it isn't likely to get built.

Phase 3 is customer research. Support your quantitative internal validation with qualitative customer research. Conduct interviews first and then surveys with customers who exhibited the problem you identified. User testing is very useful and easier for executives to digest. Have pull quotes or videos ready to put faces to the problem you've identified. You can do all of this on your own as a designer, though you may need others to assist.

Phase 4 involves prototyping. After understanding your problem, design and test a solution, then survey customers to help quantify your improvements against the existing experience. Use these results, combined with the prior steps, to justify why this design needs to be built. Don't waste your time and expertise designing something that isn't important or isn't the real issue.

Phase 5 covers development and refinement and requires you to build out your solution. Use the prior steps to define user stories and functionality needs, then bring back the customers you previously interviewed to test your experience while it is still in development. Make sure to track your KPIs!

Live testing is phase 6. Make sure you test your amazing solution! The only way to quantify your impact is to test against a control group, i.e., employ the scientific method. Define your benchmark metrics for what success looks like before you launch your experiment, and ramp up cohorts based on performance.
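The control-group comparison in this phase can be sketched with a standard two-proportion z-test. The function and the sample numbers below are illustrative, not from the talk:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is the variant's conversion rate
    significantly different from the control's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical cohorts of 5,000 users each:
# control converts at 4.0% (200), variant at 5.0% (250).
z = two_proportion_z(200, 5000, 250, 5000)
print(round(z, 2))  # 2.41 — |z| > 1.96 is significant at the 5% level
```

In practice an experimentation platform or a statistics library would run this test, but the benchmark idea is the same: decide the success threshold before launch, then compare the variant cohort against control.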

The final phase is the full launch. Only launch your experiment to 100% of users if it beats the control group and doesn't adversely affect other KPIs. Even after fully launching, consider how the experience can be optimized to drive further improvement.
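The launch gate described above can be expressed as a small check. The thresholds and KPI names here are hypothetical, just to show the shape of the decision:

```python
def ready_for_full_launch(primary_lift, guardrail_deltas,
                          min_lift=0.0, max_drop=-0.02):
    """Gate a 100% launch: the primary metric must beat control,
    and no guardrail KPI may drop more than the allowed threshold."""
    beats_control = primary_lift > min_lift
    guardrails_ok = all(delta >= max_drop
                        for delta in guardrail_deltas.values())
    return beats_control and guardrails_ok

# Hypothetical readout: +5% on the primary metric, guardrails steady.
print(ready_for_full_launch(0.05,
                            {"app_rating": 0.01, "cs_tickets": -0.01}))  # True
```

The point is that the launch criteria, both the win condition and the "doesn't hurt other KPIs" condition, are decided up front rather than after seeing results.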

The case study is titled "Taste of Audible". Audible is an Amazon company, and while it is the largest audiobook company in the world, it doesn't support in-app purchasing on iOS.

Their phase 1 problem was a low app rating: mobile users didn't know how to sign up for a subscription, and there were large CX issues because customers couldn't see their wishlist. Amazon wouldn't change its policy, so they needed to find a solution elsewhere. In phase 2, internal validation found that customer acquisition costs were much higher on iOS, which limits marketing spending, and that many CS calls related to not knowing how to sign up for a subscription on iOS; both findings appeal to stakeholders. Improving iOS conversion would drive substantial revenue for the business. Phase 3, customer research, surfaced a disconnect: Audible promoted itself on TV with the offer of a free book, and mobile is the primary consumption device, yet viewers who saw the ad found the offer wasn't available in the app. In phases 4, 5, and 6 they got approval, defined the experiment and benchmark metrics, and launched a limited test. The benchmark KPIs for the experiment were based on both financial and customer-success metrics: the average amount of time people listened, the average number of sessions, the percentage of the book completed, and how many signed up for a monthly subscription. The results were successes for both the business and the customers. They launched a 100% release to the US, then Germany, and on from there. The app rating is now 4.9 stars.

The session concluded with a short Q&A; a few of the questions and answers are listed below.

How do you vary the treatment between the different variant groups?

You should change variables between groups; for example, in the Audible case study we tested samples of books versus an entire free book. The key metrics were whether customers actually listened to a book and whether they signed up for subscriptions.

Should you use cost-benefit analysis when using the Confidence Curve?

For Audible it worked out, but for other business units it may not. Being part of Amazon allowed us to avoid overhauling the app to create in-app purchasing.