
Your A/B testing isn't working nearly as well as you think


Recent advances in statistical methods and analytics have given marketers a far more powerful and sophisticated technique: experimental design.


This article originally appeared in WIRED.

A/B testing is nothing new. It has been a staple of direct marketing campaigns for decades: before the web, it was catalog mailers and infomercials; since the technique moved online, it has been used to improve websites (organizations such as Google, Amazon, and the Obama presidential campaign are famous for doing so) and apps, and even to change the way people write code.

Some argue that A/B testing — which diverts a handful of users to a slightly different version of the product to find out if the new version provides better results — is not just a best practice but “a way of thinking, and for some, even a philosophy.”

Whatever the belief, however, it's clear that A/B testing has had, and continues to have, a significant impact on Silicon Valley and beyond. It is changing the way we do business. The question is, when does A/B testing actually stop you from seeing (sorry, C-ing!) enough?

The very modularity of A/B testing, changing one element at a time, can cause problems when only a few tests can be run at once. It makes sense on big websites, where you can run hundreds of tests per day against hundreds of thousands of hits. In channels like direct mail, however, only a few offers can be tested at a time, and the variance those tests reveal is often so low that meaningful statistical analysis is impossible.

Worse, the results don’t identify which variables caused consumers to respond.

As a result, response rates for emails, catalogs, and other direct marketing campaign methods — still a staple of many businesses — are very low — usually less than 5% and often less than 0.5% — and they’re declining.
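A back-of-the-envelope calculation shows why. The sketch below is our own illustration, not figures from any campaign: it applies the standard two-proportion sample-size approximation to estimate how many mailers a single A/B split needs before a modest lift becomes detectable.

```python
# Rough power math for a direct-mail A/B split (illustrative numbers only).
from scipy.stats import norm

def mailers_per_arm(p_control, p_variant, alpha=0.05, power=0.80):
    """Standard two-proportion sample-size approximation, per test arm."""
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_control - p_variant) ** 2

# Detecting a 20% relative lift on a 0.5% response rate:
print(round(mailers_per_arm(0.005, 0.006)))  # roughly 86,000 mailers per arm
```

At volumes like that, a channel that can support only a handful of test cells at a time simply cannot explore many ideas.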

A/B testing has severe limitations in these cases. But there is a better way. Recent advances in statistical methods and analytics have given marketers a far more powerful and sophisticated technique: experimental design. It works best for companies that market directly to a large number of customers, such as telecommunications firms, banks, online retailers, and credit card providers.

Experimental design massively and deliberately increases the amount of variance in direct marketing campaigns, allowing businesses to project the impact of many variables (product offers, messages, incentives, mail formats, and so on) by testing just a few of them. How? Statistical models treat a carefully chosen subset of variable combinations as a proxy for the full set, so the effect of each variable can be estimated without testing every combination.
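To make that concrete, here is a minimal sketch of the idea, with hypothetical factors and level counts rather than any real campaign's: choose a small fraction of the full factorial so that every variable's main effect remains estimable.

```python
# Illustrative only: pick 32 of 576 possible offers so that every factor's
# main effect can still be estimated. Factors and level counts are invented.
import itertools
import numpy as np

levels = [4, 4, 4, 3, 3]   # e.g. format, promotion, message, incentive, period
full = np.array(list(itertools.product(*(range(n) for n in levels))))  # 576 rows

def encode(rows):
    """Intercept plus dummy-coded main effects (one reference level per factor)."""
    cols = [np.ones(len(rows))]
    for j, n in enumerate(levels):
        for lv in range(1, n):
            cols.append((rows[:, j] == lv).astype(float))
    return np.column_stack(cols)   # 1 + 3+3+3+2+2 = 14 columns

rng = np.random.default_rng(0)
while True:   # redraw until the 32-cell fraction identifies every main effect
    picks = rng.choice(len(full), size=32, replace=False)
    X = encode(full[picks])
    if np.linalg.matrix_rank(X) == X.shape[1]:  # full rank: all effects estimable
        break
print(full[picks][:5])   # five of the 32 combinations that would be mailed
```

In practice a statistician would reach for an orthogonal array or a D-optimal design rather than random redraws, but the principle is the same: a deliberately varied fraction of the combinations carries enough information to estimate the effect of every variable.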

That allows businesses to quickly adjust messages and offers and, based on the responses, to improve campaign effectiveness, not to mention the overall economics. We have seen experimental-design-based, multivariate marketing campaigns increase consumer response rates by three to eight times, adding hundreds of millions of dollars to the top and bottom lines.

One telecommunications service provider was mailing to several million households every quarter, and response and conversion rates were declining. The company tested 18 variables, including formats, promotions, and messages, and then launched 32 marketing offers simultaneously to the target customer segment. At the end of the campaign, it modeled response rates for every possible combination of variables (576 in all), including combinations that hadn't actually been launched in the market. The best offers achieved three to four times the response rate of the existing champion offer.
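The modeling step can be sketched the same way, continuing the illustration above with simulated numbers; none of these figures are the company's data. A main-effects model is fit to the handful of mailed offers and then scores every combination, including those never mailed.

```python
# Illustrative only: fit main effects on 32 mailed offers, project all 576.
import itertools
import numpy as np

levels = [4, 4, 4, 3, 3]   # hypothetical factors; 4*4*4*3*3 = 576 combinations
full = np.array(list(itertools.product(*(range(n) for n in levels))))

def encode(rows):   # intercept + dummy-coded main effects (14 columns)
    cols = [np.ones(len(rows))]
    for j, n in enumerate(levels):
        for lv in range(1, n):
            cols.append((rows[:, j] == lv).astype(float))
    return np.column_stack(cols)

rng = np.random.default_rng(1)
tested = full[rng.choice(len(full), size=32, replace=False)]  # the mailed cells

# Simulated observed response rates for those 32 cells (made-up effect sizes):
true_beta = rng.normal(0.0, 0.002, 14)
true_beta[0] = 0.01                                  # ~1% baseline response
y = encode(tested) @ true_beta + rng.normal(0.0, 0.0005, len(tested))

beta, *_ = np.linalg.lstsq(encode(tested), y, rcond=None)   # main-effects fit
projected = encode(full) @ beta                      # scores for all 576 combos
best = np.argsort(projected)[::-1][:3]
print(full[best], projected[best])                   # top predicted offers
```

The punchline is in the last two lines: the model ranks combinations that were never put in front of a customer.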

Perhaps more importantly, the organization learned which variables caused consumers to respond. In fact, the test uncovered unexpected results. For example, the company expected that the “richest” offers, such as those offering customers expensive equipment, would spur the highest response rates. Instead, those offers performed worse than others that would cost the company far less. It turned out that the factors that sparked the highest response rates included the promotional period, the format of the mail piece, and the message content.

The campaign ended up converting a much higher proportion of customers to high-value packages, which increased average revenue per user (ARPU) by 20%. This would not have been possible with an A/B testing approach.

Of course, experimental design alone doesn’t make a business more effective. It has to be coupled with improvements in other areas of the organization:

Capabilities. Besides the obvious need for experts in statistical modeling, successful experimental design requires companies to develop the skills to draw up meaningful customer segments based on needs and behavior. At the telecoms company, one segment consisted of families who wanted services available in any room; targeting this segment with messages about technology that let them do that improved response rates. Another group of young households wasn't impressed, however; they valued simplicity and lower prices instead. This kind of insight, grounded in needs and behavior rather than straight demographics like location and income, is what allows a business to develop relevant messages, offers, and incentives.

Training. Launching multivariate tests efficiently and making sure the resulting insights get used in subsequent campaigns usually requires some new internal processes and training. Salespeople and call-center agents may need new scripts to help them manage customer calls in response to different offers, or to effectively upsell customers to the highest-value products.

Decision-making. Based on the financial modeling, companies should put in place financial thresholds, such as profitability targets, that serve as guardrails for subsequent campaigns. These thresholds help speed up decision-making and create a repeatable, efficient, test-and-learn model.
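One hypothetical form such a guardrail might take, with every number invented for illustration: an expected-margin floor that each projected offer must clear before it enters the next campaign wave.

```python
# Hypothetical guardrail: offers must clear an expected-margin floor per mailer.
MIN_MARGIN_PER_MAILER = 0.15   # illustrative profitability threshold, dollars

def clears_guardrail(response_rate, margin_per_conversion, cost_per_mailer=0.60):
    """Expected margin per piece mailed must beat the floor."""
    expected = response_rate * margin_per_conversion - cost_per_mailer
    return expected >= MIN_MARGIN_PER_MAILER

offers = [  # projected response rates and per-conversion margins (made up)
    {"id": "A", "rate": 0.018, "margin": 60.0},
    {"id": "B", "rate": 0.012, "margin": 85.0},
    {"id": "C", "rate": 0.004, "margin": 120.0},  # rich offer, poor economics
]
next_wave = [o["id"] for o in offers if clears_guardrail(o["rate"], o["margin"])]
print(next_wave)   # ['A', 'B'] under these invented numbers
```

Encoding the threshold this way makes the go/no-go decision mechanical, which is what keeps the test-and-learn loop fast.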

The rapid spread of mobile devices and social networks has given businesses more communication alternatives than ever before. That makes for greater opportunities in direct marketing, but only if companies can uncover which attributes of a campaign actually influence customer behavior.

By harnessing the power of massive variance, experimental design matches exactly the right offer with the right customer — from A to Z, not just A or B.

John Senior and Eric Almquist are partners at Bain & Company, a global management consulting firm. Senior is based in New York and Almquist is based in Boston.
