Sometimes, the tyranny of choice can be overwhelming. Consider multivariate in-market testing. The broad range of variables that need to be analyzed—product, channel, messaging, incentive, format, brand and more—can make in-market testing a challenge.
Any organization has limited resources, and different business units frequently have different operational goals. The organization has to sort out which business needs take priority, and to figure out how to use resources from the different units efficiently. Moreover, even with the most sophisticated multivariate tests, there are limits to the number of elements and variations one can test in a single campaign. As efficient as multivariate campaigns are, they still exhaust accessible sample sizes very quickly, and those sizes are further limited by the need for control and other outside-the-experiment cells. For instance, marketers may choose to hold out a sample that’s either not exposed to any campaigns or exposed to a historical champion that serves as a benchmark.
So how can companies optimize their multivariate in-market tests?
Our experience suggests a clear solution: Plan and execute a series of tests instead of one test. Multiple tests dissolve the issue of priorities, because business people can test almost everything they want. The conversation shifts from what to test to when to test it. The results of test 1 provide feedback that informs the makeup of test 2, and so on. When sequencing tests, you still have to think about which test attributes work well together and which don’t. But you don’t have to worry about leaving out certain elements.
A company interested in testing different products, messages, incentives, channels and segments can design a series of mini multivariate campaigns, each with its own focus. Marketers would sequentially test each of these elements, incorporating lessons from previous campaigns into subsequent ones.
For example, you would start with a product campaign, assuming that's the most important element. You would focus on learning the main effects first: test only the effect of each individual product feature, without worrying about feature interactions (that is, whether the effects of two features together are more or less than the sum of the two parts). This simplifies the scope, so you can fit the desired attributes into a single multivariate test. By the end of the first campaign, you would know which feature combinations optimize the product. Future campaigns could further test and validate how well those features work together.
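To make the main-effects idea concrete, here is a minimal Python sketch. The feature names, cell response rates and lifts are all illustrative assumptions, not figures from any actual campaign: each test cell is one on/off combination of features, and a feature's main effect is simply the average response with the feature on minus the average with it off.

```python
# Hypothetical sketch: estimating main effects of product features
# from a multivariate (factorial) test. Feature names and response
# rates are invented for illustration.
from itertools import product

features = ["free_shipping", "premium_tier", "bundle"]

# One test cell per on/off combination (a 2^3 full factorial),
# with a simulated response rate for each cell.
cells = {}
for combo in product([0, 1], repeat=len(features)):
    # Illustrative responses: a 2% base rate plus made-up feature lifts.
    lifts = [0.010, 0.004, -0.002]
    cells[combo] = 0.02 + sum(l * on for l, on in zip(lifts, combo))

def main_effect(feature_idx):
    """Mean response with the feature on minus mean with it off."""
    on = [r for c, r in cells.items() if c[feature_idx] == 1]
    off = [r for c, r in cells.items() if c[feature_idx] == 0]
    return sum(on) / len(on) - sum(off) / len(off)

for i, name in enumerate(features):
    print(f"{name}: main effect = {main_effect(i):+.3f}")
```

Because the simulated responses here are purely additive, each estimated main effect recovers its assumed lift exactly; in a real campaign, interactions and noise would make the estimates approximate.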
Armed with that knowledge, a second campaign would fix product offers and test only other soft elements such as messaging and incentives. Again, with this simplified scope, all test elements can easily fit into a single multivariate test, providing insights into which specific message and incentive gets the best response.
A third campaign would focus on the channel or segment, or both. Some marketers might ask whether the optimal product and message vary by channel or segment. Although that's a legitimate question, it's hard to answer in practice, because the market typically is too small to test within channels or within segments (multivariate in-market tests typically require a sample of more than 50,000), not to mention the operational challenge of implementing different strategies by channel or segment.
By the end of the third campaign, marketers would learn a lot, yet the best is still to come: You can revisit the elements tested and validate them in new combinations. For instance, you can retest product features and fix all other elements at the previously determined optimal level. Now, instead of testing main effects learned from the first campaign, you can focus on feature interactions, meaning how well each feature works when paired with another feature. Since all other features have been optimized, you can work with a very short list and introduce small variations to test the most probable interactions that have a high potential to be significant. This also helps to validate whether the product optimizations in the first campaign have been successful.
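The interaction logic can be sketched in a few lines. The four cell response rates below are hypothetical: with all other elements fixed at their previously optimized levels, four cells suffice to estimate one two-way interaction.

```python
# Hypothetical sketch of estimating a two-way feature interaction
# in the follow-up campaign. All response rates are illustrative.
r00 = 0.020  # neither feature
r10 = 0.030  # feature A only
r01 = 0.024  # feature B only
r11 = 0.040  # both features together

# Interaction: the extra lift from B when A is present,
# compared with the lift from B when A is absent.
interaction = (r11 - r10) - (r01 - r00)
print(f"A lift alone: {r10 - r00:+.3f}")
print(f"B lift alone: {r01 - r00:+.3f}")
print(f"A x B interaction: {interaction:+.3f}")
```

A positive interaction, as in this made-up example, would indicate the two features reinforce each other beyond the sum of their individual lifts.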
By running these four multivariate campaigns sequentially, marketers achieve the goal of testing everything they wanted and optimizing all the elements. Since each campaign is a multivariate test, insights emerge from cell comparisons within each campaign. This makes sample usage efficient, as the entire accessible sample can be reused across the campaign series. That's a big advantage over alternative approaches such as A/B testing, where marketers must use a fresh sample for each campaign to avoid contamination by previous campaigns.
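The sample-reuse advantage reduces to simple arithmetic. The base size and campaign count below are assumptions chosen for illustration, not figures from the article:

```python
# Illustrative arithmetic for the sample-reuse advantage.
# The accessible base and campaign count are assumed values.
base = 60_000   # accessible contacts (assumed)
campaigns = 4   # sequential campaigns in the series

# Sequential multivariate testing: the same base is re-exposed in
# each campaign, so fresh-sample demand stays at one base.
sequential_mvt_fresh = base

# A series of A/B tests: each campaign needs a fresh, uncontaminated
# sample, so fresh-sample demand grows with the number of campaigns.
ab_series_fresh = base * campaigns

print(f"Sequential multivariate: {sequential_mvt_fresh:,} fresh contacts")
print(f"A/B test series: {ab_series_fresh:,} fresh contacts")
```

Under these assumptions, the A/B series consumes four times the fresh sample to cover the same ground.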
Many organizations run multiple in-market tests a year, with digital natives running hundreds. Yet we have observed that many of these tests are suboptimal:
- Some run A/B tests instead of multivariate tests.
- Others set no clear priority for each test, so multiple tests duplicate effort.
- Still others generate results and insights too late to inform subsequent tests.
As a result, most companies should consider designing a one-year or multiyear roadmap for a long-term, in-market test-optimization strategy.
In doing so, they should avoid common pitfalls. For each individual campaign, they'll want to use a multivariate test to learn about individual elements efficiently, rather than splitting a multivariate campaign into a series of A/B tests. A single multivariate campaign may require many cells, and marketers may think they can simply split them into two subsequent campaigns, each with half the cells. Not so. For a multivariate campaign to work properly, marketers need to run all necessary cells in a single campaign over the same time period, so they can control for external factors and attribute all observed effects solely to the tested components.
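A toy simulation illustrates why splitting cells across time periods breaks attribution. All response rates and the period effect below are invented for illustration; the point is that a shift in background demand between periods gets mixed into any comparison that spans them.

```python
# Hypothetical simulation: splitting one campaign's cells across two
# time periods confounds the period effect with the cell differences.
# All numbers are illustrative assumptions.
true_lift = {"A": 0.000, "B": 0.005, "C": 0.010, "D": 0.015}
period_effect = {1: 0.000, 2: 0.008}  # e.g., a seasonal demand shift

def observed(cell, period):
    """Response rate a marketer would observe for a cell in a period."""
    return 0.02 + true_lift[cell] + period_effect[period]

# All cells in the same period: cell comparisons recover true lifts.
same = {c: observed(c, 1) for c in true_lift}

# Cells split across periods: the period effect contaminates comparisons.
split = {c: observed(c, 1 if c in ("A", "B") else 2) for c in true_lift}

print("same-period  C vs A:", round(same["C"] - same["A"], 3))
print("split-period C vs A:", round(split["C"] - split["A"], 3))
```

In the same-period design the C-versus-A comparison equals C's true lift; in the split design it is inflated by the period shift, and nothing in the data distinguishes the two sources.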
With some careful planning, companies can avoid these pitfalls and move to sequential testing. In doing so, they’ll be pleasantly surprised at how many elements and variations they can test and optimize for little or no incremental cost.
June Wu is an expert in Bain & Company's Global Advanced Analytics Group.