This article originally appeared on HBR.org.
Covid-19 has shattered the demand forecasts that guide retailers and suppliers of consumer goods and services in figuring out how much to order or manufacture, where to stock inventory, and how much to advertise or discount. Early on during the pandemic, sudden lockdowns and a shift to working from home caused panic buying of many food items and household goods. Some items sold out while others languished on shelves.
Uncertainty persists today along several dimensions. Certain products, such as paper towels and canned vegetables, remain in short supply. Food sellers are stockpiling months' rather than weeks' worth of some staples to better prepare for this winter, when a resurgence of cases could force people to hunker down at home again. In turn, that could throw forecasts for holiday and seasonal purchasing off kilter. Changes to unemployment benefits, volatile investment markets, and even social unrest around the presidential election and racial issues could further jolt demand.
We’re also seeing deeper changes in consumer attitudes and behaviors. Consumers generally have sought ways to reduce risk, reduce anxiety, and obtain a sense of belonging. But there is a broadening spectrum of attitudes and behaviors toward the disease and physical activities based on age, income, and political orientation. Understanding these new patterns and regrouping customers for pandemic times could improve forecasts and should be high priorities for many firms.
Compounding these challenges, when forecasts break down, as they have during the pandemic, managers tend to revert to their gut instincts. That only degrades accuracy further, layering human bias on top of already noisy data.
Such bias takes many different forms. One common form that we see in our demand-planning work among consumer packaged goods companies consists of trying to please retailers by ensuring an ample supply of items, regardless of whether those items are expected to sell strongly. Field reps don't want to walk into a store and hear complaints about stock-outs, which cost sales today and damage relationships, and thus market share, tomorrow. But swinging too far in the other direction, limiting inventories too aggressively to hedge risk, is expensive for the same reasons: it costs sales today, and it erodes channel relationships and market share tomorrow.
Managers also fall into the "what you see is all there is" trap, looking only at their own region's response to the pandemic. Some companies with a strong presence in the southern US, for instance, initially thought that the early pandemic trends surfacing in New York would have no bearing on them.
Rather than abandon modeling, managers should model in a different way.
Find alternative data sets
A more reliable way to navigate these rough seas is to seek out alternative data sets — for instance, by using a blend of simpler models and digging deeper for non-obvious, sometimes unstructured “dark matter” data. Such data often lives in the minds of the people who apply forecasts (think of employees’ knowledge about local market events and other conditions) and can be structured and represented in these models.
Start with data sets. Many forecast models rely on past sales data over time. In relatively stable times, that data may give an accurate picture of the coming season. But when a pandemic hits, familiar past patterns become irrelevant, and analogous events can hold more predictive power.
Analogies might consist of past economic shocks, such as the dotcom crash; past natural disasters, especially hurricanes, which disrupt supply chains over long periods; or regions that have recovered from a case surge. The data exhaust from analogous events — for example, how long it took demand and supply to recover in different countries or cities — can help illuminate the near future during and after Covid-19.
Analogous data won’t tell the complete story, of course. Companies also need near-real-time data that tracks consumers’ current behaviors and attitudes. To that end, consumer packaged-goods companies, which often lack timely, accurate point-of-sale data, will want to convince retailers to share their firsthand data more readily, or they could build out direct-to-consumer channels. Even this data may not be quickly available from other regions. In that case, using sales through e-commerce channels, online search patterns, smartphone mobility data, and social-media sentiment analysis can all provide useful signals of consumer momentum.
One global food firm took this approach a few months after Covid-19 hit to sense demand in unmeasured channels such as restaurants, taverns, and hotels, something it had never done. Using anonymized location data from mobile phones, the company identified two dozen predictive variables across seven product categories. These data sets flow into a tool that simulates different scenarios depending on vaccine availability, lockdown policies, economic stimulus, and other factors in each country. The company also built a "panic index" to track consumer sentiment from social media feeds. So far, the tool has performed more accurately than previous demand estimates from frontline sales reps.
Tap local knowledge
Collected data should, however, include local knowledge. A baked-goods company we worked with previously used an algorithm that assumed demand would rise during certain national holidays. But in talking with field reps, we discovered that other events — namely, state fairs, fishing tournaments, and minor-league baseball games — were more important in some markets. Folding that information into the company’s machine learning–based model greatly improved forecast accuracy. Better accuracy, in turn, reduced store returns, product waste, and truck driver time on orders. Since the pandemic, this company has realized an improvement in earnings before taxes of more than $75 million through reduced over-ordering and stock-outs.
Beyond local knowledge, companies may also seek judgment from experts, including epidemiologists in the case of a pandemic, or senior advisers and trade associations for industry perspectives. Using the Delphi method — that is, aggregating the opinions of a panel of experts — companies can build in expert judgment as part of the data sets that go into building the models, rather than just to adjust the output of the model after the fact.
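To make the Delphi idea concrete, here is a simplified sketch of an aggregation loop in Python. The `revise` callback stands in for the real anonymous-feedback step (in practice, panelists see the group's view and resubmit), and the panel figures are hypothetical; this is an illustration of the aggregation mechanic, not a full Delphi protocol.

```python
from statistics import median, pstdev

def delphi_aggregate(estimates, revise, max_rounds=3, tol=0.05):
    """Aggregate expert demand estimates with a simplified Delphi loop.

    estimates:  initial point forecasts from each expert (e.g. weekly units)
    revise:     callback(expert_estimate, group_median) -> revised estimate,
                standing in for the anonymous-feedback round
    Stops when the relative spread falls below `tol` or after max_rounds.
    Returns the final group median, usable as an input feature to a model.
    """
    for _ in range(max_rounds):
        m = median(estimates)
        if m and pstdev(estimates) / m < tol:
            break  # panel has effectively converged
        estimates = [revise(e, m) for e in estimates]
    return median(estimates)

# Hypothetical usage: four experts; each round, everyone moves a third
# of the way toward the group median.
panel = [1200, 950, 1500, 1100]
consensus = delphi_aggregate(panel, lambda e, m: e + (m - e) / 3)
```

The key design point is that the consensus number becomes part of the data feeding the model, rather than a manual override applied to its output.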
Embrace ensemble modeling
Once a company has more relevant data, the next task is to improve its modeling. Here it pays to think like a hurricane forecaster: Under uncertain, dynamic conditions, blending many simple models often works better than using one complex model, which may be more brittle under these conditions.
Ensemble modeling combines predictions from different models to suggest a point estimate, or a reasonable range, when the underlying data for any single model is unstable. Hurricane forecasters plot the paths predicted by several models together on a chart so that users get a good sense of the storm path’s central tendency.
Take the introduction of a new product during the pandemic. One model might use a simple moving average of sales. Another might incorporate past sales data from a product in the same family to spot patterns for that product during unstable periods. Yet another model might represent what’s happening in other, similar stores. Because each of the individual models covers different demand characteristics, if they all point in a certain direction, confidence in that path rises. If they don’t point the same way, the simplicity and transparency of each model makes it easier to understand why they’re each pointing as they are.
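A minimal Python sketch of such an ensemble, under stated assumptions: the three component models and all the sales figures below are illustrative stand-ins, not any firm's actual methods.

```python
def moving_average(history, window=4):
    """Simple moving average of the most recent `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def analog_model(history, analog_ratios):
    """Scale the latest observation by demand ratios observed for a
    sibling product during a comparable unstable period."""
    return history[-1] * (sum(analog_ratios) / len(analog_ratios))

def peer_store_model(peer_forecasts):
    """Average of forecasts coming from similar stores."""
    return sum(peer_forecasts) / len(peer_forecasts)

def ensemble(forecasts):
    """Combine component forecasts into a point estimate and a range."""
    point = sum(forecasts) / len(forecasts)
    return point, (min(forecasts), max(forecasts))

history = [100, 110, 130, 120]           # hypothetical weekly unit sales
preds = [
    moving_average(history),             # recent own-product trend
    analog_model(history, [0.9, 1.1]),   # sibling-product analogy
    peer_store_model([105, 125, 118]),   # similar stores
]
point, (lo, hi) = ensemble(preds)
```

Because each component is transparent, a wide `(lo, hi)` range immediately tells the planner which model disagrees and why, which is exactly the diagnostic benefit the hurricane-style chart provides.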
Early on during the pandemic, a healthcare company struggled to accurately predict call volumes to its contact center, resulting in excess labor costs due to overstaffing. The company delved into the sources of errors and developed a set of variables that better predicted when and why people would call. One variable was what happened in other countries that were hit by Covid-19 earlier; another was preauthorization requests. The new predictive model consisted of a collection of simpler and more transparent models than the model used before the pandemic. It used a mix of new data and input from experts. As a result, the company substantially reduced its forecast error in weeks.
Test, test, and test again
With consumer markets still buffeted by cancellations, surges, surprises, and regional variations, managers need a disciplined process for validating results by testing quickly and often. Simple approaches such as A/B testing offer both speed and flexibility.
For example, quarantined adults aged 65 and older who had never considered buying groceries online are now increasingly turning to these services. A grocer could set up an A/B test to measure which marketing messages are most effective with that new demographic. A/B testing also can assess the effectiveness of different marketing channels, comparing the performance of a 50-50 split of streaming video and social-media ads with different ad mixes.
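One standard way to read the result of such a test is a two-proportion z-test on conversion rates. A minimal sketch, with hypothetical conversion counts for two marketing messages:

```python
from math import sqrt

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates,
    using the pooled-proportion standard error."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical: message A converts 90 of 1,000 shoppers shown it,
# message B converts 60 of 1,000.
z = two_proportion_ztest(90, 1000, 60, 1000)
significant = abs(z) > 1.96   # roughly 95% confidence, two-sided
```

In practice a grocer would also fix the sample size and test duration in advance, since peeking at results mid-test inflates false positives.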
Testing also has a logistical side benefit. For instance, a company may want to determine the effect of doubling or cutting in half a certain type of media advertising in metro areas that experience a new wave of infections. Executing the test doesn’t just yield analytic insights, it also helps assess whether doing these things is feasible, how the company would buy the extra media, or whether it has the contractual flexibility to cut back.
As managers of consumer-facing companies saw their forecast accuracy degrade during 2020, responses ranged from cutting back on production and marketing until the dust settled to simple, across-the-board linear adjustments. By contrast, a smaller group of companies that pursued new data sets, simulations, and model development is having initial success in better predicting demand. That helps them rein in costs without sacrificing growth, and it puts them in a stronger position once markets stabilize. With volatility likely to persist through 2021, many more companies should follow their lead.