The Smart Forecaster
Pursuing best practices in demand planning,
forecasting and inventory optimization
A new metric we call the “Attention Index” will help forecasters identify situations where “data behaving badly” can distort automatic statistical forecasts (see adjacent poem). It quickly identifies those items most likely to require forecast overrides—providing a more efficient way to put business experience and other human intelligence to work maximizing the accuracy of forecasts. How does it work?
Classical forecasting methods, such as the various flavors of exponential smoothing and moving averages, insist on a leap of faith: we must trust that present conditions will persist into the future. If they do, then it is sensible to use these extrapolative methods, which quantify the current level, trend, seasonality and “noise” of a time series and project them into the future.
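To make “extrapolative” concrete, here is a minimal sketch of one such method, simple exponential smoothing (an illustration only, not Smart Software's implementation; the smoothing weight and the demand figures are arbitrary):

```python
def simple_exponential_smoothing(history, alpha=0.3):
    """One-step-ahead forecast from an exponentially smoothed level.

    alpha weights recent observations more heavily; 0.3 is an
    arbitrary choice for illustration.
    """
    level = history[0]
    for y in history[1:]:
        # Update the estimated current level with each new observation.
        level = alpha * y + (1 - alpha) * level
    # Extrapolation: project the current level forward unchanged.
    return level

demand = [100, 102, 98, 101, 99, 103]
forecast = simple_exponential_smoothing(demand)
```

The forecast is simply the latest estimate of the level, projected forward; this is exactly the “trust present conditions to persist” assumption the method makes.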
But if they do not persist, extrapolative methods can get us into trouble. What had been going up might suddenly be going down. What used to be centered around one level might suddenly jump to another. Or something really odd might happen that is entirely out of pattern. In these surprising circumstances, forecast accuracy deteriorates, inventory calculations go wrong and general unhappiness ensues.
One way to cope with this problem is to rely on more complex forecasting models that account for external factors that drive the variable being forecasted. For instance, sales promotions attempt to disrupt buying patterns and move them in a positive direction, so including promotion activity in the forecasting process can improve sales forecasting. Sometimes macroeconomic indicators, such as housing starts or inflation rates, can be used to improve forecast accuracy. But more complex models require more data and more expertise, and they may not be useful for some problems—such as managing parts or subsystems, rather than finished goods.
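A minimal sketch of how a promotion flag might enter such a model (the demand figures and promotion schedule are invented; with a single on/off driver, least-squares fitting reduces to comparing group averages):

```python
def fit_promo_model(demand, promo):
    """Fit demand = baseline + uplift * promo_flag.

    With one 0/1 regressor, ordinary least squares reduces to
    group means: baseline is the mean of non-promotion periods,
    uplift is the promotion-period mean minus that baseline.
    """
    base = [d for d, p in zip(demand, promo) if p == 0]
    boosted = [d for d, p in zip(demand, promo) if p == 1]
    baseline = sum(base) / len(base)
    uplift = sum(boosted) / len(boosted) - baseline
    return baseline, uplift

demand = [100, 98, 150, 102, 148, 101]
promo = [0, 0, 1, 0, 1, 0]
baseline, uplift = fit_promo_model(demand, promo)
# Forecast for a period with a planned promotion: baseline + uplift
```

This is the sense in which causal models need more data: besides the demand history, the forecaster must also know when the driving events occurred, past and future.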
If one is stuck using simple extrapolative methods, it is useful to have a way to flag items that will be difficult to forecast. This is the Attention Index. As the name suggests, items to be forecast with a high Attention Index require special handling—at least a review, and usually some sort of forecast adjustment.
The Attention Index detects three types of problems:
An outlier in the demand history of an item.
An abrupt change in the level of an item.
An abrupt change in the trend of an item.
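Smart Software has not published the Attention Index formula, but flags in the same spirit as the first two problems can be sketched as follows (the thresholds and test series are arbitrary illustrations, not the actual computation):

```python
import statistics

def needs_attention(history, z_cut=3.0, shift_cut=1.5):
    """Illustrative flags (not the actual Attention Index):
    detect an outlier or an abrupt level shift in a demand history."""
    mean = statistics.mean(history)
    sd = statistics.pstdev(history) or 1.0
    # Outlier: any point far from the mean in standard-deviation units.
    outlier = any(abs(y - mean) / sd > z_cut for y in history)
    # Level shift: compare the means of the first and second halves.
    half = len(history) // 2
    shift = abs(statistics.mean(history[half:]) -
                statistics.mean(history[:half])) / sd > shift_cut
    return outlier or shift

stable = [100, 101, 99, 100, 102, 98, 100, 101]
jumped = [100, 101, 99, 100, 200, 201, 199, 202]
```

Here `needs_attention(stable)` is false and `needs_attention(jumped)` is true: the series that doubled mid-history gets flagged for review.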
Using software like SmartForecasts™, the forecaster can deal with an outlier by replacing it with a more typical value.
An abrupt change in level or trend can be handled by omitting from the forecasting calculations all data recorded before the “rupture” in the demand pattern, on the assumption that the item has switched to a new regime that renders the older data irrelevant.
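Both remedies can be sketched in a few lines (illustrative only; in practice the forecaster confirms each flag and chooses the replacement value and the rupture point, and the threshold here is arbitrary):

```python
import statistics

def clean_history(history, z_cut=2.5):
    """Replace outliers with the median, a 'more typical value'
    (an illustrative cleanup, not SmartForecasts' algorithm)."""
    med = statistics.median(history)
    sd = statistics.pstdev(history) or 1.0
    return [med if abs(y - med) / sd > z_cut else y for y in history]

def truncate_at(history, rupture_index):
    """Keep only data from the new regime, dropping everything
    before the rupture point identified by the forecaster."""
    return history[rupture_index:]

cleaned = clean_history([100, 101, 99, 100, 500, 100])
new_regime = truncate_at([90, 92, 91, 90, 140, 141, 139, 142], 4)
```

The first call replaces the lone spike of 500 with the median demand; the second discards the pre-rupture history so the forecast is fit only to the new, higher level.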
While no index is perfect, the Attention Index does a good job of focusing attention on the most problematic demand histories. This is demonstrated in the two figures below, which were produced with data from the M3 Competition, well known in the forecasting world. Figure 1 shows the 20 items (out of the contest’s 3,003) with the highest Attention Index scores; all of these have grotesque outliers and ruptures. Figure 2 shows the 20 items with the lowest Attention Index scores; most (but not all) of the items with low scores have relatively benign patterns.
If you have thousands of items to forecast, the new Attention Index will be very useful for focusing your attention on those items most likely to be problematic.
Thomas Willemain, PhD, co-founded Smart Software and currently serves as Senior Vice President for Research. Dr. Willemain also serves as Professor Emeritus of Industrial and Systems Engineering at Rensselaer Polytechnic Institute and as a member of the research staff at the Center for Computing Sciences, Institute for Defense Analyses.