The Smart Forecaster
Pursuing best practices in demand planning,
forecasting and inventory optimization
Tremendous cost-saving efficiencies can result from optimizing inventory stocking levels using the best predictions of future demand. Familiarity with forecasting basics is an important part of being effective with the software tools designed to capture those savings. This concise introduction (the first in a short series of blog posts) offers the busy professional a primer on the basic ideas you need to bring to bear on forecasting: How do you evaluate your forecasting efforts, and how reliable are the results?
A good forecast is “unbiased.” It correctly captures predictable structure in the demand history, including: trend (a regular increase or decrease in demand); seasonality (cyclical variation); special events (e.g., sales promotions) that could boost demand for one item or cannibalize demand for others; and other factors, such as macroeconomic events.
By “unbiased,” we mean that the forecast is not projecting too high or too low; the actual demand is equally likely to be above or below the predicted demand. Think of the forecast as your best guess of what could happen in the future. If that forecast is “unbiased,” the overall picture will show that measures of actual future demand “bracket” the forecasts, distributed in balance above and below predictions with equal odds either way.
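To make the idea of an unbiased forecast concrete, here is a minimal sketch (not SmartForecasts code, and the demand numbers are made up for illustration): a forecast is roughly unbiased when the mean signed error, actual minus forecast, is close to zero.

```python
# Illustrative sketch: checking forecast bias with the mean signed error.
# Values near zero mean actuals fall above and below the forecast with
# roughly equal odds, i.e. the forecast is "unbiased".

def mean_error(actuals, forecasts):
    """Average signed error (actual - forecast); near zero suggests no bias."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    return sum(errors) / len(errors)

actuals   = [102, 98, 105, 95, 101, 99]   # hypothetical demand history
forecasts = [100, 100, 100, 100, 100, 100]
print(mean_error(actuals, forecasts))  # 0.0 -> no systematic over/under-forecasting
```

A persistently positive mean error would mean the forecast is aiming low; a negative one, aiming high.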
You can think of this as if you are an artillery officer whose job is to destroy a target with your cannon. You aim your cannon (“the forecast”), then shoot and watch the shells fall. If you aimed the cannon correctly (producing an “unbiased” forecast), those shells will “bracket” the target: some will fall in front, some will fall behind, and some will hit the target. The falling shells can be thought of as the “actual demand” that will occur in the future. If you forecasted well (aimed your cannon well), then those actuals will bracket the forecasts, falling equally above and below.
Once you have obtained an “unbiased” forecast (in other words, you aimed your cannon correctly), the question is: how accurate was your forecast? Using the artillery example, how wide is the range around the target in which your shells are falling? You want to have as narrow a range as possible. A good forecast will be one with the minimal possible “spread” around the target.
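Two common ways to measure that “spread” are the mean absolute error (MAE) and the root mean squared error (RMSE). This is a generic illustration with made-up numbers, not the specific metric SmartForecasts reports:

```python
# Illustrative sketch: two standard measures of forecast "spread".
import math

def mae(actuals, forecasts):
    """Mean absolute error: the average size of the misses, sign ignored."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def rmse(actuals, forecasts):
    """Root mean squared error: like MAE, but penalizes large misses more."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actuals, forecasts)) / len(actuals))

actuals   = [102, 98, 105, 95, 101, 99]   # hypothetical demand history
forecasts = [100] * 6
print(round(mae(actuals, forecasts), 2))   # 2.67
print(round(rmse(actuals, forecasts), 2))  # 3.16
```

The smaller these numbers, the tighter the shells are falling around the target.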
However, just because the actuals are falling widely around the forecast does not mean you have a bad forecast. It may merely indicate that you have a very “volatile” demand history. Again, using the artillery example, if you are shooting in a hurricane, you should expect the shells to scatter widely around the target.
Your goal is to obtain as accurate a forecast as is possible with the data you have. If that data is very volatile (you’re shooting in a hurricane), then you should expect a large error. If your data is stable, then you should expect a small error and your actuals will fall close to the forecast—you’re shooting on a clear day!
So that you can understand both the usefulness of your forecasts and the degree of caution appropriate when applying them, you need to be able to review and measure how well your forecast is doing. How well is it estimating what actually occurs? SmartForecasts does this automatically by running its “sliding simulation” through the history. It simulates “forecasts” that could have occurred in the past. An older part of the history, without the most recent numbers, is isolated and used to build forecasts. Because these forecasts then “predict” what might happen in the more recent past—a period for which you already have actual demand data—the forecasts can be compared to the real recent history.
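The mechanics of that “sliding simulation” can be sketched as a rolling backtest. SmartForecasts’ actual method is its own; in this hypothetical sketch a simple moving average stands in for the forecasting engine, just to show how forecasts built from older history are compared against the known recent actuals:

```python
# Sketch of a "sliding simulation" (rolling backtest). A simple moving
# average is used here as a stand-in forecaster for illustration only.

def moving_average_forecast(history, window=3):
    """One-step-ahead forecast: the average of the last `window` observations."""
    return sum(history[-window:]) / window

def sliding_simulation(demand, min_history=3):
    """Walk forward through the history: at each point, forecast the next
    period using only earlier data, then record the error vs. the actual."""
    errors = []
    for t in range(min_history, len(demand)):
        forecast = moving_average_forecast(demand[:t])
        errors.append(demand[t] - forecast)   # actual minus forecast
    return errors

demand = [100, 104, 98, 103, 107, 101, 105, 110]  # hypothetical history
print(sliding_simulation(demand))
```

Each recorded error is an honest out-of-sample miss, because the forecast never saw the period it was predicting.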
In this manner, SmartForecasts can empirically compute the actual forecast errors—and those errors are needed to properly estimate safety stock. Safety stock is the amount of extra stock you need to carry in order to account for the anticipated error in your forecasts. In a subsequent essay, I’ll discuss how we use these estimated forecast errors (via the SmartForecasts sliding simulation) to correctly estimate safety stocks.
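As a preview of that connection, here is a common textbook approximation (not necessarily the SmartForecasts method, which the author covers in the later essay): size safety stock from the standard deviation of the observed forecast errors and a service-level factor, assuming roughly normal errors.

```python
# Common textbook sketch: safety stock from the spread of forecast errors.
# Assumes approximately normal errors; z = 1.65 targets roughly a 95%
# service level. Error values below are hypothetical.
import statistics

def safety_stock(forecast_errors, z=1.65):
    """Extra stock to cover forecast misses at the chosen service level."""
    return z * statistics.stdev(forecast_errors)

errors = [2.3, 5.3, -1.7, 1.3, 5.7]   # e.g., from a sliding simulation
print(round(safety_stock(errors), 1))
```

The wider the errors from the backtest, the more safety stock you need to carry to hit the same service level.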
Nelson Hartunian, PhD, co-founded Smart Software, formerly served as President, and currently oversees it as Chairman of the Board. He has, at various times, headed software development, sales and customer service.
This article is about the real power that comes from collaborating with our software, right at your fingertips. We often write about the software itself and what goes on “under the hood.” This time, the subject is how you can best team up with the software.
Measuring the accuracy of forecasts is an undeniably important part of the demand planning process. A forecasting scorecard can be built from one of two contrasting viewpoints for computing metrics. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.
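The two viewpoints are mirror images of one another, as this tiny sketch with made-up numbers shows: one period’s percentage error and the corresponding percentage accuracy sum to 100%.

```python
# Sketch of the two scorecard viewpoints for a single period (hypothetical numbers).
actual, forecast = 120, 100

error_pct = abs(actual - forecast) / actual * 100   # "how far off?"  -> 16.7%
accuracy_pct = 100 - error_pct                      # "how close?"    -> 83.3%
print(round(error_pct, 1), round(accuracy_pct, 1))
```

The error viewpoint carries more information because errors keep their sign and scale when aggregated, whereas a single accuracy percentage hides whether misses were high or low.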
With so much hype around new Machine Learning (ML) and probabilistic forecasting methods, the traditional “extrapolative” or “time series” statistical forecasting methods seem to be getting the cold shoulder. However, it is worth remembering that these traditional techniques (such as single and double exponential smoothing, linear and simple moving averages, and Winters models for seasonal items) often work quite well for higher-volume data. Every method is good for what it was designed to do. Just apply each appropriately: don’t bring a knife to a gunfight, and don’t use a jackhammer when a simple hand hammer will do.
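To show how simple these traditional techniques can be, here is a minimal sketch of single exponential smoothing, one of the methods mentioned above (the demand series and alpha value are illustrative):

```python
# Minimal sketch of single exponential smoothing. Alpha controls how
# quickly older data is "forgotten": higher alpha reacts faster to change.

def single_exponential_smoothing(demand, alpha=0.3):
    """Return the smoothed level after the last observation, which also
    serves as the one-step-ahead forecast."""
    level = demand[0]                      # initialize with the first value
    for d in demand[1:]:
        level = alpha * d + (1 - alpha) * level
    return level

demand = [100, 104, 98, 103, 107, 101, 105, 110]  # hypothetical history
print(round(single_exponential_smoothing(demand), 1))  # 105.2
```

For stable, higher-volume series, a few lines like these can be hard to beat, which is the point of not reaching for a jackhammer when a hand hammer will do.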