The Smart Forecaster
Pursuing best practices in demand planning, forecasting and inventory optimization
Tremendous cost-saving efficiencies can result from optimizing inventory stocking levels using the best possible predictions of future demand. Familiarity with forecasting basics is an important part of using effectively the software tools designed to capture those savings. This concise introduction (the first in a short series of blog posts) offers the busy professional a primer on the basic ideas you need to bring to bear on forecasting. How do you evaluate your forecasting efforts, and how reliable are the results?
A good forecast is “unbiased.” It correctly captures predictable structure in the demand history, including: trend (a regular increase or decrease in demand); seasonality (cyclical variation); special events (e.g., sales promotions) that could boost demand or cannibalize demand for other items; and broader macroeconomic events.
By “unbiased,” we mean that the forecast is neither projecting too high nor too low; actual demand is equally likely to come in above or below the prediction. Think of the forecast as your best guess of what could happen in the future. If that forecast is “unbiased,” the overall picture will show actual future demand “bracketing” the forecasts, falling above and below the predictions with equal odds.
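One simple way to check for bias is to compare forecasts against the actuals they predicted and average the errors: a mean error near zero suggests an unbiased forecast, while a consistently positive or negative mean signals a tilt. The sketch below uses made-up demand numbers for illustration, not output from SmartForecasts:

```python
# Hypothetical demand history and a flat forecast (illustrative numbers only).
actuals = [102, 98, 105, 97, 101, 99, 103, 95]
forecasts = [100, 100, 100, 100, 100, 100, 100, 100]

# Error = actual minus forecast for each period.
errors = [a - f for a, f in zip(actuals, forecasts)]

# Mean error ("bias"): a value near zero means the forecast is unbiased --
# actuals fall above and below it in roughly equal measure.
bias = sum(errors) / len(errors)
print(f"mean error (bias): {bias:+.2f}")  # prints: mean error (bias): +0.00
```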
You can think of this as if you were an artillery officer whose job is to destroy a target with your cannon. You aim the cannon (“the forecast”), fire, and watch the shells fall. If you aimed correctly (producing an “unbiased” forecast), the shells will “bracket” the target: some fall in front, some fall behind, and some hit it. The falling shells are the “actual demand” that will occur in the future. If you forecasted well (aimed your cannon well), those actuals will bracket the forecasts, falling equally above and below them.
Once you have obtained an “unbiased” forecast (in other words, once you have aimed your cannon correctly), the next question is: how accurate is it? In the artillery example, how wide is the range around the target in which your shells are falling? You want that range to be as narrow as possible. A good forecast is one with the minimum possible “spread” around the target.
However, actuals falling widely around the forecast do not necessarily mean you have a bad forecast. It may merely indicate a very “volatile” demand history. Again, in the artillery example, if you are shooting in a hurricane, you should expect the shells to scatter widely around the target.
Your goal is to obtain as accurate a forecast as is possible with the data you have. If that data is very volatile (you’re shooting in a hurricane), then you should expect a large error. If your data is stable, then you should expect a small error and your actuals will fall close to the forecast—you’re shooting on a clear day!
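Two common measures of that spread are the mean absolute error (MAE) and the root mean squared error (RMSE); the more volatile the demand, the larger both will be even for a well-aimed forecast. A minimal sketch with illustrative numbers:

```python
import math

# Hypothetical actuals versus a flat forecast (illustrative numbers only).
actuals = [102, 98, 105, 97, 101, 99, 103, 95]
forecasts = [100] * len(actuals)

errors = [a - f for a, f in zip(actuals, forecasts)]

# MAE: average size of the misses, ignoring direction.
mae = sum(abs(e) for e in errors) / len(errors)

# RMSE: like MAE but penalizes large misses more heavily.
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"MAE:  {mae:.2f}")
print(f"RMSE: {rmse:.2f}")
```

An unbiased forecast can still have a large MAE or RMSE; bias and spread are separate questions, which is why both checks matter.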
So that you can understand both the usefulness of your forecasts and the degree of caution appropriate when applying them, you need to be able to review and measure how well your forecast is doing. How well is it estimating what actually occurs? SmartForecasts does this automatically by running its “sliding simulation” through the history. It simulates “forecasts” that could have occurred in the past. An older part of the history, without the most recent numbers, is isolated and used to build forecasts. Because these forecasts then “predict” what might happen in the more recent past—a period for which you already have actual demand data—the forecasts can be compared to the real recent history.
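The details of SmartForecasts’ sliding simulation are its own, but the general idea, often called rolling-origin backtesting, can be sketched with a deliberately naive moving-average forecaster and made-up demand numbers. At each point in the “past,” we forecast the next period using only earlier data, then compare against the actual that followed:

```python
def moving_average_forecast(history, window=3):
    """Naive one-step-ahead forecast: average of the last `window` points.
    Stands in for a real forecasting model in this sketch."""
    return sum(history[-window:]) / window

# Hypothetical demand history (illustrative numbers only).
demand = [100, 96, 104, 98, 103, 97, 105, 99, 102, 101]

# Rolling-origin simulation: isolate the older history, "forecast" the next
# period, and compare that forecast with the actual demand that occurred.
errors = []
for t in range(3, len(demand)):
    fcst = moving_average_forecast(demand[:t])
    errors.append(demand[t] - fcst)

print("simulated one-step errors:", [round(e, 2) for e in errors])
```

The resulting error series is an empirical record of how the forecasting method would have performed, which is exactly the raw material needed to judge forecast accuracy.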
In this manner, SmartForecasts can empirically compute the actual forecast error—and those errors are needed to properly estimate safety stock. Safety stock is the amount of extra stock you need to carry in order to account for the anticipated error in your forecasts. In a subsequent essay, I’ll discuss how we use these estimated forecast errors (via the SmartForecasts sliding simulation) to correctly estimate safety stocks.
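As a preview, a common textbook approach (not necessarily the method SmartForecasts itself uses, which the later essay covers) converts the standard deviation of those backtested errors into safety stock via a service-level multiplier, under an assumption of roughly normal errors:

```python
import math

# One-step forecast errors from a backtest (illustrative numbers only).
errors = [-2.0, 3.4, -1.3, 3.0, -4.0, 4.3, -2.7]

# Sample standard deviation of the forecast errors.
mean_e = sum(errors) / len(errors)
std_e = math.sqrt(sum((e - mean_e) ** 2 for e in errors) / (len(errors) - 1))

# z is the normal-distribution multiplier for the target service level;
# 1.645 corresponds to roughly a 95% cycle service level.
z = 1.645
safety_stock = z * std_e

print(f"error std dev: {std_e:.2f}  safety stock: {safety_stock:.1f}")
```

The key point is the direction of the dependency: safety stock is driven by measured forecast error, so a more volatile history (the hurricane) demands more buffer stock than a stable one.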
Nelson Hartunian, PhD, co-founded Smart Software, formerly served as President, and currently oversees it as Chairman of the Board. He has, at various times, headed software development, sales and customer service.