In my previous post in this series on essential concepts, “What is ‘A Good Forecast’”, I discussed the basic effort to discover the most likely future in a demand planning scenario. I defined a good forecast as one that is unbiased and as accurate as possible. But I also cautioned that, depending on the stability or volatility of the data we have to work with, there may still be some inaccuracy in even a good forecast. The key is to have an understanding of how much.
This topic, managing uncertainty, is the subject of a post by my colleague Tom Willemain, “The Average is not the Answer”. His post lays out the theory for responsibly confronting the limits of our predictive ability. It’s important to understand how this actually works in practice.
As I briefly touched on at the end of my previous post, our approach begins with something called a “sliding simulation”. We estimate how accurately we are predicting the future by applying our forecasting techniques to an older portion of the history, excluding the most recent data. We can then compare what we would have predicted for the recent past with the actual, real-world record of what happened. The resulting errors give a reliable estimate of how closely we can expect to predict future demand.
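To make this concrete, here is a minimal sketch of a sliding simulation in Python. It assumes a simple moving-average forecaster as a stand-in for whatever forecasting method is actually in use; the function names and demand figures are illustrative only, not Smart Software’s implementation.

```python
def moving_average_forecast(history, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    return sum(history[-window:]) / window

def sliding_simulation(demand, holdout=6):
    """Forecast each of the last `holdout` periods using only the data that
    came before it, then return the forecast errors (actual - forecast)."""
    errors = []
    for i in range(len(demand) - holdout, len(demand)):
        forecast = moving_average_forecast(demand[:i])
        errors.append(demand[i] - forecast)
    return errors

# Illustrative monthly demand history
demand = [102, 98, 110, 95, 105, 101, 99, 108, 97, 104, 100, 106]
errors = sliding_simulation(demand)
print([round(e, 1) for e in errors])
```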
Safety stock, a carefully measured buffer of inventory that we hold above our prediction of the most likely demand, is derived from the estimate of forecast error coming out of the “sliding simulation”. This approach strikes an efficient balance between two failure modes: ignoring the threat of the unpredictable, and overcompensating for it at great cost.
In more technical detail: the forecast errors estimated by this sliding simulation process indicate the level of uncertainty. We use these errors to estimate the standard deviation of the forecasts. With regular demand, we can assume the forecasts (which are estimates of future behavior) are best represented by a bell-shaped probability distribution, what statisticians call the “normal distribution”. The center of that distribution is our point forecast. The width of that distribution is the standard deviation of the sliding simulation forecasts from the known actual values, which we obtain directly from our forecast error estimates.
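Continuing the sketch, the standard deviation of those holdout errors supplies the width of the bell curve, and the point forecast supplies its center (the numbers below are illustrative):

```python
import statistics

# Hypothetical errors from a sliding simulation like the one sketched above
errors = [-1.3, 4.7, -6.0, 1.3, -3.7, 2.3]

sigma = statistics.stdev(errors)   # width of the normal distribution
point_forecast = 103.0             # center of the distribution (illustrative)
print(f"forecast = {point_forecast:.1f} units, error sigma = {sigma:.2f}")
```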
Once we know the specific bell-shaped curve associated with the forecast, we can easily estimate the safety stock buffer that is needed. The only input we must supply is the desired “service level”; from it, the safety stock can be determined. (The service level is essentially a measure of how confident we need to be in our inventory stocking levels, with increasing confidence requiring a corresponding expenditure on extra inventory.) Notice that we are assuming the correct distribution to use is the normal distribution. This is correct for most demand series where demand occurs regularly in each period. It fails when demand is sporadic or intermittent.
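As a sketch of that last step, and again assuming normally distributed errors, the safety stock is simply the standard normal quantile for the chosen service level multiplied by the error standard deviation (the sigma value below is illustrative):

```python
from statistics import NormalDist

def safety_stock(sigma, service_level):
    """Safety stock = z * sigma, where z is the standard normal quantile
    corresponding to the desired service level."""
    z = NormalDist().inv_cdf(service_level)
    return z * sigma

sigma = 5.2  # illustrative error standard deviation from a sliding simulation
for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%} service level -> {safety_stock(sigma, level):.1f} units of safety stock")
```

Notice how each step up in service level demands disproportionately more safety stock; that is the expenditure trade-off mentioned above.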
In the next piece in this series, I’ll discuss how Smart Forecasts deals with estimating safety stock in those cases of intermittent demand, when the assumption of normality is incorrect.
Nelson Hartunian, PhD, co-founded Smart Software, formerly served as President, and currently oversees it as Chairman of the Board. He has, at various times, headed software development, sales and customer service.