Six Demand Planning Best Practices You Should Think Twice About

Every field, including forecasting, accumulates folk wisdom that eventually starts masquerading as “best practices.” These practices are often wise, at least in part, but they frequently lack context and may not be appropriate for certain customers, industries, or business situations. There is usually a catch, a “Yes, but.” This note is about six usually true forecasting precepts that nevertheless have their caveats.

 

  1. Organize your company around a one-number forecast. This sounds sensible: it’s good to have a shared vision. But each part of the company will have its own idea about which number is the number. Finance may want quarterly revenue, Marketing may want web site visits, Sales may want churn, Maintenance may want mean time to failure. For that matter, each unit probably has a handful of key metrics. You don’t need a slogan – you need to get your job done.

 

  2. Incorporate business knowledge into a collaborative forecasting process. This is a good general rule, but if your collaborative process is flawed, messing with a statistical forecast via management overrides can decrease accuracy. You don’t need a slogan – you need to measure and compare the accuracy of any and all methods and go with the winners.

 

  3. Forecast using causal modeling. Extrapolative forecasting methods take no account of the underlying forces driving your sales; they just work with the results. Causal modeling takes you deeper into the fundamental drivers and can improve both accuracy and insight. However, causal models (implemented through regression analysis) can be less accurate, especially when they require forecasts of the drivers (“predictions of the predictors”) rather than simply plugging in recorded values of lagged predictor variables. You don’t need a slogan: you need a head-to-head comparison.

 

  4. Forecast demand instead of shipments. Demand is what you really want, but “composing a demand signal” can be tricky: what do you do with internal transfers? One-offs? Lost sales? Furthermore, demand data can be manipulated. For example, if customers intentionally don’t place orders or try to game their orders by ordering too far in advance, then order history won’t be better than shipment history. At least shipment history is accurate: you know what you shipped. Forecasts of shipments are not forecasts of “demand,” but they are a solid starting point.

 

  5. Use Machine Learning methods. First, “machine learning” is an elastic concept that includes an ever-growing set of alternatives. Under the hood, many advertised ML models amount to automatic selection of a best-fit extrapolative forecasting method, which, while good at forecasting normal demand, has been around since the 1980s (Smart Software was the first company to release an auto-pick method for the PC). ML models are also data hogs that require larger data sets than you may have available. Properly choosing and then training an ML model requires a level of statistical expertise that is uncommon in many manufacturing and distribution businesses. You might want to find somebody to hold your hand before you start playing this game.

 

  6. Removing outliers creates better forecasts. While it is true that very unusual spikes or drops in demand will mask underlying demand patterns such as trend or seasonality, it isn’t always true that you should remove the spikes. Often these demand surges reflect the uncertainty that can randomly interfere with your business and thus need to be accounted for. Removing this type of data from your demand forecast model might make the data more predictable on paper but will leave you surprised when it happens again. So, be careful about removing outliers, especially en masse.

The Automatic Forecasting Feature

Automatic forecasting is the most popular and most heavily used feature of SmartForecasts and Smart Demand Planner. Creating Automatic forecasts is easy. But the simplicity of Automatic forecasting masks a powerful interplay among a number of highly effective forecasting methods. In this blog, we discuss some of the theory behind this core feature. We focus on Automatic forecasting in part because of its popularity and in part because many other forecasting methods produce similar outputs. Knowledge of Automatic forecasting immediately carries over to Simple Moving Average, Linear Moving Average, Single Exponential Smoothing, Double Exponential Smoothing, Winters’ Exponential Smoothing, and Promo forecasting.

 

Forecasting tournament

Automatic forecasting works by conducting a tournament among a set of competing methods. Because personal computers and cloud computing are fast, and because we have coded very efficient algorithms into SmartForecasts’ Automatic forecasting engine, it is practical to take a purely empirical approach to deciding which extrapolative forecasting method to use. This means that you can afford to try out a number of approaches and then retain the one that does best at forecasting the particular data series at hand. SmartForecasts fully automates this process for you by trying the different forecasting methods in a simulated forecasting tournament. The winner of the tournament is the method that comes closest to predicting new data values from old. Accuracy is measured by average absolute error (that is, the average error, ignoring any minus signs). The average is computed over a set of forecasts, each using a portion of the data, in a process known as sliding simulation.

 

Sliding simulation

The sliding simulation sweeps repeatedly through ever-longer portions of the historical data, in each case forecasting ahead the desired number of periods in your forecast horizon. Suppose there are 36 historical data values and you need to forecast six periods ahead. Imagine that you want to assess the forecast accuracy of some particular method, say a moving average of four observations, on the data series at hand.

At one point in the sliding simulation, the first 24 points (only) are used to forecast the 25th through 30th historical data values, which we temporarily regard as unknown. We say that points 25-30 are “held out” of the analysis. Computing the absolute values of the differences between the six forecasts and the corresponding actual historical values provides one instance each of a 1-step, 2-step, 3-step, 4-step, 5-step, and 6-step ahead absolute forecast error. Repeating this process using the first 25 points provides more instances of 1-step, 2-step, 3-step ahead errors, and so on. The average over all of the absolute error estimates obtained this way provides a single-number summary of accuracy.
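
As a purely illustrative sketch of this process (not SmartForecasts code), the Python below scores a hypothetical 4-period moving average by sliding simulation: it repeatedly holds out the next six values, forecasts them from the data before the holdout, and averages the absolute errors.

```python
import numpy as np

def moving_average_forecast(history, horizon, window=4):
    """Forecast `horizon` future periods as the mean of the last `window` observations."""
    return np.full(horizon, np.mean(history[-window:]))

def sliding_simulation_mae(series, horizon=6, min_history=24, window=4):
    """Average absolute error over ever-longer portions of the history."""
    abs_errors = []
    for split in range(min_history, len(series) - horizon + 1):
        history, held_out = series[:split], series[split:split + horizon]
        forecasts = moving_average_forecast(history, horizon, window)
        abs_errors.extend(np.abs(held_out - forecasts))   # 1-step through 6-step ahead errors
    return np.mean(abs_errors)

# 36 periods of made-up monthly demand, matching the example in the text
demand = np.array([12, 15, 11, 14, 16, 13, 18, 17, 15, 19, 21, 18,
                   14, 16, 13, 15, 18, 14, 20, 19, 16, 21, 23, 20,
                   15, 17, 14, 16, 19, 15, 22, 21, 18, 23, 25, 22], dtype=float)

print(f"Sliding-simulation MAE: {sliding_simulation_mae(demand):.2f}")
```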

 

Methods used in Automatic forecasting

Normally, there are six extrapolative forecasting methods competing in the Automatic forecasting tournament:

  • Simple moving average
  • Linear moving average
  • Single exponential smoothing
  • Double exponential smoothing
  • Additive version of Winters’ exponential smoothing
  • Multiplicative version of Winters’ exponential smoothing

 

The latter two methods are appropriate for seasonal series; however, they are automatically excluded from the tournament if there are fewer than two full seasonal cycles of data (for example, fewer than 24 periods of monthly data or eight periods of quarterly data).

These six classical, smoothing-based methods have proven themselves to be easy to understand, easy to compute and accurate. You can exclude any of these methods from the tournament if you have a preference for some of the competitors and not others.
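
To make the tournament idea concrete, here is a minimal, hypothetical sketch that pits just two of the six methods against each other (a simple moving average and single exponential smoothing) and picks the one with the lowest sliding-simulation error; it is a toy illustration, not the SmartForecasts engine.

```python
import numpy as np

def simple_moving_average(history, horizon, window=4):
    return np.full(horizon, np.mean(history[-window:]))

def single_exponential_smoothing(history, horizon, alpha=0.3):
    level = history[0]
    for value in history[1:]:
        level = alpha * value + (1 - alpha) * level   # update the smoothed level
    return np.full(horizon, level)                    # carried forward as a flat forecast

def tournament(series, methods, horizon=6, min_history=24):
    """Score each candidate by sliding-simulation mean absolute error; return the winner."""
    scores = {}
    for name, method in methods.items():
        abs_errors = []
        for split in range(min_history, len(series) - horizon + 1):
            forecasts = method(series[:split], horizon)
            abs_errors.extend(np.abs(series[split:split + horizon] - forecasts))
        scores[name] = np.mean(abs_errors)
    return min(scores, key=scores.get), scores

rng = np.random.default_rng(0)
demand = rng.poisson(15, 36).astype(float)            # 36 periods of made-up demand

winner, scores = tournament(demand, {
    "simple moving average": simple_moving_average,
    "single exponential smoothing": single_exponential_smoothing,
})
print(winner, scores)
```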

6 Observations About Successful Demand Forecasting Processes

1. Forecasting is an art that requires a mix of professional judgment and objective statistical analysis. Successful demand forecasts require a baseline prediction leveraging statistical forecasting methods. Once established, the process can focus on how best to adjust statistical forecasts based on your own insights and business knowledge.

2. The forecasting process is usually iterative. You may need to make several refinements of your initial forecast before you are satisfied. It is important to be able to generate and compare alternative forecasts quickly and easily. Tracking accuracy of these forecasts over time, including alternatives that were not used, helps inform and improve the process.

3. The credibility of forecasts depends heavily on graphical comparisons with historical data. A picture is worth a thousand words, so always present forecasts in instantly available graphical displays backed by supporting numerical reports.

4. One of the major technical tasks in forecasting is to match the choice of forecasting technique to the nature of the data. Effective demand forecasting processes employ capabilities that identify the right method to use. Features of a data series like trend, seasonality, or abrupt shifts in level suggest certain techniques instead of others. Automatic method selection, which identifies and applies the appropriate forecasting technique for each series, saves time and helps ensure your baseline forecast is as accurate as possible.

5. Successful demand forecasting processes work in tandem with other business processes.   For example, forecasting can be an essential first step in financial analysis.  In addition, accurate sales and product demand forecasts are fundamental inputs to a manufacturing company’s production planning and inventory control processes.

6. A good planning process recognizes that forecasts are never exactly correct. Because some error creeps into even the best forecasting process, one of the most useful supplements to a forecast is an honest estimate of its margin of error and forecast bias.
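
As a simple, hypothetical illustration of that last point, bias and average error can be estimated by comparing past forecasts with the actuals they targeted; the numbers below are made up.

```python
import numpy as np

# Hypothetical past forecasts and the actual demand observed for the same periods
forecasts = np.array([100, 110, 105, 120, 115, 130], dtype=float)
actuals   = np.array([ 95, 118, 100, 128, 110, 142], dtype=float)

errors = actuals - forecasts
bias = errors.mean()            # positive: forecasts run low on average; negative: they run high
mae = np.abs(errors).mean()     # typical size of the miss, ignoring sign

print(f"Bias: {bias:+.1f} units per period, MAE: {mae:.1f} units")
```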

Don’t Blame Excess Stock on “Bad” Sales / Customer Forecasts

Sales forecasts are often inaccurate simply because the sales team is forced to give a number even though they don’t really know what their customer demand is going to be. Let the sales teams sell. Don’t bother playing the game of feigning acceptance of these forecasts when both sides (sales and supply chain) know they are often nothing more than a WAG. Do this instead:

  • Accept demand variability as a fact of life. Develop a planning process that does a better job of accounting for demand variability.
  • Agree on a level of stockout risk that is acceptable across groups of items.
  • Once the stockout risk is agreed to, use software to generate an accurate estimate of the safety stock needed to counter the demand variability (a simple illustration follows this list).
  • Get buy-in. Customers must be willing to pay a higher price per unit for you to deliver extremely high service levels. Salespeople must accept that certain items are more likely to have backorders if they prioritize inventory investment on other items.
  • Using a consensus safety stock process ensures you are properly buffering and setting the right expectations with sales, customers, finance, and supply chain.

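As a rough illustration of how an agreed stockout risk becomes a safety stock number, the textbook formula below assumes demand during lead time is approximately normal; commercial planning software uses richer models, so treat this as a sketch only.

```python
from statistics import NormalDist

def safety_stock(service_level, demand_std_per_period, lead_time_periods):
    """Textbook safety stock under a normal demand assumption.

    service_level: agreed probability of not stocking out during lead time (e.g., 0.95)
    demand_std_per_period: standard deviation of demand per period
    lead_time_periods: replenishment lead time, in the same periods
    """
    z = NormalDist().inv_cdf(service_level)           # z-score matching the agreed stockout risk
    return z * demand_std_per_period * (lead_time_periods ** 0.5)

# Example: 95% service level, demand std of 20 units per week, 4-week lead time
print(round(safety_stock(0.95, 20, 4)))               # about 66 units
```
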
 

When you do this, you free all parties from having to play the prediction game they were not equipped to play in the first place. You’ll get better results, such as higher service levels with lower inventory costs. And with much less finger-pointing.

What makes a probabilistic forecast?

What’s all the hoopla around the term “probabilistic forecasting?” Is it just a more recent marketing term some software vendors and consultants have coined to feign innovation? Is there any real tangible difference compared to predecessor “best fit” techniques?  Aren’t all forecasts probabilistic anyway?

To answer this question, it is helpful to think about what the forecast is really telling you in terms of probabilities. A “good” forecast should be unbiased and therefore have a 50/50 probability of being higher or lower than the actual. A “bad” forecast builds in subjective buffers (or artificially depresses the prediction) and is therefore biased high or low. Consider a salesperson who intentionally reduces their forecast by not reporting sales they expect to close, in order to be “conservative.” Their forecasts will have negative bias, as actuals will nearly always be higher than what they predicted. On the other hand, consider a customer that provides an inflated forecast to their manufacturer. Worried about stockouts, they overestimate demand to ensure their supply. Their forecast will have a positive bias, as actuals will nearly always be lower than what they predicted.

These types of one-number forecasts are problematic. We refer to these predictions as “point forecasts” since they represent one point (or a series of points over time) on a plot of what might happen in the future. They don’t provide a complete picture: making effective business decisions, such as determining how much inventory to stock or how many employees should be available to support demand, requires detailed information on how much lower or higher the actual is likely to be. In other words, you need the probabilities for each possible outcome that might occur. So, by itself, the point forecast isn’t a probabilistic one.

To get a probabilistic forecast, you need to know the distribution of possible demands around that forecast.  Once you compute this, the forecast becomes “probabilistic.”  How forecasting systems and practitioners such as demand planners, inventory analysts, material managers, and CFOs determine these probabilities is the heart of the question: “what makes a forecast probabilistic?”     

Normal Distributions
Most forecasts, and the systems/software that produce them, start with a prediction of demand. Then they figure out the range of possible demands around that forecast by making incorrect theoretical assumptions about the distribution. If you’ve ever used a “confidence interval” in your forecasting software, it is based on a probability distribution around the forecast. The way this range of demand is determined is to assume a particular type of distribution, most often a bell-shaped curve, otherwise known as a normal distribution. When demand is intermittent, some inventory optimization and demand forecasting systems may instead assume the demand follows a Poisson distribution.

After creating the forecast, the assumed distribution is slapped around the demand forecast, and you then have your estimate of probabilities for every possible demand – i.e., a “probabilistic forecast.” These estimates of demand and associated probabilities can then be used to determine extreme values, or anything in between, if desired. The extreme values at the upper percentiles of the distribution (i.e., 92%, 95%, 99%, etc.) are most often used as inputs to inventory control models. For example, reorder points for critical spare parts in an electrical utility might be planned to a 99.5% service level or even higher, while a non-critical service part might be planned at an 85% or 90% service level.
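
Under a normal assumption, that upper-percentile demand is just a quantile of the fitted bell curve. A hypothetical sketch of the arithmetic, with made-up numbers:

```python
from statistics import NormalDist

# Hypothetical forecast of demand over the replenishment lead time, with an assumed (normal) spread
mean_lead_time_demand = 150.0
std_lead_time_demand = 30.0
service_level = 0.995                        # e.g., a critical spare part

# Reorder point = the 99.5th percentile of the assumed normal distribution
reorder_point = NormalDist(mean_lead_time_demand, std_lead_time_demand).inv_cdf(service_level)
print(round(reorder_point))                  # about 227 units under the normal assumption
```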

The problem with making assumptions about the distribution is that you’ll get these probabilities wrong. For example, if the demand isn’t normally distributed but you force a bell-shaped/normal curve onto the forecast, then the probabilities will be incorrect. Specifically, you might want to know the level of inventory needed to achieve a 99% probability of not running out of stock, and the normal distribution will tell you to stock 200 units. But when compared to the actual demand, you come to find out that 200 units only covered demand entirely in 40 of 50 observations. So, instead of getting a 99% service level, you only achieved an 80% service level! This is a gigantic miss resulting from trying to fit a square peg into a round hole, and it would have led you to take an incorrect inventory reduction.
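
This kind of mismatch is easy to check directly: compute the stock level the normal assumption recommends, then count how many actual demand observations that level would have covered. A hypothetical sketch, with skewed demand generated purely for illustration:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(42)
# Skewed (lognormal) lead-time demand: most periods are modest, a few are large spikes
actual_demand = rng.lognormal(mean=4.5, sigma=0.8, size=500)

# Stock level recommended by a normal assumption fitted to the same history
target = 0.99
normal_stock = NormalDist(actual_demand.mean(), actual_demand.std()).inv_cdf(target)

achieved = np.mean(actual_demand <= normal_stock)   # fraction of periods fully covered by that stock
print(f"Target service level: {target:.0%}, achieved: {achieved:.0%}")
```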

Empirically Estimated Distributions are Smart
To produce a smart (read: accurate) probabilistic forecast, you need to first estimate the distribution of demand empirically, without any naïve assumptions about the shape of the distribution. Smart Software does this by running tens of thousands of simulated demand and lead time scenarios. Our solution leverages patented techniques that incorporate Monte Carlo simulation, Statistical Bootstrapping, and other methods. The scenarios are designed to simulate the real-life uncertainty and randomness of both demand and lead times. Actual historical observations are utilized as the primary inputs, but the solution gives you the option of simulating from non-observed values as well. For example, just because 100 units was the peak historical demand, that doesn’t mean you are guaranteed to peak out at 100 in the future. After the scenarios are done, you will know the exact probability for each outcome. The “point” forecast then becomes the center of that distribution, and each future period is expressed in terms of the probability distribution associated with that period.
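
A drastically simplified sketch of the empirical idea (not Smart Software’s patented method): resample observed daily demand over a sampled lead time many times, then read probabilities directly off the simulated outcomes. All numbers below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Observed history (hypothetical): daily demand and replenishment lead times in days
daily_demand_history = np.array([0, 3, 1, 0, 5, 2, 0, 0, 4, 1, 2, 0, 6, 1, 0, 3, 2, 0, 1, 4])
lead_time_history_days = np.array([5, 7, 6, 8, 6, 7])

n_scenarios = 10_000
lead_time_demands = np.empty(n_scenarios)
for i in range(n_scenarios):
    lt = rng.choice(lead_time_history_days)                          # sample a plausible lead time
    days = rng.choice(daily_demand_history, size=lt, replace=True)   # bootstrap daily demands over it
    lead_time_demands[i] = days.sum()                                # one simulated lead-time demand

# The empirical distribution is simply the collection of simulated outcomes
print("Median (a point forecast):", np.median(lead_time_demands))
print("99th percentile (an input to a reorder point):", np.percentile(lead_time_demands, 99))
```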

Leaders in Probabilistic Forecasting
Smart Software, Inc. was the first company to introduce statistical bootstrapping as part of a commercially available demand forecasting software system, twenty years ago. We were awarded a US patent for it at the time and were named a finalist in the APICS Corporate Awards of Excellence for Technological Innovation. Our NSF-sponsored research that led to this and other discoveries was instrumental in advancing forecasting and inventory optimization. We are committed to ongoing innovation, and you can find further information about our most recent patent here.