Rethinking forecast accuracy: A shift from accuracy to error metrics

Measuring the accuracy of forecasts is an undeniably important part of the demand planning process. A forecasting scorecard can be built from one of two contrasting viewpoints for computing metrics. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.

Accuracy is represented as a percentage between zero and 100, while error percentages start at zero but have no upper limit. Reports of MAPE (mean absolute percent error) or other error metrics can be titled “forecast accuracy” reports, which blurs the distinction. So you may want to know how to convert from the error viewpoint to the accuracy viewpoint that your company espouses. This blog describes how, with some examples.

Accuracy metrics are computed so that when the forecast equals the actual, accuracy is 100%, and when the absolute error equals or exceeds the actual (for example, the forecast is double the actual, or the forecast is zero), accuracy bottoms out at 0%. Reports that compare the forecast to the actual often include the following:

  • The Actual
  • The Forecast
  • Unit Error = Forecast – Actual
  • Absolute Error = Absolute Value of Unit Error
  • Absolute % Error = Abs Error / Actual, as a %
  • Accuracy % = 100% – Absolute % Error

Consider a couple of examples that illustrate the difference between the two approaches. Say the actual is 8 and the forecast is 10.

Unit Error is 10 – 8 = 2

Absolute % Error = 2 / 8, as a % = 0.25 * 100 = 25%

Accuracy = 100% – 25% = 75%.

Now let’s say the actual is 8 and the forecast is 24.

Unit Error is 24 – 8 = 16

Absolute % Error = 16 / 8 as a % = 2 * 100 = 200%

Accuracy = 100% – 200% = –100%, which is negative and so is set to 0%.

In the first example, accuracy measurements provide the same information as error measurements since the forecast and actual are already relatively close. But once the error is as large as the actual itself, accuracy measurements bottom out at zero. That does correctly indicate the forecast was not at all accurate. But the second example is still more accurate than a third, where the actual is 8 and the forecast is 200. That’s a distinction a 0-to-100% accuracy range doesn’t register. In this final example:

Unit Error is 200 – 8 = 192

Absolute % Error = 192 / 8, as a % = 24 * 100 = 2,400%

Accuracy = 100% – 2,400% = –2,300%, which is negative and so is set to 0%.
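These calculations are easy to reproduce. Below is a minimal Python sketch (the function name is ours, purely for illustration) that computes both viewpoints for the three examples above, flooring accuracy at zero the way a typical accuracy report would:

```python
def error_and_accuracy(actual, forecast):
    """Return (absolute % error, floored accuracy %) for one actual/forecast pair."""
    abs_pct_error = abs(forecast - actual) / actual * 100   # error viewpoint: unbounded above
    accuracy = max(0.0, 100.0 - abs_pct_error)              # accuracy viewpoint: clipped at 0%
    return abs_pct_error, accuracy

for actual, forecast in [(8, 10), (8, 24), (8, 200)]:
    err, acc = error_and_accuracy(actual, forecast)
    print(f"actual={actual}, forecast={forecast}: abs % error={err:.0f}%, accuracy={acc:.0f}%")
```

The error column still separates the second and third cases (200% versus 2,400%), while the accuracy column reports 0% for both.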

Error metrics continue to provide information on how far the forecast is from the actual and arguably better represent forecast accuracy.

We encourage adopting the error viewpoint. You simply hope for a small error percentage to indicate the forecast was not far from the actual, instead of hoping for a large accuracy percentage to indicate the forecast was close to the actual.  This shift in mindset offers the same insights while eliminating distortions.

The Automatic Forecasting Feature

Automatic forecasting is the most popular and most used feature of SmartForecasts and Smart Demand Planner. Creating Automatic forecasts is easy, but that simplicity masks a powerful interaction among a number of highly effective forecasting methods. In this blog, we discuss some of the theory behind this core feature. We focus on Automatic forecasting in part because of its popularity and in part because many other forecasting methods produce similar outputs. Knowledge of Automatic forecasting immediately carries over to Simple Moving Average, Linear Moving Average, Single Exponential Smoothing, Double Exponential Smoothing, Winters’ Exponential Smoothing, and Promo forecasting.

Forecasting tournament

Automatic forecasting works by conducting a tournament among a set of competing methods. Because personal computers and cloud computing are fast, and because we have coded very efficient algorithms into SmartForecasts’ Automatic forecasting engine, it is practical to take a purely empirical approach to deciding which extrapolative forecasting method to use. This means that you can afford to try out a number of approaches and then retain the one that does best at forecasting the particular data series at hand. SmartForecasts fully automates this process for you by trying the different forecasting methods in a simulated forecasting tournament. The winner of the tournament is the method that comes closest to predicting new data values from old. Accuracy is measured by average absolute error (that is, the average error, ignoring any minus signs). The average is computed over a set of forecasts, each using a portion of the data, in a process known as sliding simulation.

Sliding simulation

The sliding simulation sweeps repeatedly through ever-longer portions of the historical data, in each case forecasting ahead the desired number of periods in your forecast horizon. Suppose there are 36 historical data values and you need to forecast six periods ahead. Imagine that you want to assess the forecast accuracy of some particular method, say a moving average of four observations, on the data series at hand.

At one point in the sliding simulation, the first 24 points (only) are used to forecast the 25th through 30th historical data values, which we temporarily regard as unknown. We say that points 25-30 are “held out” of the analysis. Computing the absolute values of the differences between the six forecasts and the corresponding actual historical values provides one instance each of a 1-step, 2-step, 3-step, 4-step, 5-step, and 6-step ahead absolute forecast error. Repeating this process using the first 25 points provides more instances of 1-step, 2-step, 3-step ahead errors, and so on. The average over all of the absolute error estimates obtained this way provides a single-number summary of accuracy.
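Here is a minimal sketch of that procedure in Python, using the 36-point example and a four-period simple moving average as the candidate method. The helper names are ours for illustration, not SmartForecasts internals:

```python
def moving_average_forecast(history, horizon, window=4):
    """Forecast `horizon` periods ahead as the average of the last `window` observations."""
    level = sum(history[-window:]) / window
    return [level] * horizon

def sliding_simulation_error(data, horizon=6, first_fit=24, window=4):
    """Average absolute error over all held-out forecasts in the sliding simulation."""
    abs_errors = []
    for split in range(first_fit, len(data) - horizon + 1):   # 24, 25, ..., 30 for 36 points
        fit, held_out = data[:split], data[split:split + horizon]
        forecasts = moving_average_forecast(fit, horizon, window)
        abs_errors.extend(abs(f - a) for f, a in zip(forecasts, held_out))
    return sum(abs_errors) / len(abs_errors)

demand = [20 + (t % 12) for t in range(36)]   # toy 36-period history
print(sliding_simulation_error(demand))
```

The single number returned is the method’s score in the tournament; the method with the smallest score wins.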

Methods used in Automatic forecasting

Normally, there are six extrapolative forecasting methods competing in the Automatic forecasting tournament:

  • Simple moving average
  • Linear moving average
  • Single exponential smoothing
  • Double exponential smoothing
  • Additive version of Winters’ exponential smoothing
  • Multiplicative version of Winters’ exponential smoothing

The latter two methods are appropriate for seasonal series; however, they are automatically excluded from the tournament if there are fewer than two full seasonal cycles of data (for example, fewer than 24 periods of monthly data or eight periods of quarterly data).

These six classical, smoothing-based methods have proven themselves to be easy to understand, easy to compute and accurate. You can exclude any of these methods from the tournament if you have a preference for some of the competitors and not others.
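To make the tournament concrete, the sketch below pits toy implementations of three of the nonseasonal methods against each other and keeps the one with the smallest average absolute error from a sliding simulation like the one described above. These are illustrative implementations, not SmartForecasts code, and the smoothing constants are arbitrary:

```python
def simple_moving_average(history, horizon, window=4):
    level = sum(history[-window:]) / window
    return [level] * horizon

def single_exponential_smoothing(history, horizon, alpha=0.3):
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

def double_exponential_smoothing(history, horizon, alpha=0.3, beta=0.1):
    level, trend = history[0], history[1] - history[0]
    for x in history[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

def average_absolute_error(method, data, horizon=6, first_fit=24):
    """Sliding-simulation score for one method: smaller is better."""
    errors = []
    for split in range(first_fit, len(data) - horizon + 1):
        forecasts = method(data[:split], horizon)
        errors.extend(abs(f - a) for f, a in zip(forecasts, data[split:split + horizon]))
    return sum(errors) / len(errors)

def tournament(data, methods, horizon=6):
    """Return the competing method with the smallest average absolute error."""
    return min(methods, key=lambda m: average_absolute_error(m, data, horizon))

demand = [50 + 2 * t for t in range(36)]   # trending toy series
winner = tournament(demand, [simple_moving_average,
                             single_exponential_smoothing,
                             double_exponential_smoothing])
print(winner.__name__)   # on a trending series, double exponential smoothing should win
```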

6 Observations About Successful Demand Forecasting Processes

1. Forecasting is an art that requires a mix of professional judgment and objective statistical analysis. Successful demand forecasts require a baseline prediction leveraging statistical forecasting methods. Once established, the process can focus on how best to adjust statistical forecasts based on your own insights and business knowledge.

2. The forecasting process is usually iterative. You may need to make several refinements of your initial forecast before you are satisfied. It is important to be able to generate and compare alternative forecasts quickly and easily. Tracking accuracy of these forecasts over time, including alternatives that were not used, helps inform and improve the process.

3. The credibility of forecasts depends heavily on graphical comparisons with historical data. A picture is worth a thousand words, so always present forecasts in instantly available graphical displays with supporting numerical reports.

4. One of the major technical tasks in forecasting is to match the choice of forecasting technique to the nature of the data. Effective demand forecasting processes employ capabilities that identify the right method to use. Features of a data series like trend, seasonality, or abrupt shifts in level suggest certain techniques instead of others. Automatic method selection, which identifies and applies the appropriate forecasting technique, saves time and ensures your baseline forecast is as accurate as possible.

5. Successful demand forecasting processes work in tandem with other business processes. For example, forecasting can be an essential first step in financial analysis. In addition, accurate sales and product demand forecasts are fundamental inputs to a manufacturing company’s production planning and inventory control processes.

6. A good planning process recognizes that forecasts are never exactly correct. Because some error creeps into even the best forecasting process, one of the most useful supplements to a forecast is an honest estimate of its margin of error and forecast bias.
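As a hedged illustration of that last point, the sketch below computes forecast bias (the average signed error) and an approximate 95% margin of error from a history of forecast errors. The 1.25 factor that converts mean absolute error into a rough standard deviation assumes roughly normal errors and is a common rule of thumb, not a universal constant; the numbers are made up.

```python
actuals   = [100, 110, 95, 120, 105, 130]    # hypothetical observed demand
forecasts = [105, 108, 100, 115, 112, 125]   # hypothetical forecasts for the same periods

errors = [f - a for f, a in zip(forecasts, actuals)]   # signed errors: + means over-forecast

bias = sum(errors) / len(errors)                 # persistent over- or under-forecasting
mae = sum(abs(e) for e in errors) / len(errors)  # mean absolute error
sigma = 1.25 * mae                               # rough standard deviation of the errors
margin_95 = 1.96 * sigma                         # approximate 95% margin of error

print(f"bias={bias:+.1f} units, margin of error=±{margin_95:.1f} units")
```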

Don’t Blame Excess Stock on “Bad” Sales / Customer Forecasts

Sales forecasts are often inaccurate simply because the sales team is forced to give a number even though they don’t really know what their customer demand is going to be. Let the sales teams sell. Don’t bother playing the game of feigning acceptance of these forecasts when both sides (sales and supply chain) know they are often nothing more than a WAG. Do this instead:

  • Accept demand variability as a fact of life. Develop a planning process that does a better job of accounting for demand variability.
  • Agree on a level of stockout risk that is acceptable across groups of items.
  • Once the stockout risk is agreed to, use software to generate an accurate estimate of the safety stock needed to counter the demand variability (see the sketch after this list).
  • Get buy-in. Customers must be willing to pay a higher price per unit for you to deliver extremely high service levels.  Salespeople must accept that certain items are more likely to have backorders if they prioritize inventory investment on other items.
  • Using a consensus safety stock process ensures you are properly buffering and setting the right expectations with sales, customers, finance, and supply chain.
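As a hedged sketch of the third bullet, here is one textbook way to translate an agreed service level into a safety stock, assuming demand over the lead time is roughly normal. This illustrates the idea only; it is not a description of how any particular software computes it, and the inputs are made up:

```python
from statistics import NormalDist

def safety_stock(service_level, demand_std_per_period, lead_time_periods):
    """Textbook formula: z-score for the target in-stock probability times
    the standard deviation of demand over the lead time."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_std_per_period * lead_time_periods ** 0.5

# Example: 95% service level, demand std dev of 40 units per period, 4-period lead time
print(round(safety_stock(0.95, 40, 4)))   # roughly 132 units
```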

When you do this, you free all parties from having to play the prediction game they were not equipped to play in the first place. You’ll get better results, such as higher service levels with lower inventory costs. And with much less finger-pointing.

A Practical Guide to Growing a Professional Forecasting Process

Many companies looking to improve their forecasting process don’t know where to start. It can be confusing to contend with learning new statistical methods, making sure data is properly structured and updated, agreeing on who “owns” the forecast, defining what ownership means, and measuring accuracy. Having seen this over forty-plus years of practice, we wrote this blog to outline the core focus and to encourage you to keep it simple early on.

1. Objectivity. First, understand and communicate that the Demand Planning and Forecasting process is an exercise in objectivity. The focus is on getting inputs from various sources (stakeholders, customers, functional managers, databases, suppliers, etc.) and deciding whether those inputs add value. For example, if you override a statistical forecast and add 20% to the projection, you should not just assume that you automatically got it right. Instead, be objective and check whether that override increased or decreased forecast accuracy. If you find that your overrides made things worse, you’ve gained something: this informs the process, and you know to scrutinize override decisions more closely in the future.

2.  Teamwork. Recognize that forecasting and demand planning are team sports. Agree on who will captain the team. The captain is responsible for creating the baseline statistical forecasts and supervising the demand planning process. But results depend on everyone on the team making positive contributions, providing data, suggesting alternative methodologies, questioning assumptions, and executing recommended actions. The final results are owned by the company and every single stakeholder.

3. Measurement. Don’t fixate on industry forecast accuracy benchmarks. Every SKU has its own level of “forecastability”, and you may be managing any number of difficult items. Instead, create your own benchmarks based on a sequence of increasingly advanced forecasting methods. Advanced statistical forecasts may seem dauntingly complex at first, so start simple with a basic method, such as forecasting the historical average demand. Then measure how close that simple forecast comes to the actual observed demand. Work up from there to techniques that deal with complications like trend and seasonality. Measure progress using accuracy metrics calculated by your software, such as the mean absolute percentage error (MAPE). This will allow your company to get a little bit better each forecast cycle.
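As a hedged sketch of that benchmarking ladder, the snippet below scores the simplest possible method, forecasting the historical average, with MAPE. The numbers are made up; the point is that each later rung (trend, seasonality, and so on) should beat the MAPE of the rung below it:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

history = [120, 135, 128, 140, 150, 145]   # demand used to build the forecast
actuals = [155, 148, 160]                  # what actually happened next

average_forecast = [sum(history) / len(history)] * len(actuals)   # rung one: historical average
print(f"historical-average MAPE: {mape(actuals, average_forecast):.1f}%")
```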

4. Tempo. Then focus efforts on making forecasting a standalone process that isn’t combined with the complex process of inventory optimization. Inventory management is built on a foundation of sound demand forecasting, but it is focused on other topics: what to purchase, when to purchase, minimum order quantities, safety stocks, inventory levels, supplier lead times, etc. Leave inventory management for later. First build up “forecasting muscle” by creating, reviewing, and evolving the forecasting process so that it has a regular cadence. When your process has sufficiently matured, catch up with the increasing speed of business by increasing the tempo of your forecasting process to at least a monthly cadence.

Remarks

Revising a company’s forecasting process can be a major step. Sometimes it happens when there is executive turnover, sometimes when there is a new ERP system, sometimes when there is new forecasting software. Whatever the precipitating event, this change is an opportunity to rethink and refine whatever process you had before. But trying to eat the whole elephant in one go is a mistake. In this blog, we’ve outlined some discrete steps you can take to make for a successful evolution to a better forecasting process.