Rethinking forecast accuracy: A shift from accuracy to error metrics

Measuring the accuracy of forecasts is an undeniably important part of the demand planning process, and a forecasting scorecard can be built from either of two contrasting viewpoints. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.

Accuracy is represented as a percentage between zero and 100, while error percentages start at zero but have no upper limit. Reports of MAPE (mean absolute percent error) or other error metrics are sometimes titled “forecast accuracy” reports, which blurs the distinction. So you may want to know how to convert from the error viewpoint to the accuracy viewpoint that your company espouses. This blog describes how, with some examples.

Accuracy metrics are computed such that when the actual equals the forecast, accuracy is 100%, and when the absolute error equals or exceeds the actual itself (for example, when the forecast is at least double the actual), accuracy bottoms out at 0%. Reports that compare the forecast to the actual often include the following:

  • The Actual
  • The Forecast
  • Unit Error = Forecast – Actual
  • Absolute Error = Absolute Value of Unit Error
  • Absolute % Error = Absolute Error / Actual, expressed as a %
  • Accuracy % = 100% – Absolute % Error (negative results are set to 0%)

Consider a couple of examples that illustrate the difference between the approaches. Say the actual is 8 and the forecast is 10.

Unit Error is 10 – 8 = 2

Absolute % Error = 2 / 8, as a % = 0.25 * 100 = 25%

Accuracy = 100% – 25% = 75%.

Now let’s say the actual is 8 and the forecast is 24.

Unit Error is 24 – 8 = 16

Absolute % Error = 16 / 8, as a % = 2 * 100 = 200%

Accuracy = 100% – 200% = –100%, which is negative and so is set to 0%.

In the first example, the accuracy measurement conveys the same information as the error measurement, since the forecast and actual are relatively close. But once the forecast is at least double the actual, accuracy measurements bottom out at zero. Zero does correctly indicate that the forecast was not at all accurate, but it hides real differences: the second example is more accurate than a third, in which the actual is 8 and the forecast is 200, a distinction the 0-to-100% accuracy range cannot register. In this final example:

Unit Error is 200 – 8 = 192

Absolute % Error = 192 / 8, as a % = 24 * 100 = 2,400%

Accuracy = 100% – 2,400% = –2,300%, which is negative and so is set to 0%.
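
The three examples can be reproduced in a few lines of Python (a minimal sketch of the report columns defined above; the function name is illustrative):

```python
# Sketch of the report columns defined above: error grows without limit,
# while accuracy is floored at 0%.
def forecast_report(actual, forecast):
    unit_error = forecast - actual
    abs_pct_error = abs(unit_error) / actual * 100   # no upper limit
    accuracy = max(0.0, 100.0 - abs_pct_error)       # floored at 0%
    return unit_error, abs_pct_error, accuracy

for actual, forecast in [(8, 10), (8, 24), (8, 200)]:
    err, ape, acc = forecast_report(actual, forecast)
    print(f"actual={actual}, forecast={forecast}: "
          f"abs % error = {ape:,.0f}%, accuracy = {acc:.0f}%")
# abs % error keeps growing (25%, 200%, 2,400%);
# accuracy is stuck at 0% for both of the last two forecasts.
```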

Error metrics continue to provide information on how far the forecast is from the actual and arguably better represent forecast accuracy.

We encourage adopting the error viewpoint. Instead of hoping for a large accuracy percentage to indicate that the forecast was close to the actual, you simply hope for a small error percentage to indicate that it was not far off. This shift in mindset offers the same insights while eliminating distortions.

Don’t Blame Excess Stock on “Bad” Sales / Customer Forecasts

Sales forecasts are often inaccurate simply because the sales team is forced to give a number even though they don’t really know what their customers’ demand is going to be. Let the sales team sell. Don’t play the game of feigning acceptance of these forecasts when both sides (sales and supply chain) know they are often nothing more than a WAG. Do this instead:

  • Accept demand variability as a fact of life. Develop a planning process that does a better job of accounting for demand variability.
  • Agree on a level of stockout risk that is acceptable across groups of items.
  • Once the stockout risk is agreed to, use software to generate an accurate estimate of the safety stock needed to counter the demand variability (see the sketch after this list).
  • Get buy-in. Customers must be willing to pay a higher price per unit for you to deliver extremely high service levels.  Salespeople must accept that certain items are more likely to have backorders if they prioritize inventory investment on other items.
  • Using a consensus safety stock process ensures you are properly buffering and setting the right expectations with sales, customers, finance, and supply chain.
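
For the safety stock step, the textbook normal-approximation formula gives the flavor of what such software computes (a minimal sketch with invented numbers; real tools, and especially intermittent-demand items, call for more sophisticated models):

```python
from math import sqrt
from statistics import NormalDist

def normal_safety_stock(service_level, demand_std, lead_time_periods):
    """Textbook normal approximation: safety stock = z * sigma * sqrt(lead time),
    where z is the z-score for the agreed service level (1 - stockout risk)."""
    z = NormalDist().inv_cdf(service_level)
    return z * demand_std * sqrt(lead_time_periods)

# Example: 95% service level, daily demand std dev of 12 units, 9-day lead time
print(round(normal_safety_stock(0.95, 12.0, 9)))  # about 59 units
```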

When you do this, you free all parties from having to play a prediction game they were not equipped to play in the first place. You’ll get better results, such as higher service levels with lower inventory costs, and much less finger-pointing.

A Practical Guide to Growing a Professional Forecasting Process

Many companies looking to improve their forecasting process don’t know where to start. It can be confusing to contend with learning new statistical methods, making sure data is properly structured and updated, agreeing on who “owns” the forecast, defining what ownership means, and measuring accuracy. Having seen this over forty-plus years of practice, we wrote this blog to outline the core focus and to encourage you to keep it simple early on.

1. Objectivity. First, understand and communicate that the Demand Planning and Forecasting process is an exercise in objectivity. The focus is on getting inputs from various sources (stakeholders, customers, functional managers, databases, suppliers, etc.) and deciding whether those inputs add value. For example, if you override a statistical forecast and add 20% to the projection, you should not just assume that you automatically got it right. Instead, be objective and check whether that override increased or decreased forecast accuracy. If you find that your overrides made things worse, you’ve gained something: this informs the process, and you know to scrutinize override decisions more closely in the future.
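
For example, a forecast-value-added check like the following makes the override test objective (a sketch with invented numbers; the 20% override mirrors the example above):

```python
# Did the judgmental override beat the baseline statistical forecast?
def mape(actuals, forecasts):
    """Mean absolute percent error, in percent."""
    return 100 * sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals   = [100, 110, 105, 120]                    # observed demand (illustrative)
stat_fcst = [102, 108, 110, 115]                    # baseline statistical forecast
override  = [round(1.20 * f) for f in stat_fcst]    # the +20% judgmental override

print(f"statistical MAPE: {mape(actuals, stat_fcst):.1f}%")   # ~3.2%
print(f"override MAPE:    {mape(actuals, override):.1f}%")    # ~20.2%
# Here the override made accuracy worse, so scrutinize such overrides next cycle.
```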

2.  Teamwork. Recognize that forecasting and demand planning are team sports. Agree on who will captain the team. The captain is responsible for creating the baseline statistical forecasts and supervising the demand planning process. But results depend on everyone on the team making positive contributions, providing data, suggesting alternative methodologies, questioning assumptions, and executing recommended actions. The final results are owned by the company and every single stakeholder.

3. Measurement. Don’t fixate on industry forecast accuracy benchmarks. Every SKU has its own level of “forecastability”, and you may be managing any number of difficult items. Instead, create your own benchmarks based on a sequence of increasingly advanced forecasting methods. Advanced statistical forecasts may seem dauntingly complex at first, so start simple with a basic method, such as forecasting the historical average demand. Then measure how close that simple forecast comes to the actual observed demand. Work up from there to techniques that deal with complications like trend and seasonality. Measure progress using accuracy metrics calculated by your software, such as the mean absolute percentage error (MAPE). This will allow your company to get a little bit better each forecast cycle.
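
Continuing the sketch above, a benchmark ladder can start with the historical average and step up to a moving average, each scored against held-out actuals (invented numbers again):

```python
def mape(actuals, forecasts):   # same helper as in the sketch above
    return 100 * sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

history = [100, 110, 105, 120, 115, 125, 118, 130]   # invented monthly demand
held_out = history[-3:]                              # last 3 months kept as "actuals"

# Rung 1: forecast every held-out month with the average of the earlier history.
avg = sum(history[:-3]) / len(history[:-3])
avg_fcst = [avg] * 3

# Rung 2: a 3-month moving average, rolled forward one month at a time.
ma_fcst = [sum(history[t - 3:t]) / 3 for t in range(5, 8)]

print(f"historical-average MAPE: {mape(held_out, avg_fcst):.1f}%")  # ~11.4%
print(f"moving-average MAPE:     {mape(held_out, ma_fcst):.1f}%")   # ~6.4%
```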

4. Tempo. Then focus efforts on making forecasting a standalone process that isn’t combined with the complex process of inventory optimization. Inventory management is built on a foundation of sound demand forecasting, but it is focused on other topics: what to purchase, when to purchase, minimum order quantities, safety stocks, inventory levels, supplier lead times, etc. Leave inventory management until later. First build up “forecasting muscle” by creating, reviewing, and evolving the forecasting process to have a regular cadence. When your process has sufficiently matured, catch up with the increasing speed of business by increasing the tempo of your forecasting process to at least a monthly cadence.

Remarks

Revising a company’s forecasting process can be a major step. Sometimes it happens when there is executive turnover, sometimes when there is a new ERP system, sometimes when there is new forecasting software. Whatever the precipitating event, this change is an opportunity to rethink and refine whatever process you had before. But trying to eat the whole elephant in one go is a mistake. In this blog, we’ve outlined some discrete steps you can take to make for a successful evolution to a better forecasting process.

Types of forecasting problems we help solve

Here are examples of forecasting problems that SmartForecasts can solve, along with the kinds of business data representative of each.

Forecasting an item based on its pattern

Given the following six quarterly sales figures, what sales can you expect for the third and fourth quarters of 2023?

[Figure: Sales by Quarter]

SmartForecasts gives you many ways to approach this problem. You can make your own statistical forecasts using any of six different exponential smoothing and moving average methods. Or, like most nontechnical forecasters, you can use the time-saving Automatic command, which has been programmed to automatically select and use the most accurate method for your data. Finally, to incorporate your business judgment into the forecasting process, you can graphically adjust any statistical forecast result using SmartForecasts’ “eyeball” adjustment capabilities.

Forecasting an item based on its relationship to other variables

Given the following historical relationship between unit sales and the number of sales representatives, what sales levels can you expect when the planned increase in sales staff takes place over the final two quarters of 2023?

[Figure: Sales and Sales Representatives by Quarter]

You can answer a question like this using SmartForecasts’ powerful Regression command, designed specifically to facilitate forecasting applications that require regression analysis solutions. Regression models with an essentially unlimited number of independent/predictor variables are possible, although most useful regression models use only a handful of predictors.
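
As a generic illustration of the underlying technique (not the Regression command itself, and with invented numbers), an ordinary least-squares fit of sales on sales-rep headcount looks like this:

```python
import numpy as np

# Quarterly history: sales-rep headcount (the predictor) and unit sales.
reps  = np.array([10, 12, 12, 14, 15, 16], dtype=float)   # invented data
sales = np.array([480, 560, 575, 650, 700, 745], dtype=float)

# Fit sales = a + b * reps by ordinary least squares.
b, a = np.polyfit(reps, sales, deg=1)

# Predict sales for the planned staffing in the final two quarters.
planned_reps = np.array([18.0, 20.0])
print(a + b * planned_reps)   # expected sales at 18 and 20 reps
```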

Simultaneously forecasting a number of product items and their total

Given the following total sales for all dress shirts and the distribution of sales by color, what will individual and total sales be over the next six months?

[Figure: Monthly Dress Shirt Sales by Color]

SmartForecasts’ unique Group Forecasting feature automatically and simultaneously forecasts closely related time series, such as these items in the same product group. This saves considerable time and provides forecast results not only for the individual items but also for their total. “Eyeball” adjustments at both the item and group levels are easy to make. You can quickly create forecasts for product groups with hundreds or even thousands of items.

Forecasting thousands of items automatically

Given the following record of product demand at the SKU level, what can you expect demand to be over the next six months for each of the 5,000 SKUs?

[Figure: Monthly Product Demand by SKU (Stock Keeping Unit)]

In just a few minutes, SmartForecasts’ powerful Automatic Selection can take a forecasting job of this size, read the product demand data, automatically create statistical forecasts for each SKU, and save the results. The results are then ready for export to your ERP system via any one of our API-based connectors or via file export. Once set up, forecasts will be produced automatically each planning cycle without user intervention.

Forecasting demand that is most often zero

A distinct and especially challenging type of data to forecast is intermittent demand, which is most often zero but jumps up to random nonzero values at random times. This pattern is typical of demand for slow-moving items, such as service parts or big-ticket capital goods.

For example, consider the following sample of demand for aircraft service parts. Note the preponderance of zero values with nonzero values mixed in, often in bursts.

[Figure: Monthly demand history for aircraft service parts]

SmartForecasts has a unique method designed especially for this type of data: the Intermittent Demand forecasting feature. Since intermittent demand arises most often in the context of inventory control, this feature focuses on forecasting the range of likely values for the total demand over a lead time, e.g., cumulative demand over the period Jun-23 to Aug-23 in the example above.
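
The patented method itself is not reproduced here, but a simple bootstrap sketch conveys the core idea of forecasting a range of lead-time totals rather than a single number (the demand sample and the 3-month lead time are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monthly demand for a slow-moving part: mostly zeros, with occasional bursts.
history = np.array([0, 0, 3, 0, 0, 0, 7, 0, 0, 2, 0, 0, 0, 5, 0, 0, 0, 0, 4, 0])

# Resample 3 months of demand (one lead time) many times and sum each sample.
lead_time_totals = rng.choice(history, size=(10_000, 3), replace=True).sum(axis=1)

# The distribution of totals, not one number, is the useful forecast.
print("median lead-time demand:        ", np.percentile(lead_time_totals, 50))
print("95th percentile (for stocking): ", np.percentile(lead_time_totals, 95))
```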

Forecasting inventory requirements

Forecasting inventory requirements is a specialized variant of forecasting that focuses on the high end of the range of possible future values.

For simplicity, consider the problem of forecasting inventory requirements for just one period ahead, say one day ahead. Usually, the forecasting job is to estimate the most likely or average level of product demand. However, if available inventory equals the average demand, there is about a 50% chance that demand will exceed inventory, resulting in lost sales and/or lost good will. Setting the inventory level at, say, ten times the average demand will probably eliminate the problem of stockouts, but will just as surely result in bloated inventory costs.

The trick of inventory optimization is to find a satisfactory balance between having enough inventory to meet most demand without tying up too many resources in the process. Usually, the solution is a blend of business judgment and statistics. The judgmental part is to define an acceptable inventory service level, such as meeting 95% of demand immediately from stock. The statistical part is to estimate the 95th percentile of demand.

When not dealing with intermittent demand, SmartForecasts estimates the required inventory level by assuming a bell-shaped (Normal) curve of demand, estimating both the middle and the width of the bell curve, then using a standard statistical formula to estimate the desired percentile. The difference between the desired inventory level and the average level of demand is called the safety stock because it protects against the possibility of stockouts.

When dealing with intermittent demand, the bell-shaped curve is a poor approximation to the statistical distribution of demand. In this special case, SmartForecasts uses patented intermittent demand forecasting technology to estimate the required inventory service level.

Three Ways to Estimate Forecast Accuracy

Forecast accuracy is a key metric by which to judge the quality of your demand planning process. (It’s not the only one; others include timeliness and cost. See 5 Demand Planning Tips for Calculating Forecast Uncertainty.) Once you have forecasts, there are a number of ways to summarize their accuracy, usually designated by obscure three- or four-letter acronyms like MAPE, RMSE, and MAE. See Four Useful Ways to Measure Forecast Error for more detail.

A less discussed but more fundamental issue is how computational experiments are organized for computing forecast error. This post compares the three most important experimental designs. One of them is old-school and essentially amounts to cheating. Another is the gold standard. A third is a useful expedient that mimics the gold standard and is best thought of as predicting how the gold standard will turn out. Figure 1 is a schematic view of the three methods.

Figure 1: Three ways to assess forecast error

The top panel of Figure 1 depicts the way forecast error was assessed back in the early 1980s, before we moved the state of the art to the scheme shown in the middle panel. In the old days, forecasts were assessed on the same data used to compute the forecasts. After a model was fit to the data, the errors computed were not for model forecasts but for model fits. The difference is that forecasts are for future values, while fits are for concurrent values. For example, suppose the forecasting model is a simple moving average of the three most recent observations. At time 3, the model computes the average of observations 1, 2, and 3. This average would then be compared to the observed value at time 3. We call this cheating because the observed value at time 3 got a vote on what the forecast should be at time 3. A true forecast assessment would compare the average of the first three observations to the value of the next, fourth, observation. Otherwise, the forecaster is left with an overly optimistic assessment of forecast accuracy.

The bottom panel of Figure 1 shows the best way to assess forecast accuracy. In this schema, all the historical demand data are used to fit a model, which is then used to forecast future, unknown demand values. Eventually, the future unfolds, the true future values reveal themselves, and actual forecast errors can be computed. This is the gold standard. This information populates the “forecasts versus actuals” report in our software.

The middle panel depicts a useful halfway measure. The problem with the gold standard is that you must wait to learn how well your chosen forecasting methods perform. This delay does not help when you are required to choose, in the moment, which forecasting method to use for each item. Nor does it provide a timely estimate of the forecast uncertainty you will experience, which is important for risk management such as forecast hedging. The middle way is based on hold-out analysis, which excludes (“holds out”) the most recent observations and asks the forecasting method to do its work without knowing those ground truths. Then the forecasts based on the foreshortened demand history can be compared to the held-out actual values to get an honest assessment of forecast error.
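
A small sketch makes the contrast between the top and middle panels concrete, using the 3-period moving average from the example above (invented data; not our software’s internals):

```python
history = [12, 15, 11, 14, 18, 13, 16, 20, 15, 19]   # invented demand series

def moving_avg(xs):
    """Average of the 3 most recent observations."""
    return sum(xs[-3:]) / 3

# Top panel ("cheating"): the fit at time t uses the observation at time t itself.
fit_errors = [abs(moving_avg(history[:t + 1]) - history[t])
              for t in range(2, len(history))]

# Middle panel (hold-out): the forecast for time t uses data only through t - 1.
fcst_errors = [abs(moving_avg(history[:t]) - history[t])
               for t in range(3, len(history))]

print(f"mean absolute fit error (in-sample):     {sum(fit_errors) / len(fit_errors):.2f}")
print(f"mean absolute forecast error (held out): {sum(fcst_errors) / len(fcst_errors):.2f}")
# The in-sample number (~1.88) is smaller than the honest one (~2.29):
# assessing fits instead of forecasts overstates accuracy.
```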