Measuring the accuracy of forecasts is an undeniably important part of the demand planning process. A forecasting scorecard can be built from one of two contrasting viewpoints for computing metrics. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.
Accuracy is represented as a percentage between zero and 100, while error percentages start at zero but have no upper limit. Reports of MAPE (mean absolute percent error) or other error metrics can be titled “forecast accuracy” reports, which blurs the distinction. So, you may want to know how to convert from the error viewpoint to the accuracy viewpoint that your company espouses. This blog describes how with some examples.
Accuracy metrics are computed so that when the actual equals the forecast, accuracy is 100%, and when the forecast is at least double the actual (an absolute percent error of 100% or more), accuracy bottoms out at 0%. Reports that compare the forecast to the actual often include the following, computed as in the sketch after this list:
- The Actual
- The Forecast
- Unit Error = Forecast – Actual
- Absolute Error = Absolute Value of Unit Error
- Absolute % Error = Abs Error / Actual, as a %
- Accuracy % = 100% – Absolute % Error (negative values are set to 0%)
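To make the arithmetic concrete, here is a minimal Python sketch of these metrics. The function name and the decision to floor negative accuracy at 0% mirror the definitions above; they are illustrative assumptions, not part of any particular forecasting package.

```python
def forecast_metrics(actual, forecast):
    """Return unit error, absolute error, absolute % error, and accuracy % for one period."""
    unit_error = forecast - actual
    abs_error = abs(unit_error)
    abs_pct_error = abs_error / actual * 100        # assumes the actual is not zero
    accuracy_pct = max(0.0, 100.0 - abs_pct_error)  # negative accuracy is set to 0%
    return unit_error, abs_error, abs_pct_error, accuracy_pct
```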
Look at a couple of examples that illustrate the difference in the approaches. Say the actual is 8 and the forecast is 10.
Unit Error is 10 – 8 = 2
Absolute % Error = 2 / 8, as a % = 0.25 * 100 = 25%
Accuracy = 100% – 25% = 75%.
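Running the first example through the `forecast_metrics` sketch above reproduces these figures:

```python
unit_error, abs_error, abs_pct_error, accuracy_pct = forecast_metrics(actual=8, forecast=10)
print(unit_error, abs_error, abs_pct_error, accuracy_pct)
# 2 2 25.0 75.0
```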
Now let’s say the actual is 8 and the forecast is 24.
Unit Error is 24 – 8 = 16
Absolute % Error = 16 / 8 as a % = 2 * 100 = 200%
Accuracy = 100% – 200% = –100%, which is negative, so it is set to 0%.
In the first example, accuracy measurements provide the same information as error measurements since the forecast and actual are already relatively close. But when the absolute error exceeds the actual (that is, when the forecast is more than double the actual), accuracy measurements bottom out at zero. That does correctly indicate the forecast was not at all accurate. But the second example is more accurate than a third, where the actual is 8 and the forecast is 200. That’s a distinction a 0 to 100% range of accuracy doesn’t register. In this final example:
Unit Error is 200 – 8 = 192
Absolute % Error = 192 / 8, as a % = 24 * 100 = 2,400%
Accuracy = 100% – 2,400% = –2,300%, which is negative, so it is again set to 0%.
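Comparing the second and third examples with the same sketch shows the problem directly: the accuracy percentage is 0% in both cases, while the absolute percent error still distinguishes a bad forecast from a far worse one.

```python
for actual, forecast in [(8, 24), (8, 200)]:
    _, _, abs_pct_error, accuracy_pct = forecast_metrics(actual, forecast)
    print(f"forecast={forecast}: abs % error = {abs_pct_error:.0f}%, accuracy = {accuracy_pct:.0f}%")
# forecast=24: abs % error = 200%, accuracy = 0%
# forecast=200: abs % error = 2400%, accuracy = 0%
```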
Error metrics continue to provide information on how far the forecast is from the actual and arguably better represent forecast accuracy.
We encourage adopting the error viewpoint. You simply hope for a small error percentage to indicate the forecast was not far from the actual, instead of hoping for a large accuracy percentage to indicate the forecast was close to the actual. This shift in mindset offers the same insight without the distortion of a hard floor at zero.