In a recent LinkedIn post, I detailed four questions that, when answered, reveal how forecasts are being used in your business. In this article, we list fifteen questions you can ask that will reveal how forecasts are created.

1. When we ask users how they create forecasts, the answer is often "we use history." That alone isn't enough information: different types of demand history call for different forecasting methods. If you are forecasting from history, find out whether you are using an averaging model, a trending model, a seasonal model, or something else.

2. Once you know the model used, ask about the parameter values of those models. The forecast output of an “average” will differ, sometimes significantly, depending on the number of periods you are averaging.  So, find out whether you are using an average of the last 3 months, 6 months, 12 months, etc.
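
To make the parameter question concrete, here is a minimal sketch in Python (the demand numbers are invented for illustration) showing how the same history produces different forecasts depending on the averaging window:

```python
# Invented 12 months of demand, drifting upward.
demand = [120, 95, 130, 110, 150, 160, 140, 170, 155, 180, 165, 190]

def moving_average_forecast(history, window):
    """Forecast the next period as the mean of the last `window` periods."""
    return sum(history[-window:]) / window

for window in (3, 6, 12):
    print(f"{window}-month average: {moving_average_forecast(demand, window):.1f}")
# 3-month average: 178.3 / 6-month average: 166.7 / 12-month average: 147.1
```

With upward-drifting demand, the short window tracks the recent level while the long window lags well below it, so the "same" averaging method can produce materially different plans.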

3. If you are using trending models, ask how the model weights are set. For example, in a trending model, such as double exponential smoothing, the forecasts will differ significantly depending on how the calculations weight recent data compared to older data (higher weights put more emphasis on the recent data).
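
As a rough illustration, here is a sketch of double exponential smoothing (Holt's linear trend method) on invented demand. The smoothing weights are the parameters in question: alpha weights the level, beta weights the trend, and higher values emphasize recent data.

```python
demand = [100, 104, 107, 113, 118, 121, 127, 132]  # invented, trending up

def holt_forecast(history, alpha, beta, horizon=1):
    """Double exponential smoothing: smooth a level and a trend, then project."""
    level, trend = history[0], history[1] - history[0]  # simple initialization
    for y in history[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + horizon * trend

for alpha, beta in [(0.2, 0.1), (0.8, 0.5)]:
    print(f"alpha={alpha}, beta={beta}: {holt_forecast(demand, alpha, beta):.1f}")
```

Running both parameter pairs on the same history yields different next-period forecasts, which is exactly why the weights, not just the model family, need to be known.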

4. If you are using seasonal models, the forecast results will also depend on the level and trend weights used. You should also determine whether seasonal periods are forecasted with multiplicative or additive seasonality. (Additive seasonality says, e.g., "Add 100 units for July", whereas multiplicative seasonality says "Multiply by 1.25 for July.") Finally, you may not be using these types of methods at all. Some practitioners will use a forecast method that simply averages prior periods (i.e., next June will be forecasted based on the average of the prior three Junes).
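
The additive/multiplicative distinction matters more as volume changes. Here is a minimal sketch using the figures from the text (add 100 units vs. multiply by 1.25 for July); the demand levels are invented:

```python
# At a level of 400, the two July adjustments happen to agree.
level = 400
print(level + 100, level * 1.25)   # 500 500.0

# At a level of 800 they diverge: the additive offset is fixed,
# while the multiplicative index scales with the level.
level = 800
print(level + 100, level * 1.25)   # 900 1000.0

# The simpler approach mentioned above: forecast next June as the
# average of the prior three Junes (invented values).
prior_junes = [210, 230, 250]
print(sum(prior_junes) / len(prior_junes))  # 230.0
```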

5. How do you go about choosing one model over another? Does the choice of technique depend on the type of demand data or on when new demand data are available? Is this process automated? Or if a planner chooses a trend model subjectively, will that item continue to be forecasted with that model until the planner changes it again?

6. Are your forecasts “fully automatic,” so that trend and/or seasonality are detected automatically? Or are your forecasts dependent on item classifications that must be maintained by users? The latter requires more time and attention from planners to define what behavior constitutes trend, seasonality, etc.

7. What are the item classification rules used? For example, an item may be considered a trending item if demand increases by more than 5% period-over-period. An item may be considered seasonal if 70% or more of the annual demand occurs in four or fewer periods. Such rules are user-defined and often require overly broad assumptions. Sometimes they are configured when a system was originally implemented but never revised even as conditions change. It’s important to make sure any classification rules are understood and, if necessary, updated.
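
As an illustration, here is one literal reading of those two example rules in Python. The 5% and 70% thresholds are the user-defined assumptions the text warns about, not recommended values:

```python
def is_trending(demand, growth=0.05):
    """Flag as trending if each period exceeds the prior by more than 5%."""
    return all(b > a * (1 + growth) for a, b in zip(demand, demand[1:]))

def is_seasonal(monthly_demand, share=0.70, top_periods=4):
    """Flag as seasonal if the top four months hold 70%+ of annual demand."""
    top = sum(sorted(monthly_demand, reverse=True)[:top_periods])
    return top >= share * sum(monthly_demand)

monthly = [10, 12, 9, 11, 80, 95, 90, 70, 12, 10, 9, 11]
print(is_trending(monthly), is_seasonal(monthly))  # False True
```

Rules like these are brittle (a single flat month breaks the trend flag above), which is one more reason they deserve periodic review.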

8. Does the forecast regenerate automatically when new data are available, or do you have to manually regenerate the forecasts?

9. Do you check for any change in forecast from one period to the next before deciding whether to use the new forecast? Or do you default to the new forecast?

10. How are forecast overrides that were made in prior planning cycles treated when a new forecast is created? Are they reused or replaced?

11. How do you incorporate forecasts made by your sales team or by your customers? Do these forecasts replace the baseline forecast, or do you use these inputs to make planner overrides to the baseline forecast?

12. Under what circumstances would you ignore the baseline forecast and use exactly what sales or customers are telling you?

13. If you rely on customer forecasts, what do you do about customers who don’t provide forecasts?

14. How do you document the effectiveness of your forecasting approach? Most companies, if they measure anything at all, only measure the accuracy of the final forecast that is submitted to the ERP system. They don't assess alternative predictions that might have been used. It is important to compare what you are doing to benchmarks. For example, do the methods you are using outperform a naïve forecast (i.e., "tomorrow equals today," which requires no thought), last year's demand, or the average of the last 12 months? Benchmarking your baseline forecast ensures you are squeezing as much accuracy as possible out of the data.
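
A minimal benchmarking sketch with invented numbers, comparing a statistical forecast against the naïve "tomorrow equals today" forecast on mean absolute error (MAE):

```python
actuals = [100, 110, 105, 120, 125, 130]
model   = [ 98, 108, 112, 115, 128, 126]  # invented statistical forecasts
naive   = [105, 100, 110, 105, 120, 125]  # each period = prior period's actual

def mae(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

print(f"model MAE: {mae(model, actuals):.1f}")  # 3.8
print(f"naive MAE: {mae(naive, actuals):.1f}")  # 7.5
# If the model cannot beat the naive benchmark, the modeling
# effort is not adding accuracy.
```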

15. Do you measure whether overrides from sales, customers, and planners are making the forecast better or worse? This is just as important as measuring whether your statistical approaches are outperforming the naïve method. If you don't know whether overrides are helping or hurting, the business can't get better at forecasting: you need to know which steps add value so you can do more of them. If you aren't documenting forecast accuracy and conducting "forecast value add" analysis, you can't properly assess whether the forecasts being produced are the best you could make. You'll miss opportunities to improve the process, increase accuracy, and educate the business on what level of forecast error to expect.
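
Here is a sketch of a simple "forecast value add" comparison, extending the benchmark above: score each stage of the process against the stage before it (all numbers invented):

```python
actuals     = [100, 110, 105, 120, 125, 130]
naive       = [105, 100, 110, 105, 120, 125]  # no-effort benchmark
statistical = [ 98, 108, 112, 115, 128, 126]  # baseline model output
final       = [110, 115, 100, 125, 135, 120]  # after overrides

def mae(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

prev = None
for name, fc in [("naive", naive), ("statistical", statistical), ("final", final)]:
    err = mae(fc, actuals)
    note = "" if prev is None else (" (added value)" if err < prev else " (subtracted value)")
    print(f"{name}: MAE {err:.1f}{note}")
    prev = err
# Here the overrides pushed MAE from 3.8 back up to 7.5: the extra
# touches subtracted value, which is exactly what FVA analysis exposes.
```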