Improve Forecast Accuracy by Managing Error


In this video, Dr. Thomas Willemain, co-founder and SVP of Research, talks about improving forecast accuracy by managing error. This video is the first in our series on effective methods to improve forecast accuracy. We begin by looking at how forecast error causes pain and what that pain costs. Then we explain the three most common mistakes to avoid, which can help increase revenue and prevent excess inventory. Tom concludes by reviewing methods to improve forecast accuracy, the importance of measuring forecast error, and the technological opportunities for improvement.

 

Forecast error can be consequential

Consider one item of many

  • Product X costs $100 to make and nets $50 profit per unit.
  • Sales of Product X will turn out to be 1,000/month over the next 12 months.

What is the cost of forecast error?

  • If the forecast is 10% high, you end the year with $120,000 of excess inventory (100 extra units/month × 12 months × $100/unit).
  • If the forecast is 10% low, you miss out on $60,000 of profit (100 units too few/month × 12 months × $50/unit).
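
To make the arithmetic concrete, here is a minimal Python sketch that reproduces the numbers above; the figures are the illustrative ones from the example, not real data.

```python
# Cost-of-error arithmetic for the Product X example above.
unit_cost = 100             # $ to make one unit of Product X
unit_profit = 50            # $ net profit per unit sold
monthly_demand = 1_000      # actual sales, units/month
months = 12
error_rate = 0.10           # forecast off by 10%

error_units = error_rate * monthly_demand * months       # 1,200 units/year

excess_inventory = error_units * unit_cost               # forecast 10% high
lost_profit = error_units * unit_profit                  # forecast 10% low

print(f"10% high: ${excess_inventory:,.0f} of excess inventory")  # $120,000
print(f"10% low:  ${lost_profit:,.0f} of profit missed")          # $60,000
```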

 

Three mistakes to avoid

1. Ignoring error.

  • Unprofessional; a dereliction of duty.
  • Wishing will not make it so.
  • Treat accuracy assessment as data science, not a blame game.

2. Tolerating more error than necessary.

  • Statistical forecasting methods can improve accuracy at scale.
  • Improving data inputs can help.
  • Collecting and analyzing forecast error metrics can identify weak spots.

3. Wasting time and money by going too far in trying to eliminate error.

  • Some product/market combinations are inherently more difficult to forecast. After a point, let them be (but be alert for new specialized forecasting methods).
  • Sometimes steps meant to reduce error can backfire (e.g., adjustment).

Four Useful Ways to Measure Forecast Error


In this video, Dr. Thomas Willemain, co-founder and SVP of Research, talks about improving forecast accuracy by measuring forecast error. We begin with an overview of the four general types of error metrics: scale-dependent, percentage, relative, and scale-free. While some error is inevitable, there are ways to reduce it, and forecast metrics are necessary aids for monitoring and improving forecast accuracy. Then we explain the special problem of intermittent demand and the divide-by-zero issues it creates. Tom concludes by explaining how to assess forecasts of multiple items, and why it often makes sense to use weighted averages, weighting items by volume or revenue.

     

Four general types of error metrics

1. Scale-dependent error
2. Percentage error
3. Relative error
4. Scale-free error

Remark: Scale-dependent metrics are expressed in the units of the forecasted variable. The other three are expressed as percentages.

     

1. Scale-dependent error metrics

• Mean Absolute Error (MAE), aka Mean Absolute Deviation (MAD)
• Median Absolute Error (MdAE)
• Root Mean Square Error (RMSE)
• These metrics express the error in the original units of the data.
  • Ex: units, cases, barrels, kilograms, dollars, liters, etc.
• Since forecasts can be too high or too low, the errors can be positive or negative, allowing for unwanted cancellations.
  • Ex: You don’t want errors of +50 and -50 to cancel and show “no error”.
• To deal with the cancellation problem, these metrics remove the negative signs, either by squaring the errors or by taking absolute values.
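
To make the definitions concrete, here is a minimal Python sketch of all three metrics; the actuals and forecasts are made-up numbers, not data from the video:

```python
# Scale-dependent metrics: results are in the same units as the data.
import statistics

actuals   = [112, 95, 130, 101, 88, 120]
forecasts = [100, 100, 100, 100, 100, 100]

errors     = [f - a for f, a in zip(forecasts, actuals)]
abs_errors = [abs(e) for e in errors]

mae  = statistics.mean(abs_errors)                     # Mean Absolute Error (MAD)
mdae = statistics.median(abs_errors)                   # Median Absolute Error
rmse = statistics.mean([e**2 for e in errors]) ** 0.5  # Root Mean Square Error

print(mae, mdae, rmse)  # absolute value / squaring prevents +/- cancellation
```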

     

2. Percentage error metric

• Mean Absolute Percentage Error (MAPE)
• This metric expresses the size of the error as a percentage of the actual value of the forecasted variable.
• The advantage of this approach is that it immediately makes clear whether the error is a big deal or not.
• Ex: Suppose the MAE is 100 units. Is a typical error of 100 units horrible? OK? Great?
• The answer depends on the size of the variable being forecasted. If the actual value is 100, then an MAE of 100 is as big as the thing being forecasted. But if the actual value is 10,000, then an MAE of 100 reflects great accuracy, since the MAPE is only 1%.
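
A minimal sketch of MAPE, reusing the illustrative numbers from the previous sketch:

```python
# MAPE: average the absolute errors after scaling each by its actual value.
actuals   = [112, 95, 130, 101, 88, 120]
forecasts = [100, 100, 100, 100, 100, 100]

mape = sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)
print(f"MAPE = {mape:.1%}")  # breaks down if any actual is zero
```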

     

3. Relative error metric

• Median Relative Absolute Error (MdRAE)
• Relative to what? To a benchmark forecast.
• What benchmark? Usually, the “naïve” forecast.
• What is the naïve forecast? Next forecast value = last actual value.
• Why use the naïve forecast? Because if you can’t beat that, you are in tough shape.
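
A minimal sketch of MdRAE under that definition, with made-up numbers:

```python
# Each forecast error is divided by the error the naive forecast
# (last actual carried forward) would have made in the same period.
import statistics

actuals   = [112, 95, 130, 101, 88, 120]
forecasts = [105, 110, 98, 125, 103, 90]

# Period 0 has no prior actual, so the naive benchmark starts at period 1.
relative_abs_errors = [
    abs(forecasts[t] - actuals[t]) / abs(actuals[t - 1] - actuals[t])
    for t in range(1, len(actuals))
]
print(statistics.median(relative_abs_errors))  # < 1 means you beat naive
```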

     

4. Scale-free error metric

• Median Relative Scaled Error (MdRSE)
• This metric expresses the absolute forecast error as a percentage of the natural level of randomness (volatility) in the data.
• The volatility is measured by the average size of the change in the forecasted variable from one time period to the next.
  • (This is the same as the error made by the naïve forecast.)
• How does this metric differ from the MdRAE above?
  • Both use the naïve forecast as a benchmark, but this metric uses errors in forecasting the demand history, while the MdRAE uses errors in forecasting future values.
  • This matters because there are usually many more history values than there are forecasts.
  • In turn, that matters because this metric would “blow up” if all the data were zero, which is less likely when using the demand history.
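
A minimal sketch of a scaled error in the spirit of MdRSE, under the definitions above; the history and forecasts are made up:

```python
# Scale each absolute forecast error by the volatility of the history,
# measured as the mean absolute one-step change (the in-sample error of
# the naive forecast). All numbers are illustrative.
import statistics

history   = [112, 95, 130, 101, 88, 120]   # demand history
actuals   = [104, 97]                      # held-out actuals
forecasts = [100, 110]                     # forecasts for those periods

scale = statistics.mean(
    abs(history[t] - history[t - 1]) for t in range(1, len(history))
)

scaled_errors = [abs(f - a) / scale for f, a in zip(forecasts, actuals)]
print(statistics.median(scaled_errors))    # median scaled error
```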

     


     

The special problem of intermittent demand

• “Intermittent” demand has many zero demands mixed in with random non-zero demands.
• MAPE gets ruined when errors are divided by zero.
• MdRAE can also get ruined.
• MdRSE is less likely to get ruined.
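
A minimal sketch of the divide-by-zero problem, using a made-up intermittent series:

```python
# Why MAPE breaks down on intermittent demand: the percentage error term
# divides by the actual, which is often zero. Figures are illustrative.
actuals   = [0, 3, 0, 0, 7, 0]
forecasts = [1, 2, 1, 1, 4, 1]

for f, a in zip(forecasts, actuals):
    try:
        print(f"APE = {abs(f - a) / a:.0%}")
    except ZeroDivisionError:
        print("APE undefined: actual demand is zero")
```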

     

Recap and remarks

• Forecast metrics are necessary aids for monitoring and improving forecast accuracy.
• There are two major classes of metrics: absolute and relative.
• Absolute measures (MAE, MdAE, RMSE) are natural choices when assessing forecasts of one item.
• Relative measures (MAPE, MdRAE, MdRSE) are useful when comparing accuracy across items, comparing alternative forecasts of the same item, or assessing accuracy relative to the natural variability of an item.
• Intermittent demand presents divide-by-zero problems that favor MdRSE over MAPE.
• When assessing forecasts of multiple items, it often makes sense to use weighted averages, weighting items by volume or revenue, as sketched below.
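
A minimal sketch of one such weighted average, assuming hypothetical per-item MAPEs and revenues (none of these figures come from the video):

```python
# Revenue-weighted average of per-item MAPEs. Item names, MAPEs, and
# revenues are illustrative assumptions, not data from the post.
items = {
    # name:      (MAPE, annual revenue in $)
    "Product X": (0.08, 1_200_000),
    "Product Y": (0.25, 150_000),
    "Product Z": (0.15, 400_000),
}

total_revenue = sum(revenue for _, revenue in items.values())
weighted_mape = sum(mape * revenue for mape, revenue in items.values()) / total_revenue
print(f"Revenue-weighted MAPE = {weighted_mape:.1%}")  # high-revenue items dominate
```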

Leading Indicators can Foreshadow Demand


Most statistical forecasting works in one direct flow from past data to forecast. Forecasting with leading indicators works differently. A leading indicator is a second variable that may influence the one being forecasted. Applying testable human knowledge about the predictive relationship between these two sets of data can sometimes provide superior accuracy.

Most of the time, a forecast is based solely on the past history of the item being forecast. Let’s assume that the forecaster’s problem is to predict future unit sales of an important product. The process begins with gathering data on the product’s past sales. (Gregory Hartunian shares some practical advice on choosing the best available data in a previous post to the Smart Forecaster.) This data flows into forecasting software, which analyzes the sales record to measure the level of random variability and exploit any predictable aspects, such as trend or regular patterns of seasonal variability. The forecast is based entirely on the past behavior of the item being forecasted. Nothing that might have caused the wiggles and jiggles in the product’s sales graph is explicitly accounted for. This approach is fast, simple, self-contained and scalable, because software can zip through a huge number of forecasts automatically.

But sometimes the forecaster can do better, at the cost of more work. If the forecaster can peer through the fog of randomness and identify a second variable that influences the one being forecasted (a leading indicator), more accurate predictions are possible.

For example, suppose the product is window glass for houses. It may well be that increases or decreases in the number of construction permits for new houses will be reflected in corresponding increases or decreases in the number of sheets of glass ordered several months later. If the forecaster can distill this “lagged” or delayed relationship into an equation, that equation can be used to forecast glass sales several months hence using known values of the leading indicator. This equation is called a “regression equation” and has a form something like:

Sales of glass in 3 months = 210.9 + 26.7 × Number of housing starts this month

Forecasting software can take the housing start and glass sales data and convert them into such a regression equation.
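
As a sketch of how software might fit such an equation, here is a small least-squares example in Python; the permit and glass-sales figures are invented, and the 3-month lag follows the example above:

```python
# Fit a lagged regression: glass sales in month t against housing
# permits in month t-3. All figures are illustrative.
import numpy as np

permits = np.array([80, 95, 110, 105, 120, 130, 125, 140])            # monthly permits
glass   = np.array([2300, 2700, 3150, 3000, 3400, 3650, 3550, 3950])  # sheets sold

lag = 3
x = permits[:-lag]                       # indicator observed 'lag' months earlier
y = glass[lag:]                          # the sales it foreshadows

slope, intercept = np.polyfit(x, y, 1)   # least-squares line
print(f"Sales in {lag} months = {intercept:.1f} + {slope:.1f} x permits this month")

# Forecast 3 months ahead from the latest permit count:
print(intercept + slope * permits[-1])
```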

[Figure: leading indicators demonstrated; time-shifted building permits plotted against demand for glass.]
However, unlike automatic statistical forecasting based on a product’s past sales, forecasting with a leading indicator faces the same problem as the proverbial recipe for rabbit stew: “First catch a rabbit.” Here the forecaster’s subject-matter expertise is critical to success. The forecaster must be able to nominate one or more candidates for the job of leading indicator. After this crucial step, based on the forecaster’s knowledge, experience and intuition, software can verify that there really is a predictive, time-delayed relationship between the candidate leading indicator and the variable to be forecasted.

This verification step is done using a “cross-correlation” analysis. The software takes as input a sequence of values of the variable to be forecasted and another sequence of values of the supposed leading indicator. Then it slides the data of the forecast variable ahead by, successively, one, two, three, etc. time periods. At each shift in time (called a “lag”, because the indicator’s values are paired with forecast-variable values further and further ahead in time), the software checks for a pattern of association between the two variables. If it finds a pattern that is too strong to be explained as a statistical accident, the forecaster’s hunch is confirmed.
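
A minimal sketch of such a lag scan, reusing the invented permit and glass series from the regression sketch; a real analysis would add a significance test:

```python
# Check the correlation between the candidate indicator and the forecast
# variable at successively larger lags.
import numpy as np

permits = np.array([80, 95, 110, 105, 120, 130, 125, 140], dtype=float)
glass   = np.array([2300, 2700, 3150, 3000, 3400, 3650, 3550, 3950], dtype=float)

for lag in range(1, 5):
    x = permits[:-lag]                 # indicator values 'lag' periods earlier
    y = glass[lag:]                    # forecast variable shifted ahead
    r = np.corrcoef(x, y)[0, 1]        # Pearson correlation at this lag
    print(f"lag {lag}: r = {r:+.2f}")
# A strong correlation at some lag supports the hunch; a formal test is
# still needed to rule out a statistical accident.
```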

Obviously, forecasting with leading indicators is more work than forecasting using only an item’s own past values. The forecaster has to identify a leading indicator, starting with a list suggested by the forecaster’s subject-matter expertise. This is a “hand-crafting” process that is not suited to mass production of forecasts. But it can be a successful approach for a smaller number of important items that are worth the extra effort. The role of forecasting software, such as our SmartForecasts system, is to help the forecaster authenticate the leading indicator and then exploit it.

Thomas Willemain, PhD, co-founded Smart Software and currently serves as Senior Vice President for Research. Dr. Willemain also serves as Professor Emeritus of Industrial and Systems Engineering at Rensselaer Polytechnic Institute and as a member of the research staff at the Center for Computing Sciences, Institute for Defense Analyses.


Forecasting With the Right Data


In order to reap the efficiency benefits of forecasting, you need the most accurate forecasts: forecasts built on the most appropriate historical data. Most discussions of this issue focus on the merits of using demand vs. shipment history, and I’ll comment on that later. But first, let’s talk about the use of net vs. gross data.

Net vs. Gross History

Many planners are inclined to use net sales data to create their forecasts. Systems that track sales capture transactions as they occur and aggregate the results into weekly or monthly totals. In some cases, sales records account for returned purchases as negative sales and compute a net total. These net figures, which often mask real sales patterns, are then fed into the forecasting system. The historical data presents a false picture of what customers wanted and when they wanted it, and that distortion carries forward into the forecast, with less than optimal results, as the small sketch below illustrates.
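
A minimal sketch of the distortion, with invented figures:

```python
# Netting returns against sales hides what customers actually ordered.
gross_sales = [100, 120, 90, 110]   # units customers ordered each month
returns     = [0, 40, 0, 5]         # units returned in that month

net_sales = [g - r for g, r in zip(gross_sales, returns)]
print(net_sales)  # [100, 80, 90, 105]: month 2 looks like a demand dip,
                  # but customers actually wanted 120 units that month
```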
