Improve Forecast Accuracy by Managing Error

The Smart Forecaster

Pursuing best practices in demand planning, forecasting and inventory optimization

Improve Forecast Accuracy, Eliminate Excess Inventory, & Maximize Service Levels

In this video, Dr. Thomas Willemain, co-founder and SVP Research, talks about improving forecast accuracy by managing error. This video is the first in our series on effective methods to improve forecast accuracy. We begin by looking at how forecast error causes pain and the costs that follow from it. Then we explain the three most common mistakes to avoid, which can help increase revenue and prevent excess inventory. Tom concludes by reviewing methods to improve forecast accuracy, the importance of measuring forecast error, and the technological opportunities for improvement.

 

Forecast error can be consequential

Consider one item of many:

  • Product X costs $100 to make and nets $50 profit per unit.
  • Sales of Product X will turn out to be 1,000/month over the next 12 months.

What is the cost of forecast error?

  • If the forecast is 10% high, you end the year with $120,000 of excess inventory (100 extra/month × 12 months × $100/unit).
  • If the forecast is 10% low, you miss out on $60,000 of profit (100 too few/month × 12 months × $50/unit).
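The back-of-the-envelope arithmetic above is easy to verify. Here is a minimal Python sketch using the hypothetical Product X numbers (all values are illustrative, not drawn from real data):

```python
# Hypothetical Product X example: the cost of a 10% forecast error.
unit_cost = 100      # $ to make one unit
unit_profit = 50     # $ net profit per unit sold
true_demand = 1000   # actual sales, units/month
months = 12

# Forecast 10% high: 100 unneeded units are built every month.
excess_units = 0.10 * true_demand * months           # 1,200 units
excess_inventory_cost = excess_units * unit_cost     # $120,000

# Forecast 10% low: 100 profitable sales are missed every month.
missed_units = 0.10 * true_demand * months           # 1,200 units
lost_profit = missed_units * unit_profit             # $60,000

print(excess_inventory_cost, lost_profit)
```

Note the asymmetry: over-forecasting ties up the full unit cost, while under-forecasting loses only the profit margin, so the two errors are not equally painful.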

 

Three mistakes to avoid

1. Ignoring error.

  • Unprofessional, dereliction of duty.
  • Wishing will not make it so.
  • Treat accuracy assessment as data science, not a blame game.

2. Tolerating more error than necessary.

  • Statistical forecasting methods can improve accuracy at scale.
  • Improving data inputs can help.
  • Collecting and analyzing forecast error metrics can identify weak spots.

3. Wasting time and money going too far trying to eliminate error.

  • Some product/market combinations are inherently more difficult to forecast. After a point, let them be (but be alert for new specialized forecasting methods).
  • Sometimes steps meant to reduce error can backfire (e.g., adjustment).
Four Useful Ways to Measure Forecast Error


In this video, Dr. Thomas Willemain, co-founder and SVP Research, talks about improving forecast accuracy by measuring forecast error. We begin with an overview of the four types of error metrics: scale-dependent, percentage, relative, and scale-free. While some error is inevitable, there are ways to reduce it, and forecast metrics are necessary aids for monitoring and improving forecast accuracy. Then we explain the special problem of intermittent demand and its divide-by-zero issues. Tom concludes by explaining how to assess forecasts of multiple items and why it often makes sense to use weighted averages, weighting items by volume or revenue.

       

Four general types of error metrics

1. Scale-dependent error
2. Percentage error
3. Relative error
4. Scale-free error

Remark: Scale-dependent metrics are expressed in the units of the forecasted variable. The other three are expressed as percentages.

       

1. Scale-dependent error metrics

• Mean Absolute Error (MAE), aka Mean Absolute Deviation (MAD)
• Median Absolute Error (MdAE)
• Root Mean Square Error (RMSE)
• These metrics express the error in the original units of the data.
  • Ex: units, cases, barrels, kilograms, dollars, liters, etc.
• Since forecasts can be too high or too low, the signs of the errors will be either positive or negative, allowing for unwanted cancellations.
  • Ex: You don’t want errors of +50 and −50 to cancel and show “no error”.
• To deal with the cancellation problem, these metrics remove negative signs by either squaring the errors or taking their absolute values.
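All three metrics are simple to compute. Here is a minimal plain-Python sketch (the helper name scale_dependent_errors is ours, not from any library):

```python
import statistics

def scale_dependent_errors(actuals, forecasts):
    """Return (MAE, MdAE, RMSE), all in the original units of the data."""
    errors = [f - a for a, f in zip(actuals, forecasts)]
    # Absolute values (MAE, MdAE) or squaring (RMSE) keep positive and
    # negative errors from cancelling each other out.
    mae = statistics.mean(abs(e) for e in errors)
    mdae = statistics.median(abs(e) for e in errors)
    rmse = statistics.mean(e * e for e in errors) ** 0.5
    return mae, mdae, rmse

# Errors of +50 and -50 would naively "cancel"; all three metrics report 50.
print(scale_dependent_errors([100, 100], [150, 50]))
```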

       

2. Percentage error metric

• Mean Absolute Percentage Error (MAPE)
• This metric expresses the size of the error as a percentage of the actual value of the forecasted variable.
• The advantage of this approach is that it immediately makes clear whether the error is a big deal or not.
  • Ex: Suppose the MAE is 100 units. Is a typical error of 100 units horrible? OK? Great?
• The answer depends on the size of the variable being forecasted. If the actual value is 100, then an MAE of 100 is as big as the thing being forecasted. But if the actual value is 10,000, then an MAE of 100 shows great accuracy, since the MAPE is only 1% of the actual.
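A sketch of the computation, continuing the plain-Python style (mape here is our illustrative helper):

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: each error is scaled by the actual
    value, making the result unit-free and easy to judge at a glance."""
    pct = [abs(f - a) / abs(a) for a, f in zip(actuals, forecasts)]
    return 100 * sum(pct) / len(pct)

# The same 100-unit error looks very different at different scales:
print(mape([100], [200]))        # error as large as the actual itself (100%)
print(mape([10_000], [10_100]))  # only 1% of the actual: great accuracy
```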

       

3. Relative error metric

• Median Relative Absolute Error (MdRAE)
• Relative to what? To a benchmark forecast.
• What benchmark? Usually, the “naïve” forecast.
• What is the naïve forecast? Next forecast value = last actual value.
• Why use the naïve forecast? Because if you can’t beat that, you are in tough shape.
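The idea can be sketched as follows; mdrae is an illustrative helper in which the naïve benchmark for each period is simply the previous actual:

```python
import statistics

def mdrae(actuals, forecasts):
    """Median Relative Absolute Error vs. the naive benchmark."""
    ratios = []
    for t in range(1, len(actuals)):
        forecast_error = abs(forecasts[t] - actuals[t])
        naive_error = abs(actuals[t - 1] - actuals[t])  # naive = last actual
        ratios.append(forecast_error / naive_error)
    return statistics.median(ratios)

actuals = [100, 120, 90, 110]
forecasts = [105, 115, 95, 100]
# A value below 1 means the forecasts beat the naive benchmark.
print(mdrae(actuals, forecasts))  # 0.25
```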

       

4. Scale-free error metric

• Median Relative Scaled Error (MdRSE)
• This metric expresses the absolute forecast error as a percentage of the natural level of randomness (volatility) in the data.
• The volatility is measured by the average size of the change in the forecasted variable from one time period to the next.
  • (This is the same as the error made by the naïve forecast.)
• How does this metric differ from the MdRAE above?
  • Both use the naïve forecast, but this metric uses errors in forecasting the demand history, while the MdRAE uses errors in forecasting future values.
  • This matters because there are usually many more history values than there are forecasts.
  • In turn, that matters because this metric would “blow up” if all the data were zero, which is less likely when using the demand history.
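A sketch of the computation (mdrse is our illustrative helper; the scaling denominator is the average absolute period-to-period change in the demand history):

```python
import statistics

def mdrse(actuals, forecasts, history):
    """Median Relative Scaled Error: absolute forecast errors scaled by
    the average absolute change in the demand HISTORY, i.e., the average
    in-sample error of the naive forecast."""
    scale = statistics.mean(abs(history[t] - history[t - 1])
                            for t in range(1, len(history)))
    return statistics.median(abs(f - a) / scale
                             for a, f in zip(actuals, forecasts))

history = [100, 120, 90, 110, 95]   # sets the volatility scale (avg change 21.25)
print(mdrse([105, 100], [110, 90], history))
```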

       


       

The special problem of intermittent demand

• “Intermittent” demand has many zero demands mixed in with random non-zero demands.
• MAPE gets ruined when errors are divided by zero.
• MdRAE can also get ruined.
• MdRSE is less likely to get ruined.
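A toy illustration of the divide-by-zero problem (mape_or_none is an illustrative helper, not a library function):

```python
def mape_or_none(actuals, forecasts):
    """MAPE, or None when a zero actual makes it undefined."""
    try:
        return 100 * sum(abs(f - a) / a
                         for a, f in zip(actuals, forecasts)) / len(actuals)
    except ZeroDivisionError:
        # Any zero in the actuals puts a zero in a denominator.
        return None

intermittent = [0, 3, 0, 0, 5, 0]   # mostly zero demands
forecasts = [1, 2, 1, 1, 4, 1]
print(mape_or_none(intermittent, forecasts))  # None: MAPE is undefined here
```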

       

Recap and remarks

• Forecast metrics are necessary aids for monitoring and improving forecast accuracy.
• There are two major classes of metrics: absolute and relative.
• Absolute measures (MAE, MdAE, RMSE) are natural choices when assessing forecasts of one item.
• Relative measures (MAPE, MdRAE, MdRSE) are useful when comparing accuracy across items, between alternative forecasts of the same item, or relative to the natural variability of an item.
• Intermittent demand presents divide-by-zero problems that favor MdRSE over MAPE.
• When assessing forecasts of multiple items, it often makes sense to use weighted averages, weighting items differently by volume or revenue.
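For example, a revenue-weighted average keeps a terrible MAPE on a tiny item from dominating the overall score (all numbers below are made up):

```python
def weighted_average_error(errors, weights):
    """Combine per-item error metrics into one score, weighting items
    by volume or revenue instead of treating them equally."""
    total = sum(weights[item] for item in errors)
    return sum(errors[item] * weights[item] for item in errors) / total

mapes = {"A": 5.0, "B": 40.0, "C": 12.0}            # per-item MAPE, %
revenue = {"A": 900_000, "B": 50_000, "C": 50_000}  # annual revenue, $

# The unweighted mean MAPE is 19%; revenue weighting gives 7.1%,
# reflecting that the high-revenue item A is forecasted well.
print(weighted_average_error(mapes, revenue))
```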

Leading Indicators can Foreshadow Demand


Most statistical forecasting works in one direct flow from past data to forecast. Forecasting with leading indicators works a different way. A leading indicator is a second variable that may influence the one being forecasted. Applying testable human knowledge about the predictive power in the relationship between these different sets of data will sometimes provide superior accuracy.

Most of the time, a forecast is based solely on the past history of the item being forecast. Let’s assume that the forecaster’s problem is to predict future unit sales of an important product. The process begins with gathering data on the product’s past sales. (Gregory Hartunian shares some practical advice on choosing the best available data in a previous post to the Smart Forecaster.) This data flows into forecasting software, which analyzes the sales record to measure the level of random variability and exploit any predictable aspects, such as trend or regular patterns of seasonal variability. The forecast is based entirely on the past behavior of the item being forecasted. Nothing that might have caused the wiggles and jiggles in the product’s sales graph is explicitly accounted for. This approach is fast, simple, self-contained and scalable, because software can zip through a huge number of forecasts automatically.

But sometimes the forecaster can do better, at the cost of more work. If the forecaster can peer through the fog of randomness and identify a second variable that influences the one being forecasted, a leading indicator, more accurate predictions are possible.

For example, suppose the product is window glass for houses. It may well be that increases or decreases in the number of construction permits for new houses will be reflected in corresponding increases or decreases in the number of sheets of glass ordered several months later. If the forecaster can distill this “lagged” or delayed relationship into an equation, that equation can be used to forecast glass sales several months hence using known values of the leading indicator. This equation is called a “regression equation” and has a form something like:

Sales of glass in 3 months = 210.9 + 26.7 × Number of housing starts this month.

Forecasting software can take the housing start and glass sales data and convert them into such a regression equation.

[Figure: Leading indicators demonstrated (time-shifted building permits plotted against demand for glass).]
However, unlike automatic statistical forecasting based on a product’s past sales, forecasting with a leading indicator faces the same problem as the proverbial recipe for rabbit stew: “First catch a rabbit”. Here the forecaster’s subject matter expertise is critical to success. The forecaster must be able to nominate one or more candidates for the job of leading indicator. After this crucial step, based on the forecaster’s knowledge, experience and intuition, then software can be used to verify that there really is a predictive, time-delayed relationship between the candidate leading indicator and the variable to be forecasted.

This verification step is done using a “cross-correlation” analysis. The software essentially takes as input a sequence of values of the variable to be forecasted and another sequence of values of the supposed leading indicator. Then it slides the data from the forecast variable ahead by, successively, one, two, three, etc. time periods. At each slip in time (called a “lag”, because the leading indicator is lagging further and further behind the forecast variable), the software checks for a pattern of association between the two variables. If it finds a pattern that is too strong to be explained as a statistical accident, the forecaster’s hunch is confirmed.

Obviously, forecasting with leading indicators is more work than forecasting using only an item’s own past values. The forecaster has to identify a leading indicator, starting with a list suggested by the forecaster’s subject matter expertise. This is a “hand-crafting” process that is not suited to mass production of forecasts. But it can be a successful approach for a smaller number of important items that are worth the extra effort. The role of forecasting software, such as our SmartForecasts system, is to help the forecaster authenticate the leading indicator and then exploit it.

Thomas Willemain, PhD, co-founded Smart Software and currently serves as Senior Vice President for Research. Dr. Willemain also serves as Professor Emeritus of Industrial and Systems Engineering at Rensselaer Polytechnic Institute and as a member of the research staff at the Center for Computing Sciences, Institute for Defense Analyses.


Forecasting With the Right Data


To reap the efficiency benefits of forecasting, you need the most accurate forecasts: forecasts built on the most appropriate historical data. Most discussions of this issue focus on the merits of using demand vs. shipment history, and I’ll comment on this later. But first, let’s talk about the use of net vs. gross data.

Net vs. Gross History

Many planners are inclined to use net sales data to create their forecasts. Systems that track sales capture transactions as they occur and aggregate the results into weekly or monthly totals. In some cases, sales records account for returned purchases as negative sales and compute a net total. These net figures, which often mask real sales patterns, are fed into the forecasting system. The historical data then presents a false picture of what customers wanted and when they wanted it, and that distortion carries forward into the forecast, with less than optimal results.
