The Forecast Matters, but Maybe Not the Way You Think

True or false: The forecast doesn’t matter to spare parts inventory management.

At first glance, this statement seems obviously false. After all, forecasts are crucial for planning stock levels, right?

It depends on what you mean by a “forecast”. If you mean an old-school single-number forecast (“demand for item CX218b will be 3 units next week and 6 units the week after”), then no. If you broaden the meaning of forecast to include a probability distribution taking account of uncertainties in both demand and supply, then yes.

The key reality is that many items, especially spare and service parts, have unpredictable, intermittent demand. (Supplier lead times can also be erratic, especially when parts are sourced from a backlogged OEM.) We have observed that manufacturers and distributors typically experience intermittent demand on roughly 20% of their items, while for MRO-based businesses the percentage grows to 80% or more. This means historical data often show periods of zero demand interspersed with random periods of non-zero demand. Sometimes, these non-zero demands are as low as 1 or 2 units, while at other times, they unexpectedly spike to quantities several times larger than their average.

This isn’t like the kind of data usually faced by your peer “demand planners” in retail, consumer products, and food and beverage. Those folks usually deal with larger quantities having proportionately less randomness. And they can surf on prediction-enhancing features like trends and stable seasonal patterns. Instead, spare parts usage is much more random, throwing a monkey wrench into the planning process, even in the minority of cases in which there are detectable seasonal variations.

In the realm of intermittent demand, the best forecast available will significantly deviate from the actual demand. Unlike consumer products with medium to high volume and frequency, a service part’s forecast can miss the mark by hundreds of percentage points. A forecast of one or two units, on average, will always miss when the actual demand is zero. Even with advanced business intelligence or machine learning algorithms, the error in forecasting the non-zero demands will still be substantial.

Perhaps because of the difficulty of statistical forecasting in the inventory domain, inventory planning in practice often relies on intuition and planner knowledge. Unfortunately, this approach doesn’t scale across tens of thousands of parts. Intuition just cannot cope with the full range of demand and lead time possibilities, let alone accurately estimate the  probability of each possible scenario. Even if your company has one or two exceptional intuitive forecasters, personnel retirements and product line reorganizations mean that intuitive forecasting can’t be relied on going forward.

The solution lies in shifting focus from traditional forecasts to predicting probabilities for each potential demand and lead time scenario. This shift transforms the conversation from an unrealistic “one number plan” to a range of numbers with associated probabilities. By predicting probabilities for each demand and lead time possibility, you can better align stock levels with the risk tolerance for each group of parts.

Software that generates demand and lead time scenarios, repeating this process tens of thousands of times, can accurately simulate how current stocking policies will perform against those scenarios. If the simulated performance falls short, if you are predicted to stock out more often than you are comfortable with, or to be left with excess inventory, you can conduct what-if scenarios to adjust the policies and then predict how the revised policies will fare against random demands and lead times. You can iterate on this process, refining it with each new what-if scenario, or lean on system-prescribed policies that optimally strike a balance between risk and cost.
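
As a concrete illustration, here is a minimal sketch of this kind of simulation in Python. The demand probabilities, lead time, and policy parameters are all made-up assumptions for illustration, not Smart IP&O's actual engine:

import random

def average_stockout_days(reorder_point, order_qty, lead_time_days,
                          n_scenarios=1_000, horizon_days=365):
    """Replay a reorder-point policy against many random demand scenarios and
    return the average number of days per year with unmet demand."""
    total_short_days = 0
    for _ in range(n_scenarios):
        on_hand = reorder_point + order_qty      # start fully stocked
        pipeline = []                            # outstanding orders: (arrival_day, qty)
        for day in range(horizon_days):
            # Receive any replenishment due today
            on_hand += sum(qty for due, qty in pipeline if due == day)
            pipeline = [(due, qty) for due, qty in pipeline if due != day]
            # Intermittent demand: mostly zero, occasionally a spike
            demand = random.choices([0, 1, 2, 8], weights=[85, 9, 4, 2])[0]
            if demand > on_hand:
                total_short_days += 1
            on_hand = max(0, on_hand - demand)
            # Reorder when the inventory position falls to the reorder point
            if on_hand + sum(qty for _, qty in pipeline) <= reorder_point:
                pipeline.append((day + lead_time_days, order_qty))
    return total_short_days / n_scenarios

# What-if: does raising the reorder point buy enough protection?
print(average_stockout_days(reorder_point=3, order_qty=10, lead_time_days=14))
print(average_stockout_days(reorder_point=5, order_qty=10, lead_time_days=14))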

So, if you are planning service and spare parts inventories, stop worrying about predicting demand the way traditional retail and CPG demand planners do it. Focus instead on how your stocking policies will withstand the randomness of the future, adjusting them based on your risk tolerance. To do this, you’ll need the right set of decision support software, and this is how Smart Software can help.


Spare Parts Planning Software Solutions

Smart IP&O’s service parts forecasting software uses a unique empirical probabilistic forecasting approach that is engineered for intermittent demand. For consumable spare parts, our patented and APICS award-winning method rapidly generates tens of thousands of demand scenarios without relying on the assumptions about the nature of demand distributions implicit in traditional forecasting methods. The result is highly accurate estimates of safety stock, reorder points, and service levels, which leads to higher service levels and lower inventory costs.

For repairable spare parts, Smart’s Repair and Return Module accurately simulates the processes of part breakdown and repair. It predicts downtime, service levels, and inventory costs associated with the current rotating spare parts pool. Planners will know how many spares to stock to achieve short- and long-term service level requirements and, in operational settings, whether to wait for repairs to be completed and returned to service or to purchase additional service spares from suppliers, avoiding unnecessary buying and equipment downtime.

Contact us to learn more about how this functionality has helped our customers in the MRO, Field Service, Utility, Mining, and Public Transportation sectors to optimize their inventory. You can also download the Whitepaper here.


White Paper: What You Need to Know about Forecasting and Planning Service Parts


This paper describes Smart Software’s patented methodology for forecasting demand, safety stocks, and reorder points on items such as service parts and components with intermittent demand, and provides several examples of customer success.


    You Need to Team up with the Algorithms

    Over forty years ago, Smart Software consisted of three friends working to start a company in a church basement. Today, our team has expanded to operate from multiple locations across Massachusetts, New Hampshire and Texas, with team members in England, Spain, Armenia and India. Like many of you in your jobs,  we have found ways to make distributed teams work for us and for you.

    This note is about a different kind of teamwork: the collaboration between you and our software that happens at your fingertips. I often write about the software itself and what goes on “under the hood”. This time, my subject is how you should best team up with the software.

    Our software suite, Smart Inventory Planning and Optimization (Smart IP&O™), is capable of massively detailed calculations of future demand and the inventory control parameters (e.g., reorder points and order quantities) that would most effectively manage that demand. But your input is required to make the most of all that power. You need to team up with the algorithms.

    That interaction can take several forms. You can start by simply assessing how you are doing now. The report writing functions in Smart IP&O (Smart Operational Analytics™) can collate and analyze all your transactional data to measure your Key Performance Indicators (KPIs), both financial (e.g., inventory investment) and operational (e.g., fill rates).

    The next step might be to use SIO (Smart Inventory Optimization™), the inventory analytics within Smart IP&O, to play “what-if” games with the software. For example, you might ask “What if we reduced the order quantity on item 1234 from 50 to 40?” The software grinds the numbers to let you know how that would play out, and then you react. This can be useful, but what if you have 50,000 items to consider? You would want to play what-if games for a few critical items, but not all of them.

    The real power comes with using the automatic optimization capability in SIO. Here you can team with the algorithms at scale. Using your business judgement, you can create “groups”, i.e., collections of items that share some critical features. For example, you might create a group for “critical spare parts for electric utility customers” consisting of 1,200 parts. Then again calling on your business judgement, you could specify what item availability standard should apply to all the items in that group (e.g., “at least 95% chance of not stocking out in a year”). Now the software can take over and automatically work out the best reorder points and order quantities for every one of those items to achieve your required item availability at the lowest possible total cost. And that, dear reader, is powerful teamwork.


    Rethinking Forecast Accuracy: A Shift from Accuracy to Error Metrics

    Measuring the accuracy of forecasts is an undeniably important part of the demand planning process. This forecasting scorecard could be built based on one of two contrasting viewpoints for computing metrics. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.

    Accuracy is represented as a percentage between zero and 100, while error percentages start at zero but have no upper limit. Reports of MAPE (mean absolute percent error) or other error metrics can be titled “forecast accuracy” reports, which blurs the distinction. So, you may want to know how to convert from the error viewpoint to the accuracy viewpoint that your company espouses. This blog describes how, with some examples.

    Accuracy metrics are computed such that when the actual equals the forecast, the accuracy is 100%, and when the forecast misses by 100% or more of the actual (e.g., the forecast is at least double the actual), the accuracy is 0%. Reports that compare the forecast to the actual often include the following:

    • The Actual
    • The Forecast
    • Unit Error = Forecast – Actual
    • Absolute Error = Absolute Value of Unit Error
    • Absolute % Error = Abs Error / Actual, as a %
    • Accuracy % = 100% – Absolute % Error

    Look at a couple of examples that illustrate the difference between the approaches. Say the actual is 8 and the forecast is 10.

    Unit Error is 10 – 8 = 2

    Absolute % Error = 2 / 8, as a % = 0.25 * 100 = 25%

    Accuracy = 100% – 25% = 75%.

    Now let’s say the actual is 8 and the forecast is 24.

    Unit Error is 24 – 8 = 16

    Absolute % Error = 16 / 8 as a % = 2 * 100 = 200%

    Accuracy = 100% – 200% = –100%, which is negative and so is set to 0%.

    In the first example, accuracy measurements provide the same information as error measurements, since the forecast and actual are already relatively close. But once the error equals or exceeds the actual (an absolute percent error of 100% or more), accuracy measurements bottom out at zero. Zero does correctly indicate the forecast was not at all accurate. But the second example’s forecast is still far closer to the actual than in a third example, where the actual is 8 and the forecast is 200. That’s a distinction a 0-to-100% range of accuracy doesn’t register. In this final example:

    Unit Error is 200 – 8 = 192

    Absolute % Error = 192 / 8, as a % = 24 * 100 = 2,400%

    Accuracy = 100% – 2,400% = –2,300%, which again is set to 0%.

    Error metrics continue to provide information on how far the forecast is from the actual and arguably better represent forecast accuracy.

    We encourage adopting the error viewpoint. You simply hope for a small error percentage to indicate the forecast was not far from the actual, instead of hoping for a large accuracy percentage to indicate the forecast was close to the actual.  This shift in mindset offers the same insights while eliminating distortions.
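
    The arithmetic is easy to script. Here is a minimal sketch in Python of both viewpoints, using the three examples above:

def error_and_accuracy(actual, forecast):
    """Return (absolute % error, accuracy %) as defined above."""
    abs_pct_error = 100.0 * abs(forecast - actual) / actual   # no upper limit
    accuracy_pct = max(0.0, 100.0 - abs_pct_error)            # negative clamps to 0
    return abs_pct_error, accuracy_pct

for actual, forecast in [(8, 10), (8, 24), (8, 200)]:
    err, acc = error_and_accuracy(actual, forecast)
    print(f"actual={actual}, forecast={forecast}: error={err:,.0f}%, accuracy={acc:.0f}%")

# error = 25%, 200%, 2,400% -- but accuracy = 75%, 0%, 0%:
# the accuracy view cannot distinguish the second forecast from the far worse third.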


    Every Forecasting Model is Good for What it is Designed for

    When you should use traditional extrapolative forecasting techniques.

    With so much hype around new Machine Learning (ML) and probabilistic forecasting methods, the traditional “extrapolative” or “time series” statistical forecasting methods seem to be getting the cold shoulder.  However, it is worth remembering that these traditional techniques (such as single and double exponential smoothing, linear and simple moving averaging, and Winters models for seasonal items) often work quite well for higher volume data. Every method is good for what it was designed to do.  Just apply each appropriately, as in don’t bring a knife to a gunfight and don’t use a jackhammer when a simple hand hammer will do. 

    Extrapolative methods perform well when demand has high volume and is not too granular (i.e., demand is bucketed monthly or quarterly). They are also very fast and do not use as many computing resources as probabilistic and ML methods. This makes them very accessible.
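
    For reference, the simplest of these traditional methods, single exponential smoothing, fits in a few lines of Python (a generic textbook sketch with made-up numbers, not Smart’s implementation):

def single_exponential_smoothing(history, alpha=0.2):
    """One-step-ahead forecast: a weighted average of the history in which each
    older observation is discounted by a further factor of (1 - alpha)."""
    level = history[0]
    for observation in history[1:]:
        level = alpha * observation + (1 - alpha) * level
    return level

monthly_demand = [120, 135, 128, 140, 150, 138, 145, 160]      # higher-volume series
print(round(single_exponential_smoothing(monthly_demand), 1))  # forecast for next month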

    Are the traditional methods as accurate as newer forecasting methods? Smart has found that extrapolative methods do very poorly when demand is intermittent. However, when demand is higher volume, they do only slightly worse than our new probabilistic methods when demand is bucketed monthly. Given their accessibility and speed, and the fact that you are going to apply forecast overrides based on business knowledge, the baseline accuracy difference here will not be material.

    The advantage of more advanced models like Smart’s GEN2 probabilistic methods comes when you need to predict patterns using more granular buckets like daily (or even weekly) data. This is because probabilistic models can simulate day-of-the-week, week-of-the-month, and month-of-the-year patterns that are going to be lost with simpler techniques. Have you ever tried to predict daily seasonality with a Winters model? Here is a hint: it’s not going to work, and it requires lots of engineering.

    Probabilistic methods also provide value beyond the baseline forecast because they generate scenarios to use in stress-testing inventory control models. This makes them more appropriate for assessing, say, how a change in reorder point will impact stockout probabilities, fill rates, and other KPIs. By simulating thousands of possible demands over many lead times (which are themselves presented in scenario form), you’ll have a much better idea of how your current and proposed stocking policies will perform. You can make better decisions on where to make targeted stock increases and decreases.

    So, don’t throw out the old for the new just yet. Just know when you need a hammer and when you need a jackhammer.


    Creating and Exploiting Probabilistic Forecasting Scenarios

    Probabilistic scenarios are sequences of data points generated to represent potential real-world situations. Unlike scenarios in war games or other simulations, these are synthetic time series used as inputs to system models or as intuition-builders for decision-makers.

    For instance, scenarios of future item demand can be fed into Monte Carlo simulation models of inventory control systems, thereby creating a virtual laboratory in which to explore the consequences of management decisions, such as changing reorder points and/or order quantities. In addition, plots of metrics like on-hand inventory or stockouts can help inventory planners deepen their “feel” for the randomness inherent in their operations.

    Figure 1 shows daily demand scenarios generated from a single observed demand series recorded over one year. Note that the same data generating process can “look quite different” in detail from sample to sample. This mimics real life.


    Figure 1: An observed demand sequence and demand scenarios derived from it.


    Figure 2 shows two demand scenarios and their consequences for stock on hand in a particular inventory control system. The difference between the two inventory plots illustrates the degree to which randomness in demand dominates the problem. The top plot shows two episodes of stockout, while the bottom plot shows nine. Averaging over many scenarios will clarify the typical values of Key Performance Indicators (KPIs), such as the average number of stockouts associated with any choice of Reorder Point and Order Quantity (which are 10 and 25, respectively, in Figure 2).


    Figure 2: Two demand scenarios and their consequences for on-hand inventory


    In this note, we’ll describe techniques for creating scenarios and list criteria for evaluating scenario generators.

    Criteria for Scenarios

    As we’ll see below, there are several ways to create scenarios. No matter the source, what criteria define a “good” scenario? There are four main criteria: fidelity, variety, quantity, and cost.

    • Fidelity summarizes how accurately a scenario imitates real-world situations. High fidelity means the scenarios mirror actual events closely, providing a solid foundation for analysis and decision-making.
    • Variety describes the diversity of scenarios a generator can create. A versatile generator can simulate a wide range of potential situations, allowing for a thorough exploration of possibilities and risks.
    • Quantity refers to how many scenarios a generator can produce. A generator that can create a large number of scenarios provides ample data for analysis.
    • Cost considers both the computational and human resources required to produce the scenarios. An efficient scenario generator balances quality with resource usage, ensuring the effort is justified by the value and accuracy of the outcomes.

    Scenario Generation

    Again, think of a scenario as a time series. How are scenarios created?

    1. Geppetto’s Workshop: This approach involves hand-crafting scenarios manually by experts. While it can yield high fidelity (realism), it is very resource-intensive and cannot easily provide the variety that comes only from generating scenarios in large numbers.
    2. Groundhog Day: This method involves repeatedly using a single real-world situation as input. While it’s realistic by definition and cost-effective (no resources are used beyond recording the data), this approach lacks variety and so cannot accurately reflect the diversity of real-world scenarios.
    3. Parametric Models: Examples of parametric models are the classics studied in Statistics 101 classes: the Normal, exponential, Poisson, etc. The demand plots in Figure 2 are generated parametrically, being the squares of Poisson random variables. These models generate an unlimited number of low-cost scenarios having good variety, but they may not always capture the complexity of real-world data, potentially compromising fidelity. When reality is more complicated, these models generate over-simplified scenarios.
    4. Non-Parametric Time Series Bootstraps: This approach can score well on all criteria: fidelity, variety, quantity, and cost. It’s a versatile method that excels in creating massive numbers of realistic scenarios. The synthetic demand histories in Figure 1 are simple bootstrap samples based on the observed values in the top graph; a bare-bones sketch follows this list. (For some nitty-gritty details about generating scenarios, see the links below.)
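
    Here is that sketch: a minimal i.i.d. bootstrap in Python, resampling observed daily demands with replacement to build synthetic histories like those in Figure 1. (Smart’s patented method involves considerably more machinery than this bare-bones version; the demand values below are made up.)

import random

# One recorded year of intermittent daily demand (made-up values, abbreviated)
observed_daily_demand = [0, 0, 3, 0, 1, 0, 0, 0, 7, 0, 2, 0, 0, 1, 0]

def bootstrap_scenarios(history, n_scenarios, horizon):
    """Build each synthetic scenario by resampling the observed demands with
    replacement, so every simulated day shows a value that actually occurred."""
    return [random.choices(history, k=horizon) for _ in range(n_scenarios)]

scenarios = bootstrap_scenarios(observed_daily_demand, n_scenarios=10_000, horizon=365)
print(scenarios[0][:10])  # first ten days of the first synthetic history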

    Exploiting Scenarios

    Scenarios prove their worth in two ways: As inputs to decision making and as intuition-builders. For instance, when demand scenarios are used as inputs to simulation models, they enable stress testing and performance estimation for system design. Scenarios can also serve as intuition-builders for decision-makers or system operators. Their visual representation aids in developing insight into and appreciation for the risks involved in making operational decisions, be they for demand forecasting or inventory management.

    Scenario-based analysis is very computer intensive, especially when the scenarios are generated by bootstrapping. At Smart Software, computation happens in the cloud. Imagine the computational load involved in determining reorder points and order quantities for each of tens of thousands of inventory items using hundreds or thousands of demand simulations for each item. Further imagine the software not only evaluating a specific proposed reorder point/order quantity pair but roaming over the entire “design space” of pairs to find the best pair of control parameters for each item. To make this practical, we take advantage of the parallel processing power of the cloud. Essentially, each inventory item is assigned its own computer to use in the calculations, so that all that computing can happen simultaneously rather than sequentially. Now we can cut loose and really get you the results you need.

    Learning More

    Those interested in further technical details and references can find more information here.

    What Makes a Probabilistic Forecast?

    Probabilistic Forecasting for Intermittent Demand


    A Rough Map of Forecasting-Related Terms

    People new to the jobs of “demand planner” or “supply planner” are likely to have questions about the various forecasting terms and methods used in their jobs. This note may help by explaining these terms and showing how they relate.


    Demand Planning

    Demand planning is about how much of what you have to sell will go out the door in the future, e.g., how many what-nots you will sell next quarter. Here are six methodologies often used in demand planning.

    • Statistical Forecasting
      • These methods use demand history to forecast future values. The two most common methods are curve fitting and data smoothing.
      • Curve fitting matches a simple mathematical function, like the equation for a straight line (y = a + b·t) or an interest-rate-type curve (y = a·b^t), to the demand history. Then it extends that line or curve forward in time as the forecast.
      • In contrast, data smoothing does not result in an equation. Instead it sweeps through the demand history, averaging values along the way, to create a smoother version of the history. These methods are called exponential smoothing and moving average. In the simplest case (i.e., in the absence of trend or seasonality, for which variants exist), the goal is to estimate the current average level of demand and use that as the forecast.
      • These methods produce “point forecasts”, which are single-number estimates for each future time period (e.g., “Sales in March will be 218 units”). Sometimes they come with estimates of potential forecast error bolted on using separate models of demand variability (“Sales in March will be 218 ± 120 units”).
    • Probabilistic Forecasting
      • This approach keys on the randomness of demand and works hard to estimate forecast uncertainty. It regards forecasting less as an exercise in cranking out specific numbers and more as an exercise in risk management.
      • It explicitly models the variability in demand and uses that to present results in the form of large numbers of scenarios constructed to show the full range of possible demand sequences. These are especially useful in tactical supply planning tasks, such as setting reorder points and order quantities.
    • Causal Forecasting
      • Statistical forecasting models use as inputs only the past demand history of the item in question. They regard the up-and-down wiggles in the demand plot as the end result of myriad unnamed factors (interest rates, the price of tea in China, phases of the moon, whatever). Causal forecasting explicitly identifies one or more influences (interest rates, advertising spend, competitors’ prices, …) that could plausibly influence sales. Then it builds an equation relating the numerical values of these “drivers” or “causal factors” to item sales. The equation’s coefficients are estimated by “regression analysis”. (A toy regression sketch appears after this list.)
    • Judgemental Forecasting
      • Golden Gut. Despite the general availability of gobs of data, some companies pay little attention to the numbers and give greater weight to the subjective judgements of an executive deemed to have a “Golden Gut”, which allows him or her to use “gut feel” to predict what future demand will be. If that person has great experience, has spent a career actually looking at the numbers, and is not prone to wishful thinking or other forms of cognitive bias, the Golden Gut can be a cheap, fast way to plan. But there is good evidence from studies of companies run this way that relying on the Golden Gut is risky.
      • Group Consensus. More common is a process that uses a periodic meeting to create a group consensus forecast. The group will have access to shared objective data and forecasts, but members will also have knowledge of factors that may not be measured well or at all, such as consumer sentiment or the stories relayed by sales reps. It is helpful to have a shared, objective starting point for these discussions consisting of some sort of objective statistical analysis. Then the group can consider adjusting the statistical forecast. This process anchors the forecast in objective reality but exploits all the other information available outside the forecasting database.
      • Scenario Generation. Sometimes several people will meet and discuss “strategic what-if” questions. “What if we lose our Australian customers?” “What if our new product roll-out is delayed by six months?” “What if our sales manager for the mid-west jumps to a competitor?” These bigger-picture questions can have implications for item-specific forecasts and might be added to any group-consensus forecasting meeting.
    • New product forecasting
      • New products, by definition, have no sales history to support statistical, probabilistic, or causal forecasting. Subjective forecasting methods can always be used here, but these often rely on a dangerous ratio of hopes to facts. Fortunately, there is at least partial support for objective forecasting in the form of curve fitting.
      • A graph of the cumulative sales of an item often describes some sort of “S-curve”, i.e., a graph that starts at zero, builds up, then levels off to a final lifetime total sales. The curve gets its name because it looks like a letter S somehow smeared and stretched to the right. Now there are an infinite number of S-curves, so forecasters typically pick an equation and subjectively specify some key parameter values, such as when sales will hit 25%, 50% and 75% of total lifetime sales and what that final level will be. This is also overtly subjective, but it produces detailed period-by-period forecasts that can be updated as experience builds up. Finally, S-curves are sometimes shaped to match the known history of a similar, predecessor product (“Sales for our last gizmo looked like this, so let’s use that as a template.”). A small S-curve sketch appears just after this list.
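
    Here is that S-curve sketch: a logistic curve in Python whose parameters (lifetime total, midpoint, steepness) stand in for the planner’s subjective inputs. All numbers are made up for illustration:

import math

def cumulative_sales(t, lifetime_total, midpoint, steepness):
    """Logistic S-curve: starts near zero, rises fastest at the midpoint
    period, and levels off at the lifetime total."""
    return lifetime_total / (1 + math.exp(-steepness * (t - midpoint)))

# Subjective planner inputs: 10,000 lifetime units, 50% point at month 12
total, mid, k = 10_000, 12, 0.35
monthly_forecast = [cumulative_sales(t, total, mid, k) - cumulative_sales(t - 1, total, mid, k)
                    for t in range(1, 25)]
print([round(m) for m in monthly_forecast[:6]])   # period-by-period forecasts, months 1-6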

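    And here is the toy causal-forecasting sketch referenced above: a single driver (advertising spend), with coefficients fit by ordinary least squares. The numbers are invented for illustration:

# Made-up history: one causal driver (advertising spend, $000s) vs. unit sales
ad_spend = [10, 12, 15, 11, 18, 20]
sales = [105, 118, 132, 110, 155, 168]

# Ordinary least squares for sales = a + b * ad_spend
n = len(ad_spend)
mean_x = sum(ad_spend) / n
mean_y = sum(sales) / n
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ad_spend, sales))
     / sum((x - mean_x) ** 2 for x in ad_spend))
a = mean_y - b * mean_x

print(f"sales = {a:.1f} + {b:.2f} * ad_spend")
print("forecast if next period's ad spend is 16:", round(a + b * 16))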

    Supply Planning

    Demand planning feeds into supply planning by predicting future sales (e.g., for finished goods) or usage (e.g., for spare parts). Then it is up to supply planning to make sure the items in question will be available to sell or to use.

    • Dependent demand
      • Dependent demand is demand that can be determined by its relationship to demand for another item. For instance, a bill of materials may show that a little red wagon consists of a body, a pull bar, four wheels, two axles, and various fasteners to keep the wheels on the axles and connect the pull bar to the body. So if you hope to sell 10 little red wagons, you’d better make 10, which means you need 10×2 = 20 axles, 10×4 = 40 wheels, etc. Dependent demand governs raw materials purchasing, component and subsystems purchasing, even personnel hiring (10 wagons need one high-school kid to put them together over a 1-hour shift).
      • If you have multiple products with partially overlapping bills of materials, you have a choice of two forecasting approaches. Suppose you sell not only little red wagons but little blue baby carriages and that both use the same axles. To predict the number of axles you need, you could (1) predict the dependent demand for axles from each product and add the forecasts or (2) observe the total demand history for axles as its own time series and forecast that separately. Which works better is an empirical question that can be tested. (A bill-of-materials sketch appears at the end of this note.)
    • Inventory management
      • Inventory management entails many different tasks. These include setting inventory control parameters such as reorder points and order quantities, reacting to contingencies such as stockouts and order expediting, setting staffing levels, and selecting suppliers.
      • Forecasting plays a role in the first three. The number of replenishment orders that will be made in a year for each product determines how many people are needed to cut POs. The number and severity of stockouts in a year determines the number of contingencies that must be handled. The number of POs and stockouts in a year will be random but governed by the choices of inventory control parameters. The implications of any such choices can be modeled by inventory simulations. These simulations will be driven by detailed demand scenarios generated by probabilistic forecasts.
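
    Here is that bill-of-materials sketch: dependent component demand computed by exploding finished-goods forecasts through hypothetical BOMs (the little red wagon and a blue baby carriage sharing axles and wheels):

# Hypothetical bills of materials: component -> quantity per finished unit
wagon_bom = {"body": 1, "pull_bar": 1, "wheel": 4, "axle": 2, "fastener": 10}
carriage_bom = {"blue_body": 1, "canopy": 1, "wheel": 4, "axle": 2, "fastener": 8}

def explode(product_forecasts):
    """Convert finished-goods forecasts into dependent component demand."""
    required = {}
    for bom, units in product_forecasts:
        for component, qty_per_unit in bom.items():
            required[component] = required.get(component, 0) + qty_per_unit * units
    return required

# Forecast 10 wagons and 5 carriages: shared axles = 10*2 + 5*2 = 30
print(explode([(wagon_bom, 10), (carriage_bom, 5)]))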