How much time should it take to compute statistical forecasts?
The top factors that impact the speed of your forecast engine 

How long should it take for a demand forecast to be computed using statistical methods?  This question is often asked by customers and prospects.  The answer truly depends.  Forecast results for a single item can be computed in the blink of an eye, in as little as a few hundredths of a second, but sometimes they may require as much as five seconds.  To understand the differences, it’s important to understand that there is more involved than grinding through the forecast arithmetic itself.   Here are six factors that influence the speed of your forecast engine.

1) Forecasting method.  Traditional time-series extrapolative techniques (such as exponential smoothing and moving average methods), when cleverly coded, are lightning fast.  For example, the Smart Forecast automatic forecasting engine that leverages these techniques and powers our demand planning and inventory optimization software can crank out statistical forecasts on 1,000 items in 1 second!  Extrapolative methods produce an expected forecast and a summary measure of forecast uncertainty.  However, more complex models in our platform that generate probabilistic demand scenarios take much longer given the same computing resources.  This is partly because they create a much larger volume of output, usually thousands of plausible future demand sequences.  More time, yes, but not time wasted, since these results are much more complete and form the basis for downstream optimization of inventory control parameters.  (A minimal code sketch contrasting the two approaches appears after this list of factors.)

2) Computing resources.  The more resources you throw at the computation, the faster it will be.  However, those resources cost money, and it may not be economical to invest in them.  For example, to make certain types of machine learning-based forecasts work, the system will need to multi-thread computations across multiple servers to deliver results quickly.  So, make sure you understand the assumed compute resources and associated costs.  Our computations happen on the Amazon Web Services cloud, so it is possible to pay for a great deal of parallel computation if desired.

3) Number of time series.  Do you have to forecast only a few hundred items in a single location or many thousands of items across dozens of locations?  The greater the number of SKU x Location combinations, the greater the time required.  However, it is possible to trim the time needed to get demand forecasts through better demand classification.  For example, it may not be necessary to forecast every single SKU x Location combination: modern demand planning software can first subset the data based on volume/frequency classifications before running the forecast engine.  We’ve observed situations where over one million SKU x Location combinations existed, but only ten percent had any demand in the preceding twelve months.

4) Historical Bucketing.  Are you forecasting using daily, weekly, or monthly time buckets?  The more granular the bucketing, the more time it is going to take to compute statistical forecasts.  Many companies will wonder, “Why would anyone want to forecast on a daily basis?” However, state-of-the-art demand forecasting software can leverage daily data to detect simultaneous day-of-week and week-of-month patterns that would otherwise be obscured with traditional monthly demand buckets. And the speed of business continues to accelerate, threatening the competitive viability of the traditional monthly planning tempo.

5) Amount of History.  Are you limiting the model by only feeding it the most recent demand history, or are you feeding all available history to the demand forecasting software? The more history you feed the model, the more data must be analyzed and the longer it is going to take.

6) Additional analytical processing.  So far, we’ve imagined feeding items’ demand history in and getting forecasts out. But the process can also involve additional analytical steps that can improve results. Examples include:

a) Outlier detection and removal to minimize the distortion caused by one-off events like storm damage.

b) Machine learning that decides how much history should be used for each item by detecting regime change.

c) Causal modeling that identifies how changes in demand drivers (such as price, interest rate, customer sentiment, etc.) impact future demand.

d) Exception reporting that uses data analytics to identify unusual situations that merit further management review.
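
To make the difference described in factor 1 concrete, here is a minimal, illustrative sketch in Python. It is not Smart Software’s production code; the function names are hypothetical and the bootstrap is only a generic stand-in for probabilistic scenario generation. Single exponential smoothing returns one number per item, while scenario generation returns thousands of simulated demand sequences, which is why it needs more compute and produces far more output.

    import numpy as np

    def single_exponential_smoothing(history, alpha=0.2):
        # One pass over the history; returns a single flat forecast level.
        level = history[0]
        for x in history[1:]:
            level = alpha * x + (1 - alpha) * level
        return level

    def bootstrap_demand_scenarios(history, horizon=12, n_scenarios=10_000, seed=42):
        # Thousands of plausible future demand sequences, resampled from the history.
        rng = np.random.default_rng(seed)
        return rng.choice(history, size=(n_scenarios, horizon), replace=True)

    history = np.array([12, 9, 14, 11, 10, 13, 12, 15, 9, 11, 14, 10])
    point_forecast = single_exponential_smoothing(history)    # one number out
    scenarios = bootstrap_demand_scenarios(history)            # a 10,000 x 12 matrix out

The scenario matrix above holds 120,000 numbers for a single item, which hints at why scenario-based engines need more time and memory even before any downstream inventory optimization runs.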

 

The Rest of the Story. It’s also critical to understand that the time to get an answer involves more than the speed of forecasting computations per se.  Data must be loaded into memory before computing can begin. Once the forecasts are computed, your browser must load the results so that they may be rendered on screen for you to interact with.  If you re-forecast a product, you may choose to save the results.  If you are working with product hierarchies (aggregating item forecasts up to product families, families up to product lines, etc.), the new forecast is going to impact the hierarchy, and everything must be reconciled.   All of this takes time.

Fast Enough for You? When you are evaluating software to see whether your need for speed will be satisfied, all of this can be tested as part of a proof of concept or trial offered by demand planning software solution providers.  Test it out, and make sure that the compute, load, and save times are acceptable given the volume of data and forecasting methods you want to use to support your process.

6 Do’s and Don’ts for Spare Parts Planning

Managing spare parts inventories can feel impossible. You don’t know what will break and when. Feedback from mechanical departments and maintenance teams is often inaccurate. Planned maintenance schedules are often shifted around, making them anything but “planned.”  Usage (i.e., demand) patterns are most often extremely intermittent: demand jumps randomly between zero and something else, often a surprisingly big number. Intermittency, combined with the lack of significant trend or seasonal patterns, renders traditional time-series forecasting methods inaccurate. The large number of part-by-location combinations makes it impossible to manually create or even review forecasts for individual parts.  Given all these challenges, we thought it would be helpful to outline a number of do’s (and their associated don’ts).

  1. Do use probabilistic methods to compute reorder points and Min/Max levels
    Basing stocking decisions on average daily usage isn’t the right answer. Nor is reliance on traditional forecasting methods like exponential smoothing models. Neither approach works when demand is intermittent because neither takes proper account of demand volatility. Probabilistic methods that simulate thousands of possible demand scenarios work best. They provide a realistic estimate of the demand distribution and can handle all the zeros and random non-zeros. This ensures the inventory level is right-sized to hit whatever service level target you choose. (A simple, illustrative sketch of this scenario-based approach appears after this list.)
     
  2. Do use service levels instead of rule-of-thumb methods to determine stocking levels
    Many parts planning organizations rely on multiples of daily demand and other rules of thumb to determine stocking policies. For example, reorder points are often based on doubling average demand over the lead time, or on applying some other multiple depending on the importance of the item. However, averages don’t account for how volatile (or noisy) a part’s demand is, so this approach overstocks the less noisy parts and understocks the noisier ones.
     
  3. Do frequently recompute stocking policies
    Just because demand is intermittent doesn’t mean nothing changes over time. Yet after interviewing hundreds of companies managing spare parts inventory, we find that fewer than 10% recompute stocking policies monthly. Many never recompute stocking policies until there is a “problem.” Across thousands of parts, usage is guaranteed to drift up or down on at least some of them. Supplier lead times can also change. Using an outdated reorder point will cause orders to trigger too soon or too late, creating lots of problems. Recomputing policies every planning cycle ensures inventory stays right-sized. Don’t be reactive and wait for a problem to occur before considering whether the Min or Max should be modified. By then it’s too late – it’s like waiting for your brakes to fail before making a repair. Don’t worry about the effort of recomputing Min/Max values for large numbers of SKUs: modern software does it automatically. Remember: recalibrating your stocking policies is preventive maintenance against stockouts!
     
  4. Do get buy-in on targeted service levels
    Inventory is expensive and should be right-sized by striking a balance between the organization’s willingness to stock out and its willingness to budget for spares. Too often, planners make decisions in isolation based on pain avoidance or maintenance technicians’ requests, without considering how spending on one part impacts the organization’s ability to spend on another. Excess inventory on one part hurts service levels on other parts by disproportionately consuming the inventory budget. Make sure that service level targets, and the inventory costs of achieving them, are understood and agreed to.
     
  5. Do run a separate planning process for repairable parts
    Some parts are very expensive to replace, so it is preferable to send them to repair facilities or back to the OEM for repair. Accounting for the supply-side randomness of when repairable parts will be returned, and knowing whether to wait for a repair or to purchase an additional spare, are critical to ensuring item availability without inventory bloat. This requires specialized reporting and the use of probabilistic models. Don’t treat repairable parts like consumable parts when planning.
     
  6. Do count what is purchased against the budget – not just what is consumed
    Many organizations allocate total part purchases to a separate corporate budget and only ding the mechanical or maintenance team’s budget for parts that are used. In most MRO organizations, especially in public transit and utilities, the repair teams dictate what is purchased. If what is purchased doesn’t count against their budget, they will over-buy to ensure there is never any chance of a stockout. With little incentive to get it right, tens of millions in excess inventory can end up being purchased. If what is purchased is reflected in the budget, far more attention will be paid to purchasing only what is truly needed. Recognizing that excess inventory hurts service by robbing the organization of cash that could otherwise be spent on understocked parts is an important step toward responsible inventory purchasing.
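
As referenced in point 1 above, here is a minimal, illustrative sketch in Python of a scenario-based reorder point for intermittent demand. It is a plain bootstrap, not Smart’s patented method, and the usage history, lead time, and service level target are made-up assumptions.

    import numpy as np

    def reorder_point(daily_usage, lead_time_days, service_level=0.95,
                      n_scenarios=10_000, seed=0):
        # Simulate many lead-time demand scenarios and take the service-level percentile.
        rng = np.random.default_rng(seed)
        draws = rng.choice(daily_usage, size=(n_scenarios, lead_time_days), replace=True)
        lead_time_demand = draws.sum(axis=1)
        return float(np.quantile(lead_time_demand, service_level))

    # Intermittent usage: mostly zeros with occasional spikes.
    usage = np.array([0, 0, 4, 0, 0, 0, 0, 9, 0, 0, 2, 0, 0, 0, 6, 0, 0, 0, 0, 3] * 6)
    rop_95 = reorder_point(usage, lead_time_days=30, service_level=0.95)
    rule_of_thumb = 2 * usage.mean() * 30   # "double the average over the lead time" (point 2)
    print(rop_95, rule_of_thumb)

Comparing the two outputs for your own parts is a quick way to see how far rule-of-thumb reorder points can drift from the level actually needed to hit a chosen service level.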

Spare Parts Planning Software solutions

Smart IP&O’s service parts forecasting software uses a unique empirical probabilistic forecasting approach that is engineered for intermittent demand. For consumable spare parts, our patented, APICS award-winning method rapidly generates tens of thousands of demand scenarios without relying on the assumptions about the nature of demand distributions implicit in traditional forecasting methods. The result is highly accurate estimates of safety stock, reorder points, and service levels, which leads to higher service levels and lower inventory costs. For repairable spare parts, Smart’s Repair and Return Module accurately simulates the processes of part breakdown and repair. It predicts downtime, service levels, and inventory costs associated with the current rotating spare parts pool. Planners will know how many spares to stock to achieve short- and long-term service level requirements and, in operational settings, whether to wait for repairs to be completed and returned to service or to purchase additional service spares from suppliers, avoiding unnecessary buying and equipment downtime.

Contact us to learn more about how this functionality has helped our customers in the MRO, Field Service, Utility, Mining, and Public Transportation sectors to optimize their inventory. You can also download the Whitepaper here.

White Paper: What You Need to Know About Forecasting and Planning Service Parts

 

This paper describes Smart Software’s patented methodology for forecasting demand, safety stocks, and reorder points on items such as service parts and components with intermittent demand, and provides several examples of customer success.

 

    Do your statistical forecasts suffer from the wiggle effect?

     What is the wiggle effect? 

    It’s when your statistical forecast mimics the ups and downs observed in your demand history even though there isn’t a real pattern behind them.  It’s important to make sure your forecasts don’t wiggle unless there is a real pattern.

    Here is a transcript from a recent customer conversation where this issue was discussed:

    Customer: “The forecast isn’t picking up on the patterns I see in the history.  Why not?” 

    Smart:  “If you look closely, the ups and downs you see aren’t patterns.  It’s really noise.”  

    Customer:  “But if we don’t predict the highs, we’ll stock out.”

    Smart: “If the forecast were to ‘wiggle’ it would be much less accurate.  The system will forecast whatever pattern is evident, in this case a very slight uptrend.  We’ll buffer against the noise with safety stocks. The wiggles are used to set the safety stocks.”

    Customer: “Ok. Makes sense now.” 

    [Graphic: Do your statistical forecasts suffer from the wiggle effect]

    The wiggle looks reassuring but, in this case, it results in an incorrect demand forecast.  The ups and downs aren’t really occurring at the same times each month.  A better statistical forecast is shown in light green.

    How to Handle Statistical Forecasts of Zero

    A statistical forecast of zero can cause lots of confusion for forecasters, especially when the historical demand is non-zero.  Sure, it’s obvious that demand is trending downward, but should it trend to zero?  When the older demand is much greater than the more recent demand, and the more recent demand is very low volume (i.e., 1, 2, or 3 units demanded), the answer is, statistically speaking, yes.  However, this might not jibe with the planner’s business knowledge and expected minimum level of demand.  So, what should a forecaster do to correct this?  Here are three suggestions:

     

    1. Limit the historical data fed to the model. In a down-trending situation, the older data is often much greater than the recent data.  When that older, much higher-volume demand is ignored, the downtrend won’t be nearly as significant.  You’ll still forecast a downtrend, but the results are more likely to be in line with business expectations.
    2. Try trend dampening. Smart Demand Planner has a feature called “trend hedging” that enables users to define how a trend should phase out over time. The higher the percentage trend hedge (0-100%), the more pronounced the trend dampening.  A forecasted trend will not continue through the whole forecast horizon, so on a downtrend the demand forecast will start to flatten before it hits zero.  (A generic sketch of damped-trend smoothing follows this list.)
    3. Change the forecast model. Switch from a trending method like Double Exponential Smoothing or Linear Moving Average to a non-trending method such as Single Exponential Smoothing or Simple Moving Average. You won’t capture the downtrend, but at least your forecast won’t be zero, making it more likely to be accepted by the business.
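
    As mentioned in suggestion 2, trend dampening phases a trend out over the forecast horizon.  Below is a minimal, generic damped-trend (Holt-style) sketch in Python for illustration only; it is not Smart Demand Planner’s trend hedging implementation, and the phi parameter is an assumed stand-in for the trend hedge percentage.

        def damped_trend_forecast(history, horizon=12, alpha=0.3, beta=0.1, phi=0.8):
            # Holt's linear method with a damping factor phi in (0, 1].
            level, trend = history[0], history[1] - history[0]
            for x in history[2:]:
                prev_level = level
                level = alpha * x + (1 - alpha) * (prev_level + phi * trend)
                trend = beta * (level - prev_level) + (1 - beta) * phi * trend
            # The h-step-ahead trend contribution is phi + phi^2 + ... + phi^h,
            # so the forecast flattens instead of marching straight down to zero.
            return [level + trend * sum(phi ** k for k in range(1, h + 1))
                    for h in range(1, horizon + 1)]

        history = [40, 36, 33, 30, 26, 24, 21, 19, 16, 15, 13, 12]   # down-trending demand
        print([round(f, 1) for f in damped_trend_forecast(history)])

    With phi below 1, the projected downtrend levels off after a few periods rather than driving the forecast to zero.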

    Why Days of Supply Targets Don’t Work when Computing Safety Stocks

    CFOs tell us they need to spend less on inventory without impacting sales.  One way to do that is to move away from using targeted days of supply to determine reorder points and safety stock buffers.  Here is how a days of supply model works:

    1. Compute average demand per day and multiply it by the supplier lead time in days to get lead time demand.
    2. Pick a days of supply buffer (e.g., 15, 30, or 45 days), using larger buffers for more important items and smaller buffers for less important items.
    3. Convert the buffer to units (buffer days × average daily demand), add it to lead time demand to get the reorder point, and order more when on-hand inventory falls below the reorder point. (A small worked example follows this list.)
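
    For concreteness, here is the same arithmetic with made-up numbers (2 units per day, a 30-day lead time, and a 15-day buffer):

        avg_daily_demand = 2.0             # units per day (made-up)
        lead_time_days = 30
        buffer_days = 15                   # the chosen days-of-supply buffer

        lead_time_demand = avg_daily_demand * lead_time_days               # 60 units
        reorder_point = lead_time_demand + avg_daily_demand * buffer_days  # 60 + 30 = 90 units
        # Reorder whenever on-hand inventory falls below 90 units.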

    Here is what is wrong with this approach:

    1. The average doesn’t account for seasonality and trend – you’ll miss obvious patterns unless you spend lots of time manually adjusting for them.
    2. The average doesn’t consider how predictable an item is – you’ll overstock predictable items and understock less predictable ones, because the same days of supply on different items yields very different stockout risks. (See the sketch after this list.)
    3. The average doesn’t tell a planner how stockout risk is impacted by the level of inventory – you’ll have no idea whether you are understocked, overstocked, or have just enough. You are essentially planning with blinders on.
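
    To illustrate point 2, here is a small Python sketch (illustrative numbers, and a normal approximation of lead-time demand) showing that the same 15-day buffer leaves two items with the same average demand at very different stockout risks once their volatility differs:

        import math

        def stockout_risk(mean_daily, std_daily, lead_time_days, buffer_days):
            # P(lead-time demand exceeds the days-of-supply reorder point), normal approximation.
            reorder_point = mean_daily * (lead_time_days + buffer_days)
            mu = mean_daily * lead_time_days
            sigma = std_daily * math.sqrt(lead_time_days)
            return 0.5 * math.erfc((reorder_point - mu) / (sigma * math.sqrt(2)))

        # Same average demand (2 per day) and the same 15-day buffer for both items...
        print(stockout_risk(2.0, std_daily=1.0, lead_time_days=30, buffer_days=15))  # steady item: near-zero risk
        print(stockout_risk(2.0, std_daily=6.0, lead_time_days=30, buffer_days=15))  # noisy item: roughly an 18% risk

    Under a fixed days-of-supply rule both items get the same 90-unit reorder point, yet only one of them is actually protected.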

    There are many other “rule of thumb” approaches that are equally problematic.  You can learn more about them in this post.

    A better way to plan the right amount of safety stock is to leverage probability models that identify exactly how much stock is needed given the stockout risk you are willing to accept.  Below is a screenshot of Smart Inventory Optimization doing exactly that.  First, it details the predicted service levels (the probability of not stocking out) associated with the current days of supply logic.  The planner can now see the parts where the predicted service level is too low or too costly, and can make immediate corrections by targeting the desired service levels and level of inventory investment.  Without this information, a planner won’t know whether the targeted days of safety stock is too much, too little, or just right, resulting in overstocks and shortages that cost market share and revenue.

    [Screenshot: Smart Inventory Optimization – Computing Safety Stocks]