Top Differences Between Inventory Planning for Finished Goods and for MRO and Spare Parts

What’s different about inventory planning for Maintenance, Repair, and Operations (MRO) compared to inventory planning in manufacturing and distribution environments? In short, it’s the nature of the demand patterns combined with the lack of actionable business knowledge.

Demand Patterns

Manufacturers and distributors tend to focus on the top sellers that generate the majority of their revenue. These items typically have high demand that is relatively easy to forecast with traditional time series models that capitalize on predictable trend and/or seasonality. In contrast, MRO planners almost always deal with intermittent demand, which is sparser, more random, and harder to forecast. Furthermore, the fundamental quantities of interest differ: MRO planners ultimately care most about the "when" question (when will something break?), whereas manufacturers and distributors focus on the "how much" question of units sold.

 

Business Knowledge

Manufacturing and distribution planners can often count on gathering customer and sales feedback, which can be combined with statistical methods to improve forecast accuracy. On the other hand, bearings, gears, consumable parts, and repairable parts are rarely willing to share their opinions. With MRO, business knowledge about which parts will be needed and when just isn't reliable (excepting planned maintenance, when higher-volume consumable parts are replaced). So, MRO inventory planning succeeds only as far as its probability models can predict future usage. And since demand is so intermittent, traditional approaches can't get past Go.

 

Methods for MRO

In practice, it is common for MRO and asset-intensive businesses to manage inventories by resorting to static Min/Max levels based on subjective multiples of average usage, supplemented by occasional manual overrides based on gut feel. The process becomes a bad mixture of static and reactive, with the result that a lot of time and money is wasted on expediting.

There are alternative planning methods based more on math and data, though this style of planning is less common in MRO than in the other domains. There are two leading approaches to modeling part and machine breakdown: models based on reliability theory and “condition-based maintenance” models based on real-time monitoring.

 

Reliability Models

Reliability models are the simpler of the two and require less data. They assume that all items of the same type, say a certain spare part, are statistically equivalent. Their key component is a “hazard function”, which describes the risk of failure in the next little interval of time. The hazard function can be translated into something better suited for decision making: the “survival function”, which is the probability that the item is still working after X amount of use (where X might be expressed in days, months, miles, uses, etc.). Figure 1 shows a constant hazard function and its corresponding survival function.

 


Figure 1: Constant hazard function and its survival function

 

A hazard function that doesn’t change implies that only random accidents will cause a failure. In contrast, a hazard function that increases over time implies that the item is wearing out. And a decreasing hazard function implies that an item is settling in. Figure 2 shows an increasing hazard function and its corresponding survival function.

 


Figure 2: Increasing hazard function and its survival function
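To make the hazard/survival relationship concrete, here is a minimal sketch in Python. It assumes two textbook lifetime models, not tied to any particular product: an exponential life for the constant-hazard case of Figure 1, and a Weibull life (shape > 1) for the increasing-hazard, wearing-out case of Figure 2.

```python
import math

def survival_constant(t, hazard):
    """Constant hazard (exponential life): S(t) = exp(-h * t)."""
    return math.exp(-hazard * t)

def survival_weibull(t, shape, scale):
    """Weibull life: the hazard increases over time when shape > 1.
    S(t) = exp(-(t / scale) ** shape)."""
    return math.exp(-((t / scale) ** shape))

# A part with a 1% chance of failing in any given day of use:
print(round(survival_constant(30, 0.01), 3))   # chance it survives 30 days
# A wearing-out part (shape=2) with a characteristic life of 100 days:
print(round(survival_weibull(30, 2.0, 100.0), 3))
```

The survival function answers the planner's "when" question directly: the probability the item is still working after X days, miles, or uses.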

 

Reliability models are often used for inexpensive parts, such as mechanical fasteners, whose replacement may be neither difficult nor expensive (but still might be essential).

 

Condition-Based Maintenance

Models based on real-time monitoring are used to support condition-based maintenance (CBM) for expensive items like jet engines. These models use data from sensors embedded in the items themselves. Such data are usually complex and proprietary, as are the probability models supported by the data. The payoff from real-time monitoring is that you can see trouble coming, i.e., the deterioration is made visible, and forecasts can predict when the item will hit its red line and therefore need to be taken off the field of play. This allows individualized, pro-active maintenance or replacement of the item.

Figure 3 illustrates the kind of data used in CBM. Each time the system is used, there is a contribution to its cumulative wear and tear. (However, note that sometimes use can improve the condition of the unit, as when rain helps keep a piece of machinery cool). You can see the general trend upward toward a red line after which the unit will require maintenance. You can extrapolate the cumulative wear to estimate when it will hit the red line and plan accordingly.

 


Figure 3: Illustrating real-time monitoring for condition-based maintenance

 

To my knowledge, nobody makes such models of their finished goods customers to predict when and how much they will next order, perhaps because the customers would object to wearing brain monitors all the time. But CBM, with its complex monitoring and modeling, is gaining in popularity for can’t-fail systems like jet engines. Meanwhile, classical reliability models still have a lot of value for managing large fleets of cheaper but still essential items.

 

Smart’s approach
The reliability and condition-based maintenance approaches described above impose a data collection and cleansing burden that many MRO companies are unable to manage. For those companies, Smart offers an approach that does not require development of reliability models. Instead, it exploits usage data in a different way. It leverages probability-based models of both usage and supplier lead times to simulate thousands of possible scenarios for replenishment lead times and demand. The result is an accurate distribution of demand and lead times for each consumable part that can be exploited to determine optimal stocking parameters. Figure 4 shows a simulation that begins with a scenario for spare part demand (upper plot) and then produces a scenario of on-hand supply for particular choices of Min/Max values (lower plot). Key Performance Indicators (KPIs) can be estimated by averaging the results of many such simulations.


Figure 4: An example of a simulation of spare part demand and on-hand inventory
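For readers who want the flavor of the scenario approach, here is a toy sketch (not Smart's actual implementation) of how resampled usage and lead-time histories can be combined into a distribution of lead-time demand, from which a Min can be read off at a target service level. All numbers below are invented.

```python
import random

def simulate_lead_time_demand(daily_usage, lead_times, n_scenarios=10000, seed=42):
    """Resample historical usage and lead times to build a distribution
    of total demand over the replenishment lead time."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        lt = rng.choice(lead_times)                        # one lead-time scenario
        demand = sum(rng.choice(daily_usage) for _ in range(lt))
        scenarios.append(demand)
    return sorted(scenarios)

def reorder_point(scenarios, service_level):
    """Pick the demand level that covers `service_level` of scenarios (the Min)."""
    idx = min(int(service_level * len(scenarios)), len(scenarios) - 1)
    return scenarios[idx]

usage = [0, 0, 0, 1, 0, 2, 0, 0, 5, 0, 0, 1]   # intermittent daily usage history
lts = [7, 7, 10, 14]                            # observed supplier lead times, days
scen = simulate_lead_time_demand(usage, lts)
print(reorder_point(scen, 0.95))                # Min that covers 95% of scenarios
```

Averaging KPIs over many such scenarios, rather than trusting a single "most likely" forecast, is what makes the resulting stocking parameters robust to intermittent demand.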

You can read about Smart’s approach to forecasting spare parts here: https://smartcorp.com/wp-content/uploads/2019/10/Probabilistic-Forecasting-for-Intermittent-Demand.pdf

 

 

Spare Parts Planning Software Solutions

Smart IP&O’s service parts forecasting software uses a unique empirical probabilistic forecasting approach that is engineered for intermittent demand. For consumable spare parts, our patented and APICS award-winning method rapidly generates tens of thousands of demand scenarios without relying on the assumptions about the nature of demand distributions implicit in traditional forecasting methods. The result is highly accurate estimates of safety stock, reorder points, and service levels, which leads to higher service levels and lower inventory costs. For repairable spare parts, Smart’s Repair and Return Module accurately simulates the processes of part breakdown and repair. It predicts downtime, service levels, and inventory costs associated with the current rotating spare parts pool. Planners will know how many spares to stock to achieve short- and long-term service level requirements. In operational settings, they will also know whether to wait for repairs to be completed and returned to service or to purchase additional service spares from suppliers, avoiding unnecessary buying and equipment downtime.

Contact us to learn more about how this functionality has helped our customers in the MRO, Field Service, Utility, Mining, and Public Transportation sectors to optimize their inventory. You can also download the Whitepaper here.

 

 

White Paper: What You Need to Know about Forecasting and Planning Service Parts

 

This paper describes Smart Software’s patented methodology for forecasting demand, safety stocks, and reorder points on items such as service parts and components with intermittent demand, and provides several examples of customer success.

 

    How Are We Doing? KPI’s and KPP’s

    Dealing with the day-to-day of inventory management can keep you busy. There’s the usual rhythm of ordering, receiving, forecasting and planning, and moving things around in the warehouse. Then there are the frenetic times – shortages, expedites, last-minute calls to find new suppliers.

    All this activity works against taking a moment to see how you’re doing. But you know you have to get your head up now and then to see where you’re heading. For that, your inventory software should show you metrics – and not just one, but a full set of metrics or KPI’s – Key Performance Indicators.

    Multiple Metrics

    Depending on your role in your organization, different metrics will have different salience. If you are on the finance side of the house, inventory investment may be top of mind: how much cash is tied up in inventory? If you’re on the sales side, item availability may be top of mind: what’s the chance that I can say “yes” to an order? If you’re responsible for replenishment, how many PO’s will your people have to cut in the next quarter?

    Availability Metrics

    Let’s circle back to item availability. How do you put a number on that? The two most commonly used availability metrics are “service level” and “fill rate.” What’s the difference? It’s the difference between saying “We had an earthquake yesterday” and saying, “We had an earthquake yesterday, and it was a 6.4 on the Richter scale.” Service level records the frequency of stockouts no matter their size; fill rate reflects their severity. The two can seem to point in opposite directions, which causes some confusion. You can have a good service level, say 90%, but an embarrassing fill rate, say 50%. Or vice versa. What makes them different is the distribution of demand sizes. For instance, if the distribution is very skewed, so most demands are small but some are huge, you might get the 90%/50% split mentioned above. If your focus is on how often you have to backorder, service level is more relevant. If your worry is how big an overnight expedite can get, fill rate is more relevant.
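    A tiny worked example makes the split concrete. The demand numbers below are invented to be skewed, mostly small orders plus one huge one, which is exactly the situation that pulls the two metrics apart:

```python
def availability_metrics(demands, stock):
    """Per-period service level vs. fill rate for a fixed stock level."""
    periods_ok = sum(1 for d in demands if d <= stock)   # periods with no stockout
    filled = sum(min(d, stock) for d in demands)         # units actually shipped
    total = sum(demands)
    service_level = periods_ok / len(demands)
    fill_rate = filled / total
    return service_level, fill_rate

# Skewed demand: mostly small orders, one huge one.
demands = [1, 1, 2, 1, 1, 1, 2, 1, 1, 40]
sl, fr = availability_metrics(demands, stock=5)
print(f"service level {sl:.0%}, fill rate {fr:.0%}")
```

    With a stock of 5 units, nine of ten periods are covered (a 90% service level), but the one big order drags the fill rate down to about 31%.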

    One Graph to Rule them All

    A graph of on-hand inventory can provide the basis for calculating multiple KPI’s. Consider Figure 1, which plots on-hand inventory each day for a year. This plot contains the information needed to calculate several metrics: inventory investment, service level, fill rate, reorder rate, and more.

    Figure 1: Key performance indicators and parameters for inventory management

    Inventory investment: The average height of the graph (the average number of units on hand), multiplied by the unit cost of the item, gives the dollar value of the average inventory investment.

    Service level: The fraction of inventory cycles that end above zero. Inventory cycles are marked by the upward jumps occasioned by the arrival of replenishment orders.

    Fill rate: The amount by which inventory drops below zero and how long it stays there combine to determine fill rate.

    In this case, the average number of units on hand was 10.74, the service level was 54%, and the fill rate was 91%.

     

    KPI’s and KPP’s

    In the over forty years since we founded Smart Software, I have never seen a customer produce a plot like Figure 1.  Those who are further along in their development do produce and pay attention to reports listing their KPI’s in tabular form, but they don’t look at such a graph. Nevertheless, that graph has value for developing insight into the random rhythms of inventory as it rises and falls.

    It is especially useful prospectively. Given market volatility, key variables like supplier lead times, average demand, and demand variability all shift over time. This implies that key control parameters like reorder points and order quantities must adjust to these shifts. For instance, if a supplier says they’ll have to increase their average lead time by 2 days, this will impact your metrics negatively, and you may need to increase your reorder point to compensate. But increase it by how much?

    Here is where modern inventory software comes in. It will let you propose an adjustment and then see how things will play out. Plots like Figure 1 let you see and get a feel for the new regime. And the plots can be analyzed to compute KPP’s – Key Performance Predictions.

    KPP’s help take the guesswork out of adjustments. You can simulate what will happen to your KPI’s if you change them in response to changes in your operating environment – and how bad things will get if you make no changes.


    Confused about AI and Machine Learning?

    Are you confused about what AI is and what machine learning is? Are you unsure why knowing more will help you with your job in inventory planning? Don’t despair. You’ll be ok, and we’ll show you how some of whatever-it-is can be useful.

    What is and what isn’t

    What is AI and how does it differ from ML? Well, what does anybody do these days when they want to know something? They Google it. And when they do, the confusion starts.

    One source says that the neural net methodology called deep learning is a subset of machine learning, which is a subset of AI. But another source says that deep learning is already a part of AI because it sort of mimics the way the human mind works, while machine learning doesn’t try to do that.

    One source says there are two types of machine learning: supervised and unsupervised. Another says there are four: supervised, unsupervised, semi-supervised and reinforcement.

    Some say reinforcement learning is machine learning; others call it AI.

    Some of us traditionalists call a lot of it “statistics”, though not all of it is.

    In the naming of methods, there is a lot of room for both emotion and salesmanship. If a software vendor thinks you want to hear the phrase “AI,” they may well say it just to make you happy.

    Better to focus on what comes out at the end

    You can avoid some confusing hype if you focus on the end result you get from some analytic technology, regardless of its label. There are several analytical tasks that are relevant to inventory planners and demand planners. These include clustering, anomaly detection, regime change detection, and regression analysis. All four methods are usually, but not always, classified as machine learning methods. But their algorithms can come straight out of classical statistics.

    Clustering

    Clustering means grouping together things that are similar and distancing them from things that are dissimilar. Sometimes clustering is easy: to separate your customers geographically, simply sort them by state or sales region. When the problem is not so dead obvious, you can use data and clustering algorithms to get the job done automatically even when dealing with massive datasets.

    For example, Figure 1 illustrates a clustering of “demand profiles”, which in this case divides all of a customer’s items into nine clusters based on the shape of their cumulative demand curves. Cluster 1.1 in the top left contains items whose demand has been petering out, while Cluster 3.1 in the bottom left contains items whose demand has accelerated. Clustering can also be done on suppliers. The choice of the number of clusters is typically left to user judgment, but ML can guide that choice. For example, a user might instruct the software to “break my parts into 4 clusters,” but ML may reveal that there are really 6 distinct clusters the user should analyze.

     


    Figure 1: Clustering items based on the shapes of their cumulative demand
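    As an illustration of the idea (not Smart's algorithm), the sketch below normalizes each item's cumulative demand curve so that only its shape matters, then groups the curves with a tiny hand-rolled k-means. The item histories are invented: two "petering out" items and two "accelerating" items.

```python
import random

def cumulative_profile(series):
    """Normalized cumulative demand curve: captures shape, not scale."""
    total = sum(series) or 1
    running, curve = 0, []
    for x in series:
        running += x
        curve.append(running / total)
    return curve

def kmeans(points, k, iters=50, seed=0):
    """A tiny k-means on equal-length profiles (illustration only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each profile to its nearest center (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # recompute centers as coordinate-wise means of each group
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

# Two "petering out" items and two "accelerating" items:
items = [[5, 3, 2, 1, 0], [6, 4, 1, 1, 0], [0, 1, 2, 4, 6], [1, 1, 2, 3, 7]]
profiles = [cumulative_profile(s) for s in items]
centers, groups = kmeans(profiles, k=2)
print([len(g) for g in groups])
```

    Running the same data with different values of k is exactly how software can suggest that, say, 6 clusters describe the parts better than the 4 the user asked for.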

    Anomaly Detection

    Demand forecasting is traditionally done using time series extrapolation. For instance, simple exponential smoothing works to find the “middle” of the demand distribution at any time and project that level forward. However, if there has been a sudden, one-time jump up or down in demand in the recent past, that anomalous value can have a significant but unwelcome effect on the near-term forecast.  Just as serious for inventory planning, the anomaly can have an outsized effect on the estimate of demand variability, which goes directly to the calculation of safety stock requirements.

    Planners may prefer to find and remove such anomalies (and maybe do offline follow-up to find out the reason for the weirdness). But nobody with a big job to do will want to visually scan thousands of demand plots to spot outliers, expunge them from the demand history, then recalculate everything. Human intelligence could do that, but human patience would soon fail. Anomaly detection algorithms could do the work automatically using relatively straightforward statistical methods. You could call this “artificial intelligence” if you wish.
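    As a hedged illustration of how simple such an algorithm can be, the sketch below flags demand values that sit far from the median, scaled by the median absolute deviation (a robust yardstick that the outlier itself cannot inflate). The demand history and threshold are made up.

```python
import statistics

def flag_anomalies(history, threshold=4.0):
    """Flag demand values far from the median, measured in units of the
    median absolute deviation (MAD), a robust spread estimate."""
    med = statistics.median(history)
    mad = statistics.median(abs(x - med) for x in history) or 1.0
    return [i for i, x in enumerate(history) if abs(x - med) / mad > threshold]

history = [3, 4, 2, 5, 3, 4, 60, 3, 2, 4]   # one suspicious spike
print(flag_anomalies(history))               # -> [6]
```

    A planner could then review only the flagged periods instead of scanning thousands of demand plots by eye.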

    Regime Change Detection

    Regime change detection is like the big brother of anomaly detection. Regime change is a sustained, rather than temporary, shift in one or more aspects of the character of a time series. While anomaly detection usually focuses on sudden shifts in mean demand, regime change could involve shifts in other features of the demand, such as its volatility or its distributional shape.  

    Figure 2 illustrates an extreme example of regime change. The bottom dropped out of demand for this item around day 120. Inventory control policies and demand forecasts based on the older data would be wildly off base at the end of the demand history.


    Figure 2: An example of extreme regime change in an item with intermittent demand

    Here too, statistical algorithms can be developed to solve this problem, and it would be fair play to call them “machine learning” or “artificial intelligence” if so motivated. Using ML or AI to identify regime changes in demand history enables demand planning software to automatically use only the relevant history when forecasting, instead of requiring the planner to manually choose how much history to feed the model.
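    A bare-bones mean-shift detector illustrates the idea (real regime-change methods are more sophisticated): scan every candidate split point and report the one with the largest gap between the average demand before and after it. The demand series below imitates the "bottom dropped out" pattern of Figure 2.

```python
def detect_regime_change(series, min_len=5):
    """Find the split point with the largest gap between the average
    demand before and after it (a simple mean-shift detector)."""
    best_t, best_gap = None, 0.0
    for t in range(min_len, len(series) - min_len):
        before = sum(series[:t]) / t
        after = sum(series[t:]) / (len(series) - t)
        gap = abs(after - before)
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t, best_gap

# Demand whose "bottom drops out" partway through, as in Figure 2:
series = [4, 5, 3, 6, 4, 5, 4, 6, 0, 0, 1, 0, 0, 0, 1, 0]
t, gap = detect_regime_change(series)
print(t)   # index where the new regime begins
```

    Once the split point is found, the forecaster can discard everything before it and fit only to the current regime.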

    Regression analysis

    Regression analysis relates one variable to another through an equation. For example, sales of window frames in one month may be predicted from building permits issued a few months earlier. Regression analysis has been considered a part of statistics for over a century, but we can say it is “machine learning” since an algorithm works out the precise way to convert knowledge of one variable into a prediction of the value of another.
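    A minimal sketch of that algorithm, ordinary least squares fit by hand, using invented permits-and-frames data:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical data: building permits issued vs. window-frame sales a few months later.
permits = [10, 20, 30, 40, 50]
frames  = [55, 105, 155, 205, 255]   # roughly 5 frames per permit plus a base
a, b = fit_line(permits, frames)
print(round(a, 1), round(b, 1))
```

    The "learning" here is just the algorithm working out the intercept and slope that best convert permits into a frame-sales prediction.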

    Summary

    It is reasonable to be interested in what’s going on in the areas of machine learning and artificial intelligence. While the attention given to ChatGPT and its competitors is interesting, it is not relevant to the numerical side of demand planning or inventory management. The numerical aspects of ML and AI are potentially relevant, but you should try to see through the cloud of hype surrounding these methods and focus on what they can do.  If you can get the job done with classical statistical methods, you might just do that, then exercise your option to stick the ML label on anything that moves.


    How to Forecast Inventory Requirements

    Forecasting inventory requirements is a specialized variant of forecasting that focuses on the high end of the range of possible future demand.

    For simplicity, consider the problem of forecasting inventory requirements for just one period ahead, say one day ahead. Usually, the forecasting job is to estimate the most likely or average level of product demand. However, if available inventory equals the average demand, there is about a 50% chance that demand will exceed inventory and result in lost sales and/or lost good will. Setting the inventory level at, say, ten times the average demand will probably eliminate the problem of stockouts, but will just as surely result in bloated inventory costs.

    The trick of inventory optimization is to find a satisfactory balance between having enough inventory to meet most demand without tying up too many resources in the process. Usually, the solution is a blend of business judgment and statistics. The judgmental part is to define an acceptable inventory service level, such as meeting 95% of demand immediately from stock. The statistical part is to estimate the 95th percentile of demand.

    When not dealing with intermittent demand, you can often estimate the required inventory level by assuming a bell-shaped (Normal) curve of demand, estimating both the middle and the width of the bell curve, then using a standard statistical formula to estimate the desired percentile. The difference between the desired inventory level and the average level of demand is called the “safety stock” because it protects against the possibility of stockouts.
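    The statistical part can be sketched in a few lines using Python's standard library. The mean and standard deviation below are invented; `NormalDist.inv_cdf` returns the desired percentile directly:

```python
from statistics import NormalDist

def normal_inventory_target(mean_demand, sd_demand, service_level):
    """Demand percentile under a Normal (bell-shaped) model."""
    return NormalDist(mean_demand, sd_demand).inv_cdf(service_level)

mean, sd = 100.0, 20.0                               # hypothetical demand model
target = normal_inventory_target(mean, sd, 0.95)     # stock needed for 95% service
safety_stock = target - mean                         # cushion above average demand
print(round(target, 1), round(safety_stock, 1))
```

    The difference between the 95th-percentile target and the mean is the safety stock, exactly as defined above.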

    When dealing with intermittent demand, the bell-shaped curve is a very poor approximation to the statistical distribution of demand. In this special case, Smart leverages patented technology for intermittent demand that is designed to accurately forecast the ranges and produce a better estimate of the safety stock needed to achieve the required inventory service level.

     

    A Gentle Introduction to Two Advanced Techniques: Statistical Bootstrapping and Monte Carlo Simulation

    Summary

    Smart Software’s advanced supply chain analytics exploits multiple advanced methods. Two of the most important are “statistical bootstrapping” and “Monte Carlo simulation”. Since both involve lots of random numbers flying around, folks sometimes get confused about which is which and what they are good for. Hence, this note. Bottom line up front: Statistical bootstrapping generates demand scenarios for forecasting. Monte Carlo simulation uses the scenarios for inventory optimization.

    Bootstrapping

    Bootstrapping, also called “resampling,” is a method of computational statistics that we use to create demand scenarios for forecasting. The essence of the forecasting problem is to expose possible futures that your company might confront so you can work out how to manage business risks. Traditional forecasting methods focus on computing “most likely” futures, but they fall short of presenting the full risk picture. Bootstrapping provides an unlimited number of realistic what-if scenarios.

    Bootstrapping does this without making unrealistic assumptions about the demand, e.g., that it is not intermittent or that it has a bell-shaped distribution of sizes. Those assumptions are crutches to make the math simpler, but the bootstrap is a procedure, not an equation, so it doesn’t need such simplifications.

    For the simplest demand type, which is a stable randomness with no seasonality or trend, bootstrapping is dead easy. To get a reasonable idea of what a single future demand value might be, pick one of the historical demands at random. To create a demand scenario, make multiple random selections from the past and string them together. Done. It is possible to add a little more realism by “jittering” the demand values, i.e., adding or subtracting a bit of additional randomness to each one, but even that is simple.

    Figure 1 shows a simple bootstrap. The first line is a short sequence of historical demand for an SKU. The following lines show scenarios of future demand created by randomly selecting values from the demand history. For instance, the next three demands might be (0, 14, 6), or (2, 3, 5), etc.


    Figure 1: Example of demand scenarios generated by a simple bootstrap
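    The simple bootstrap just described can be coded in a few lines. The demand history below is invented, and the optional "jittering" is the small added randomness mentioned above:

```python
import random

def bootstrap_scenarios(history, horizon, n_scenarios, jitter=0, seed=1):
    """Build demand scenarios by resampling history with replacement,
    optionally "jittering" each value by a small random amount."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(n_scenarios):
        scenario = [
            max(0, rng.choice(history) + rng.randint(-jitter, jitter))
            for _ in range(horizon)
        ]
        scenarios.append(scenario)
    return scenarios

history = [0, 14, 6, 2, 0, 3, 0, 5]         # intermittent demand history
for s in bootstrap_scenarios(history, horizon=3, n_scenarios=4):
    print(s)
```

    Each printed line is one what-if future; generate thousands of them and you have the raw material for the risk picture described above.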

     

    Higher frequency operations such as daily forecasting bring with them more complex demand patterns, such as double seasonality (e.g., day-of-week and month-of-year) and/or trend. This challenged us to invent a new generation of bootstrapping algorithms. We recently won a US Patent for this breakthrough, but the essence is as described above.

    Monte Carlo Simulation

    Monte Carlo is famous for its casinos, which, like bootstrapping, invoke the idea of randomness. Monte Carlo methods go back a long way, but the modern impetus came with the need to do some hairy calculations about where neutrons would fly when an A-bomb explodes.

    The essence of Monte Carlo analysis is this: “Our problem is too complicated to analyze with paper-and-pencil equations. So, let’s write a computer program that codes the individual steps of the process, put in the random elements (e.g., which way a neutron shoots away), wind it up and watch it go. Since there’s a lot of randomness, let’s run the program a zillion times and average the results.”

    Applying this approach to inventory management, we have a different set of randomly occurring events: e.g., a demand of a given size arrives on a random day, a replenishment of a given size arrives after a random lead time, we cut a replenishment PO of a given size when stock drops to or below a given reorder point. We code the logic relating these events into a program. We feed it with a random demand sequence (see bootstrapping above), run the program for a while, say one year of daily operations, compute performance metrics like Fill Rate and Average On Hand inventory, and “toss the dice” by re-running the program many times and averaging the results of many simulated years. The result is a good estimate of what happens when we make key management decisions: “If we set the reorder point at 10 units and the order quantity at 15 units, we can expect to get a service level of 89% and an average on hand of 21 units.” What the simulation is doing for us is exposing the consequences of management decisions based on realistic demand scenarios and solid math. The guesswork is gone.
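    Here is a compact sketch of such a simulation for a Min/Max policy with lost sales. It is an illustration of the Monte Carlo idea, not Smart's production logic, and all parameters are invented:

```python
import random

def simulate_year(min_level, max_level, demand, lead_times, start_on_hand, seed=7):
    """One simulated year of a Min/Max policy with lost sales (no backorders)."""
    rng = random.Random(seed)
    on_hand, on_order = start_on_hand, []   # on_order holds (arrival_day, qty)
    filled = total = stockout_days = 0
    for day, d in enumerate(demand):
        # receive any replenishments due today
        on_hand += sum(q for (due, q) in on_order if due == day)
        on_order = [(due, q) for (due, q) in on_order if due != day]
        # serve demand; unmet demand is lost
        shipped = min(d, on_hand)
        on_hand -= shipped
        filled += shipped
        total += d
        if shipped < d:
            stockout_days += 1
        # order up to Max when inventory position falls to or below Min
        position = on_hand + sum(q for (_, q) in on_order)
        if position <= min_level:
            on_order.append((day + rng.choice(lead_times), max_level - position))
    fill_rate = filled / total if total else 1.0
    return fill_rate, stockout_days

rng_d = random.Random(3)
demand = [rng_d.choice([0, 0, 1, 2, 3, 5]) for _ in range(365)]   # daily demand scenario
fr, so = simulate_year(10, 25, demand, lead_times=[7, 7, 14], start_on_hand=25)
print(round(fr, 2), so)
```

    Re-running this with many bootstrapped demand scenarios and averaging the metrics is the "toss the dice a zillion times" step described above.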

    Figure 2 shows some of the inner workings of a Monte Carlo simulation of an inventory system in four panels. The system uses a Min/Max inventory control policy with Min=10 and Max=25. No backorders are allowed: you have the good or you lose the business. Replenishment lead times are usually 7 days but sometimes 14. This simulation ran for one year.

    The first panel shows a complex random demand scenario in which there is no demand on weekends, but demand generally increases each day from Monday to Friday. The second panel shows the random number of units on hand, which ebbs and flows with each replenishment cycle. The third panel shows the random sizes and timings of replenishment orders coming in from the supplier. The final panel shows the unsatisfied demand that jeopardizes customer relationships. This kind of detail can be very useful for building insight into the dynamics of an inventory system.


    Figure 2: Details of a Monte Carlo simulation

     

    Figure 2 shows only one of the countless ways that the year could play out. Generally, we want to average the results of many simulated years. After all, nobody would flip a coin once to decide if it were a fair coin. Figure 3 shows how four key performance metrics (KPI’s) vary from year to year for this system. Some metrics are relatively stable across simulations (Fill Rate), but others show more relative variability (Operating Cost = Holding Cost + Ordering Cost + Shortage Cost). Eyeballing the plots, we can estimate that the choice of Min=10, Max=25 leads to an average Operating Cost of around $3,000 per year, a Fill Rate of around 90%, a Service Level of around 75%, and an Average On Hand of about 10 units.


    Figure 3: Variation in KPI’s computed over 1,000 simulated years

     

    In fact, it is now possible to answer a higher level of management question. We can go beyond “What will happen if I do such-and-such?” to “What is the best thing I can do to achieve a fill rate of at least 90% for this item at the lowest possible cost?” The mathemagic behind this leap is yet another key technology called “stochastic optimization”, but we’ll stop here for now. Suffice it to say that Smart’s SIO&P software can search the “design space” of Min and Max values to automatically find the best choice.