Finding Your Spot on the Tradeoff Curve

Balancing Act

Managing inventory, like managing anything, involves balancing competing priorities. Do you want a lean inventory? Yes! Do you want to be able to say “It’s in stock” when a customer wants to buy something? Yes!

But can you have it both ways? Only to a degree. If you lean into leaning your inventory too aggressively, you risk stockouts. If you stamp out stockouts, you create inventory bloat. You are forced to find a satisfactory balance between the two competing goals of lean inventory and high item availability.

Striking a Balance

How do you strike that balance? Too many inventory planners “guesstimate” their way to some kind of answer. Or they work out a smart answer once, hope it has a distant sell-by date, and keep using it while they focus on other problems. Unfortunately, shifts in demand, changes in supplier performance, and shifts in your own company’s priorities will render old inventory plans obsolete and put you right back where you started.

It is inevitable that every plan has a shelf life and has to be updated. However, it is definitely not best practice to replace one guess with another. Instead, each planning cycle should exploit modern supply chain software to replace guesswork with fact-based analysis using probability math.

Know Thyself

The one thing that software cannot do is compute a best answer without knowing your priorities. How much do you prioritize lean inventory over item availability? Software will predict the levels of inventory and availability caused by any decisions you make about how to manage each item in your inventory, but only you can decide whether any given set of key performance indicators is consistent with what you want.

Knowing what you want in a general sense is easy: you want it all. But knowing what you prefer when comparing specific scenarios is more difficult. It helps to be able to see a range of realizable possibilities and mull over which seems best when they are laid out side by side.

See What’s Next

Supply chain software can give you a view of the tradeoff curve. You know in general that lean inventory and high item availability trade off against each other, but seeing item-specific tradeoff curves sharpens your focus.

Why is there a curve? Because you have choices about how to manage each item. For instance, if you check inventory status continuously, what values will you assign to the Min and Max that govern when to order replenishments and how much to order? The tradeoff curve arises because choosing different Min and Max values leads to different levels of on hand inventory and different levels of item availability, e.g., as measured by fill rate.
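To make that concrete, here is a minimal sketch of the continuous-review (Min, Max) decision rule in Python; the function name and types are ours, purely for illustration, not a prescription from any particular package:

```python
def replenishment_order(inventory_position: int, min_level: int, max_level: int) -> int:
    """Continuous-review (Min, Max) rule: when the inventory position falls to
    Min or below, place an order that brings the position back up to Max."""
    if inventory_position <= min_level:
        return max_level - inventory_position   # order-up-to-Max quantity
    return 0                                    # otherwise, no order
```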

 

A Scenario for Analysis

To illustrate these ideas, I used a digital twin to estimate how various values of Min and Max would perform in a particular scenario. The scenario focused on a notional spare part with purely random demand having a moderately high level of intermittency (37% of days having zero demand). Replenishment lead times were a coin flip between 7 and 14 days. The Min and Max values were systematically varied: Min from 20 to 40 units, and Max from Min+1 to 2×Min units. Each (Min, Max) pair was simulated for 365 days of operation a total of 1,000 times, and the results were averaged to estimate both the average number of units on hand and the fill rate, i.e., the percentage of daily demand satisfied immediately from stock. Demand that could not be filled from stock was backordered.
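For readers who want to see the mechanics, here is a simplified Monte Carlo sketch of that kind of experiment in Python. It is not the digital twin itself: the size of nonzero daily demand (uniform between 1 and 5 units) and the starting stock level (Max) are assumptions added purely for illustration, while the 37% zero-demand days, the 7-or-14-day lead times, the backordering of unmet demand, and the 1,000 replications of a 365-day year follow the scenario described above.

```python
import random

def simulate_min_max(min_level, max_level, days=365, reps=1000, seed=42):
    """Estimate average on-hand units and fill rate for a continuous-review
    (Min, Max) policy under intermittent demand and random lead times."""
    rng = random.Random(seed)
    on_hand_total, filled, demanded = 0.0, 0, 0
    for _ in range(reps):
        on_hand, backorders = max_level, 0        # assumed starting condition
        on_order = []                             # list of (arrival_day, quantity)
        for day in range(days):
            # Receive any replenishments arriving today; arrivals clear backorders first.
            arrived = sum(q for d, q in on_order if d == day)
            on_order = [(d, q) for d, q in on_order if d != day]
            on_hand += arrived
            cleared = min(on_hand, backorders)
            on_hand, backorders = on_hand - cleared, backorders - cleared
            # Intermittent demand: ~37% zero-demand days, otherwise 1-5 units (assumed).
            demand = 0 if rng.random() < 0.37 else rng.randint(1, 5)
            shipped = min(on_hand, demand)
            on_hand -= shipped
            backorders += demand - shipped        # unmet demand is backordered
            filled += shipped
            demanded += demand
            # Continuous review of inventory position = on hand + on order - backorders.
            position = on_hand + sum(q for _, q in on_order) - backorders
            if position <= min_level:
                lead_time = rng.choice([7, 14])   # coin-flip lead time
                on_order.append((day + lead_time, max_level - position))
            on_hand_total += on_hand
    return on_hand_total / (reps * days), filled / demanded

# Sweep a few (Min, Max) pairs to trace out points on the tradeoff curve.
for mn, mx in [(20, 25), (30, 45), (40, 80)]:
    avg_on_hand, fill_rate = simulate_min_max(mn, mx)
    print(f"Min={mn:2d} Max={mx:2d}: avg on hand={avg_on_hand:5.1f}, fill rate={fill_rate:.1%}")
```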

 

Results

The experiment produced two types of results:

  • Plots showing the relationship between Min and Max values and two key performance indicators: Fill rate and average units on hand.
  • A tradeoff curve showing how the fill rate and units on hand trade off against each other.

Figure 1 plots on hand inventory as a function of the values of Min and Max. The experiment yielded on hand levels ranging from near 0 to about 40 units. In general, keeping Min constant and increasing Max results in more units on hand. The relationship with Min is more complex: keeping Max constant, increasing Min first adds to inventory but at some point reduces it.

Figure 2 plots fill rate as a function of the values of Min and Max. The experiment yielded fill rates ranging from near 0% to 100%. In general, the functional relationships between the fill rate and the values of Min and Max mirrored those in Figure 1.

Figure 3 makes the key point, showing how varying Min and Max produces a perverse pairing of the key performance indicators. Generally speaking, the values of Min and Max that maximize item availability (fill rate) are the same values that maximize inventory cost (average units on hand). This general pattern is represented by the blue curve. The experiments also produced some offshoots from the blue curve that are associated with poor choices of Min and Max, in the sense that other choices dominate them by producing the same fill rate with lower inventory.

 

Conclusions

Figure 3 makes clear that your choice of how to manage an inventory item forces you to trade off inventory cost and item availability. You can avoid some inefficient combinations of Min and Max values, but you cannot escape the tradeoff.

The good side of this reality is that you do not have to guess what will happen if you change your current values of Min and Max to something else. The software will tell you what that move will buy you and what it will cost you. You can take off your Guesstimator hat and do your thing with confidence.

Figure 1 On Hand Inventory as a function of Min and Max values

Figure 2 Fill Rate as a function of Min and Max values

Figure 3 Tradeoff curve between Fill Rate and On Hand Inventory

Direct to the Brain of the Boss – Inventory Analytics and Reporting

I’ll start with a confession: I’m an algorithm guy. My heart lives in the “engine room” of our software, where lightning-fast calculations zip back and forth across the AWS cloud, generating demand and supply scenarios used to guide important decisions about demand forecasting and inventory management.

But I recognize that the target of all that beautiful, furious calculation is the brain of the boss, the person responsible for making sure that customer demand is satisfied in the most efficient and profitable way. So, this blog is about Smart Operational Analytics (SOA), which creates reports for management. Or, as they are called in the military, sit-reps.

All the calculations guided by the planners using our software ultimately get distilled into the SOA reports for management. The reports focus on five areas: inventory analysis, inventory performance, inventory trending, supplier performance, and demand anomalies.

Inventory Analysis

These reports keep tabs on current inventory levels and identify areas that need improvement. The focus is on current inventory counts and their status (on hand, in transit, in quarantine), inventory turns, and excesses vs shortages.

Inventory Performance

These reports track Key Performance Indicators (KPIs) such as Fill Rates, Service Levels, and Inventory Costs. The analytic calculations elsewhere in the software guide you toward achieving your KPI targets by calculating Key Performance Predictions (KPPs) based on recommended settings for, e.g., reorder points and order quantities. But sometimes surprises occur, or operating policies are not executed as recommended, so there will always be some slippage between KPPs and KPIs.

Inventory Trending

Knowing where things stand today is important, but seeing where things are trending is also valuable. These reports reveal trends in item demand, stockout events, average days on hand, average time to ship, and more.

Supplier Performance

Your company cannot perform at its best if your suppliers are dragging you down. These reports monitor supplier performance in terms of the accuracy and promptness of filling replenishment orders. Where you have multiple suppliers for the same item, they let you compare them.

Demand Anomalies

Your entire inventory system is demand driven, and all inventory control parameters are computed after modeling item demand. So if something odd is happening on the demand side, you must be vigilant and prepared to recalculate things like Mins and Maxes for items whose demand is starting to behave differently.

Summary

The end point for all the massive calculations in our software is the dashboard showing management what’s going on, what’s next, and where to focus attention. Smart Operational Analytics is the part of our software ecosystem aimed at your company’s C-Suite.


Figure 1: Some sample reports in graphical form

 

You Need to Team up with the Algorithms

Over forty years ago, Smart Software consisted of three friends working to start a company in a church basement. Today, our team has expanded to operate from multiple locations across Massachusetts, New Hampshire and Texas, with team members in England, Spain, Armenia and India. Like many of you in your jobs, we have found ways to make distributed teams work for us and for you.

This note is about a different kind of teamwork: the collaboration between you and our software that happens at your fingertips. I often write about the software itself and what goes on “under the hood”. This time, my subject is how you should best team up with the software.

Our software suite, Smart Inventory Planning and Optimization (Smart IP&O™), is capable of massively detailed calculations of future demand and the inventory control parameters (e.g., reorder points and order quantities) that would most effectively manage that demand. But your input is required to make the most of all that power. You need to team up with the algorithms.

That interaction can take several forms. You can start by simply assessing how you are doing now. The report writing functions in Smart IP&O (Smart Operational Analytics™) can collate and analyze all your transactional data to measure your Key Performance Indicators (KPIs), both financial (e.g., inventory investment) and operational (e.g., fill rates).

The next step might be to use SIO (Smart Inventory Optimization™), the inventory analytics within Smart IP&O, to play “what-if” games with the software. For example, you might ask, “What if we reduced the order quantity on item 1234 from 50 to 40?” The software grinds the numbers to let you know how that would play out, and then you react. This can be useful, but what if you have 50,000 items to consider? You would want to play what-if games for a few critical items, but not all of them.

The real power comes with using the automatic optimization capability in SIO. Here you can team with the algorithms at scale. Using your business judgement, you can create “groups”, i.e., collections of items that share some critical features. For example, you might create a group for “critical spare parts for electric utility customers” consisting of 1,200 parts. Then, again calling on your business judgement, you could specify the item availability standard that should apply to all the items in that group (e.g., “at least a 95% chance of not stocking out in a year”). Now the software can take over and automatically work out the best reorder points and order quantities for every one of those items to achieve your required item availability at the lowest possible total cost. And that, dear reader, is powerful teamwork.
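As a purely illustrative sketch (the group names, item IDs, and targets below are invented, and this is not Smart IP&O’s actual interface), the teamwork amounts to you supplying the groupings and availability targets, and the software optimizing every item against them:

```python
from dataclasses import dataclass

@dataclass
class ItemGroup:
    name: str
    item_ids: list                # items that share some critical features
    service_level_target: float   # e.g., 0.95 = at least a 95% chance of no stockout in a year

# You supply the business judgement: which items belong together and what availability they need.
groups = [
    ItemGroup("Critical spares - electric utility customers", ["P-0001", "P-0002", "P-0003"], 0.95),
    ItemGroup("Non-critical accessories", ["A-1001", "A-1002"], 0.85),
]

# The software supplies the optimization: best reorder point and order quantity per item.
def optimize(group: ItemGroup) -> None:
    for item in group.item_ids:
        # Placeholder for the solver that meets the group's target at lowest total cost.
        print(f"{group.name}: set policy for {item} to hit {group.service_level_target:.0%} availability")

for g in groups:
    optimize(g)
```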

 

 

Rethinking forecast accuracy: A shift from accuracy to error metrics

Measuring the accuracy of forecasts is an undeniably important part of the demand planning process. A forecasting scorecard can be built on either of two contrasting viewpoints for computing metrics. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.

Accuracy is represented as a percentage between zero and 100, while error percentages start at zero but have no upper limit. Reports of MAPE (mean absolute percent error) or other error metrics can be titled “forecast accuracy” reports, which blurs the distinction. So, you may want to know how to convert from the error viewpoint to the accuracy viewpoint that your company espouses. This blog describes how, with some examples.

Accuracy metrics are computed such that when the actual equals the forecast, accuracy is 100%, and when the forecast misses by as much as the actual itself (for example, a forecast at least double the actual), accuracy bottoms out at 0%. Reports that compare the forecast to the actual often include the following:

  • The Actual
  • The Forecast
  • Unit Error = Forecast – Actual
  • Absolute Error = Absolute Value of Unit Error
  • Absolute % Error = Abs Error / Actual, as a %
  • Accuracy % = 100% – Absolute % Error (floored at 0%)

Let’s look at a couple of examples that illustrate the difference between the approaches. Say the actual is 8 and the forecast is 10.

Unit Error is 10 – 8 = 2

Absolute % Error = 2 / 8, as a % = 0.25 * 100 = 25%

Accuracy = 100% – 25% = 75%.

Now let’s say the actual is 8 and the forecast is 24.

Unit Error is 24 – 8 = 16

Absolute % Error = 16 / 8 as a % = 2 * 100 = 200%

Accuracy = 100% – 200% = –100%, which is floored at 0%.

In the first example, accuracy measurements provide the same information as error measurements, since the forecast and actual are already relatively close. But once the error is as large as the actual itself (here, the forecast is more than double the actual), accuracy measurements bottom out at zero. A 0% accuracy does correctly indicate that the forecast was not at all accurate. But the second forecast is still much closer to the actual than a third one, in which the actual is 8 and the forecast is 200. That is a distinction a 0-to-100% accuracy range cannot register. In this final example:

Unit Error is 200 – 8 = 192

Absolute % Error = 192 / 8, as a % = 24 * 100 = 2,400%

Accuracy = 100% – 2,400% = –2,300%, which is floored at 0%.
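The arithmetic above is easy to reproduce. Here is a minimal sketch that applies those definitions (with the 0% floor made explicit) to all three examples:

```python
def forecast_metrics(actual, forecast):
    """Unit error, absolute % error, and accuracy %, as defined above."""
    unit_error = forecast - actual
    abs_pct_error = abs(unit_error) / actual * 100    # the actual is the denominator
    accuracy = max(0.0, 100.0 - abs_pct_error)        # negative accuracy is floored at 0%
    return unit_error, abs_pct_error, accuracy

for forecast in (10, 24, 200):                        # the three examples, actual = 8
    err, ape, acc = forecast_metrics(8, forecast)
    print(f"forecast={forecast:3d}: unit error={err:3.0f}, abs % error={ape:5.0f}%, accuracy={acc:3.0f}%")
```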

Error metrics continue to provide information on how far the forecast is from the actual and arguably better represent forecast accuracy.

We encourage adopting the error viewpoint. You simply hope for a small error percentage to indicate the forecast was not far from the actual, instead of hoping for a large accuracy percentage to indicate the forecast was close to the actual.  This shift in mindset offers the same insights while eliminating distortions.

 

 

 

 

Every Forecasting Model is Good for What it is Designed for

When you should use traditional extrapolative forecasting techniques.

With so much hype around new Machine Learning (ML) and probabilistic forecasting methods, the traditional “extrapolative” or “time series” statistical forecasting methods seem to be getting the cold shoulder. However, it is worth remembering that these traditional techniques (such as single and double exponential smoothing, linear and simple moving averages, and Winters models for seasonal items) often work quite well for higher-volume data. Every method is good for what it was designed to do. Just apply each appropriately: don’t bring a knife to a gunfight, and don’t use a jackhammer when a simple hand hammer will do.

Extrapolative methods perform well when demand has high volume and is not too granular (i.e., demand is bucketed monthly or quarterly). They are also very fast and do not use as many computing resources as probabilistic and ML methods. This makes them very accessible.
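As a reminder of just how lightweight these methods are, here is a minimal sketch of single exponential smoothing, one of the traditional techniques named above (the smoothing constant and the demand history are made-up inputs for illustration):

```python
def single_exponential_smoothing(history, alpha=0.2):
    """Single exponential smoothing: the forecast is a weighted blend of the
    latest observation and the previous forecast."""
    forecast = history[0]                     # initialize with the first observation
    for observation in history[1:]:
        forecast = alpha * observation + (1 - alpha) * forecast
    return forecast                           # one-step-ahead forecast for the next bucket

monthly_demand = [120, 135, 128, 140, 150, 145, 160, 155]
print(f"Forecast for next month: {single_exponential_smoothing(monthly_demand):.1f}")
```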

Are the traditional methods as accurate as newer forecasting methods? Smart has found that extrapolative methods do very poorly when demand is intermittent. However, when demand is higher volume, they do only slightly worse than our new probabilistic methods when demand is bucketed monthly. Given their accessibility, speed, and the fact that you are going to apply forecast overrides based on business knowledge, the baseline accuracy difference here will not be material.

The advantage of more advanced models like Smart’s GEN2 probabilistic methods comes when you need to predict patterns using more granular buckets such as daily (or even weekly) data. This is because probabilistic models can simulate day-of-the-week, week-of-the-month, and month-of-the-year patterns that are going to be lost with simpler techniques. Have you ever tried to predict daily seasonality with a Winters model? Here is a hint: it’s not going to work, and it requires lots of engineering.

Probabilistic methods also provide value beyond the baseline forecast because they generate scenarios to use in stress-testing inventory control models. This makes them more appropriate for assessing, say, how a change in reorder point will impact stockout probabilities, fill rates, and other KPIs. By simulating thousands of possible demands over many lead times (which are themselves presented in scenario form), you’ll have a much better idea of how your current and proposed stocking policies will perform. You can make better decisions on where to make targeted stock increases and decreases.
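To illustrate the idea only (this is a rough sketch, not Smart’s GEN2 method; the intermittent demand model and the candidate reorder points are invented for the example), a scenario-based stress test of a reorder point might look like this:

```python
import random

def stockout_probability(reorder_point, scenarios=10_000, seed=7):
    """Estimate the chance that demand during one replenishment lead time
    exceeds the reorder point, by sampling demand and lead-time scenarios."""
    rng = random.Random(seed)
    stockouts = 0
    for _ in range(scenarios):
        lead_time = rng.choice([7, 14])       # lead times supplied in scenario form
        # Intermittent daily demand: ~37% zero-demand days, otherwise 1-5 units (assumed).
        lead_time_demand = sum(0 if rng.random() < 0.37 else rng.randint(1, 5)
                               for _ in range(lead_time))
        if lead_time_demand > reorder_point:
            stockouts += 1
    return stockouts / scenarios

# Compare candidate reorder points before committing to a change.
for rop in (25, 35):
    print(f"Reorder point {rop}: estimated stockout probability {stockout_probability(rop):.1%}")
```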

So, don’t throw out the old for the new just yet. Just know when you need a hammer and when you need a jackhammer.