Everybody forecasts to drive inventory planning. It’s just a question of how.

Reveal how forecasts are used with these 4 questions.

Companies often insist that they “don’t use forecasts” to plan inventory. Typically they rely on reorder point methods and are struggling to improve on-time delivery, inventory turns, and other KPIs. While they don’t think of what they are doing as explicitly forecasting, they certainly use estimates of future demand to develop reorder points such as min/max.

Regardless of what it is called, everyone estimates future demand in some way and uses that estimate to set stocking policies and drive orders. To improve inventory planning and avoid the over- and under-ordering that creates stockouts and inventory bloat, it is important to understand exactly how your organization uses forecasts. Once this is understood, you can assess whether the quality of the forecasts can be improved.

Try getting answers to the following four questions. They will reveal how forecasts are being used in your business – even if you don’t think you use forecasts.

1. Is your forecast a period-by-period estimate over time that is used to predict future on-hand inventory and to trigger order suggestions in your ERP system?

2. Or is your forecast used to derive a reorder point but not explicitly used as a per-period driver of orders? Here, I may predict we’ll sell 10 per week based on the history, but we are not loading 10, 10, 10, 10, etc., into the ERP. Instead, I derive a reorder point or Min that covers demand over the two-period lead time plus some buffer to help protect against stockout. In this case, I’ll order more when on-hand gets to 25 (the arithmetic is sketched in the example after this list).

3. Is your forecast used as a guide for the planner to help subjectively determine when they should order more?  Here, I predict 10 per week, and I assess the on-hand inventory periodically, review the expected lead time, and I decide, given the 40 units I have on hand today, that I have “enough.” So, I do nothing now but will check back again in a week.

4. Is it used to set up blanket orders with suppliers? Here, I predict 10 per week and agree to a blanket purchase order with the supplier of 520 per year. The orders are then placed in advance to arrive in quantities of 10 once per week until the blanket order is consumed.
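
To make the arithmetic in question 2 concrete, here is a minimal sketch in Python. All quantities are illustrative, not a recommendation:

```python
# Minimal sketch of the reorder-point arithmetic in question 2.
# All numbers are illustrative.

weekly_forecast = 10   # estimated demand per week
lead_time_weeks = 2    # supplier lead time, in periods
buffer_units = 5       # cushion against demand variability

# Demand expected while a replenishment order is in transit
lead_time_demand = weekly_forecast * lead_time_weeks   # 20 units

# Reorder point ("Min"): order more when on-hand falls to this level
reorder_point = lead_time_demand + buffer_units        # 25 units

on_hand = 40
if on_hand <= reorder_point:
    print(f"Order more: on-hand {on_hand} is at or below the ROP of {reorder_point}")
else:
    print(f"No action yet: on-hand {on_hand} exceeds the ROP of {reorder_point}")
```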

Once you get the answers, you can then ask how the estimates of demand are created. Is it an average? Is demand over lead time derived from a sales forecast? Is there a statistical forecast generated somewhere? What methods are considered? It will also be important to assess how safety stocks are used to protect against demand and supply variability. More on all of this in a future article.

 

Elephants and Kangaroos: ERP vs. Best-of-Breed Demand Planning

“Despite what you’ve seen in your Saturday morning cartoons, elephants can’t jump, and there’s one simple reason: They don’t have to. Most jumpy animals—your kangaroos, monkeys, and frogs—do it primarily to get away from predators.”  — Patrick Monahan, Science.org, Jan 27, 2016.

Now you know why the largest ERP companies can’t develop high-quality, best-of-breed-caliber solutions. They never had to, so they never evolved to innovate outside of their core focus.

However, as ERP systems have become commoditized, gaps in their functionality became impossible to ignore. The larger players sought to protect their share of customer wallet by promising to develop innovative add-on applications to fill all the white spaces.  But without that “innovation muscle,” many projects failed, and mountains of technical debt accumulated.

Best-of-breed companies evolved to innovate and have deep functional expertise in specific verticals. The result is that best-of-breed ERP add-ons are easier to use, have more features, and deliver more value than the native ERP modules they replace.

If your ERP provider has already partnered with an innovative best-of-breed add-on provider*, you’re all set! But if you can only get the basics from your ERP, go with a best-of-breed add-on that has a bespoke integration to the ERP.

A great place to start your search is to look for ERP demand planning add-ons that add brains to the ERP’s brawn, i.e., those that support inventory optimization and demand forecasting.  Leverage add-on tools like Smart’s statistical forecasting, demand planning, and inventory optimization apps to develop forecasts and stocking policies that are fed back to the ERP system to drive daily ordering. 

*App stores are a license for best-of-breed vendors to sell into the ERP company’s customer base – being listed is not the same as a partnership.


What Silicon Valley Bank Can Learn from Supply Chain Planning

If you’ve had your head up lately, you may have noticed some additional madness off the basketball court: the failure of Silicon Valley Bank. Those of us in the supply chain world may have dismissed the bank failure as somebody else’s problem, but that sorry episode holds a big lesson for us, too: the importance of stress testing done right.

The Washington Post recently carried an opinion piece by Natasha Sarin called “Regulators missed Silicon Valley Bank’s problems for months. Here’s why.” Sarin outlined the flaws in the stress-testing regime imposed on the bank by the Federal Reserve. One problem is that the stress tests are too static. The Fed’s stress factor for nominal GDP growth was a single scenario listing presumed values over the next 13 quarters (see Figure 1). Those 13 quarterly projections might be somebody’s consensus view of what a bad hair day would look like, but that’s not the only way things could play out. As a society, we are being taught to appreciate a better way to display contingencies every time the National Weather Service shows us projected hurricane tracks (see Figure 2). Each colored line shows a possible storm path, with the concentrated lines representing the most likely paths. By exposing the lower-probability paths, the display improves risk planning.

When stress testing the supply chain, we need realistic scenarios of possible future demands, even extreme demands. Smart provides this in our software (with considerable improvements in our Gen2 methods). The software generates a huge number of credible demand scenarios, enough to expose the full scope of risks (see Figure 3). Stress testing is all about generating massive numbers of planning scenarios, and Smart’s probabilistic methods, being entirely scenario-based, are a radical departure from previous deterministic S&OP applications.
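
Smart’s scenario engine is its own technology, but the general idea can be illustrated with a toy sketch: resample the demand history many times to build an ensemble of plausible future demand paths, then read the risk off the spread of outcomes. Everything below is a hypothetical illustration, not Smart’s actual method:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative weekly demand history (units)
history = np.array([8, 12, 9, 11, 10, 7, 14, 10, 9, 13, 11, 8])

def demand_scenarios(history, horizon=13, n_scenarios=1000):
    """Generate demand paths by bootstrap-resampling the history.

    A toy stand-in for a probabilistic scenario engine: each row is one
    possible future, and the ensemble exposes the spread of outcomes
    rather than a single deterministic line.
    """
    return rng.choice(history, size=(n_scenarios, horizon), replace=True)

scenarios = demand_scenarios(history)
totals = scenarios.sum(axis=1)  # total demand in each scenario
print("5th / 50th / 95th percentile of 13-week demand:",
      np.percentile(totals, [5, 50, 95]))
```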

The other flaw in the Fed’s stress tests was that they were designed months in advance but never updated for changing conditions. Demand planners and inventory managers intuitively appreciate that key variables like item demand and supplier lead time are not only highly random even when things are stable but also subject to abrupt shifts that require rapid rewriting of planning scenarios (see Figure 4, where the average demand jumps up dramatically between observations 19 and 20). Smart’s Gen2 products include new technology for detecting such “regime changes” and automatically changing scenarios accordingly.
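
To illustrate what a regime-change detector looks for (this is not Smart’s Gen2 algorithm, just a crude stand-in), the sketch below flags a shift when the mean of the most recent observations departs sharply from the earlier baseline:

```python
import numpy as np

def regime_change_flag(demand, window=6, threshold=3.0):
    """Crude mean-shift detector: flag a regime change when the recent
    window's mean sits more than `threshold` standard errors away from
    the baseline mean. A toy stand-in for a real change-point test."""
    demand = np.asarray(demand, dtype=float)
    baseline, recent = demand[:-window], demand[-window:]
    se = baseline.std(ddof=1) / np.sqrt(window)
    if se == 0:
        return False
    return abs(recent.mean() - baseline.mean()) / se > threshold

# Demand that jumps up sharply after observation 19, as in Figure 4
demand = [10, 12, 9, 11, 10, 13, 9, 10, 12, 11,
          10, 9, 13, 11, 10, 12, 9, 11, 10,
          25, 27, 24, 26, 28, 25]
print(regime_change_flag(demand))  # True: the mean has shifted upward
```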

Banks are forced to undergo stress tests, however flawed they may be, to protect their depositors. Supply chain professionals now have a way to protect their supply chains by using modern software to stress test their demand plans and inventory management decisions.


Figure 1: Scenarios used by the Fed to stress test banks.

 


Figure 2: Scenarios used by the National Weather Service to predict hurricane tracks.

 


Figure 3: Demand scenarios of the type generated by Smart Demand Planner.

 


Figure 4: Example of regime change in product demand after observation #19.


Is your demand planning and forecasting process a black box?

There’s one thing I’m reminded of almost every day at Smart Software that puzzles me: most companies do not understand how their forecasts are created and their stocking policies are determined. It’s an organizational black box. Here is an example from a recent sales call:

How do you forecast?
We use history.

How do you use history?
What do you mean?

Well, you can take an average of the last year, last two years, average the most recent periods, or use some other type of formula to generate the forecast.
I’m pretty sure we use an average of the last 12 months.

Why 12 months instead of a different amount of history?
12 months is a good amount of time to use because it doesn’t get skewed by older data, but it’s recent enough.

How do you know it’s more accurate than using 18 months or some other length of history?
We don’t know. We do adjust the forecasts based on feedback from sales.  

Do you know if the adjustments make things more accurate or less than if you just used the average?
We don’t know, but we are confident that the forecasts are inflated.

What do the inventory buyers do then if they think the numbers are inflated?
They have lots of business knowledge and adjust their buys accordingly.

So, is it fair to say they would ignore the forecasts at least some of the time?
Yes, some of the time.

How do the buyers decide when to order more? Do you have a reorder point or safety stock specified in your ERP system that helps guide these decisions?
Yes, we use a safety stock field.

How is safety stock calculated?
Buyers determine this based on the importance of the item, lead times, and other considerations such as how many customers purchase the item, the item’s velocity, and its cost. They’ll carry different amounts of safety stock depending on this.

The discussion continued. The main takeaway here is that when you scratch just below the surface, far more questions are revealed than answers. This often means that the inventory planning and demand forecasting process is highly subjective, varies from planner to planner, is not well understood by the rest of the organization, and is likely to be reactive. As Tom Willemain has described it, it’s “chaos masked by improvisation.” The “as-is” process needs to be fully identified and documented. Only then can gaps be exposed and improvements made. Here is a list of 10 questions you can ask that will reveal your organization’s true forecasting, demand planning, and inventory planning process.


What to do when a statistical forecast doesn’t make sense

Sometimes a statistical forecast just doesn’t make sense.  Every forecaster has been there.  They may double-check that the data was input correctly or review the model settings but are still left scratching their head over why the forecast looks very unlike the demand history.   When the occasional forecast doesn’t make sense, it can erode confidence in the entire statistical forecasting process.

This blog will help a layman understand what the Smart statistical models are and how they are chosen automatically. It will address how that choice sometimes fails, how you can know if it did, and what you can do to ensure that the forecasts can always be justified. It’s important to know what to expect, and how to catch the exceptions, so you can rely on your forecasting system.

 

How methods are chosen automatically

The criterion for automatically choosing one statistical method out of a set is which method came closest to correctly predicting held-out history. Earlier history is passed to each method, and the result is compared to actuals to find the one that came closest overall. That automatically chosen method is then fed all the history to produce the forecast. Check out this blog to learn more about model selection: https://smartcorp.com/uncategorized/statistical-forecasting-how-automatic-method-selection-works/
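
For intuition, here is a hedged, toy version of holdout-based method selection. The candidate methods are deliberately simple stand-ins, not Smart’s actual model set:

```python
import numpy as np

# Candidate methods, each mapping a history to a one-step-ahead forecast
def naive(history):            return history[-1]
def mean_all(history):         return np.mean(history)
def ses(history, alpha=0.3):   # simple exponential smoothing
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def pick_method(history, methods, holdout=4):
    """Choose the method whose one-step forecasts came closest to the
    held-out tail of the history (lowest mean absolute error)."""
    errors = {}
    for name, method in methods.items():
        errs = [abs(method(history[:t]) - history[t])
                for t in range(len(history) - holdout, len(history))]
        errors[name] = np.mean(errs)
    return min(errors, key=errors.get)

methods = {"naive": naive, "mean": mean_all, "ses": ses}
history = [8, 12, 9, 11, 10, 7, 14, 10, 9, 13, 11, 8]
best = pick_method(history, methods)
print(best, "wins; now refit it on the full history to produce the forecast")
```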

For most time series, this process can capture trends, seasonality, and average volume accurately. But sometimes the chosen method comes mathematically closest to predicting the held-out history yet doesn’t project it forward in a way that makes sense. That means the system-selected method isn’t always best; for some “hard to forecast” items, it can produce implausible results.

 

Hard to forecast items

Hard-to-forecast items may have large, unpredictable spikes in demand; typically no demand but random, irregular blips; or unusual recent activity. Noise in the data sometimes randomly wanders up or down, and the automated best-pick method might forecast a runaway trend or a grind down to zero. It will do worse than common sense in a small percentage of any reasonably varied group of items. So, you will need to identify these cases and respond by overriding the forecast or changing the forecast inputs.

 

How to find the exceptions

Best practice is to filter or sort the forecasted items to identify those where the sum of the forecast over the next year is significantly different from the corresponding history over the last year. The forecast sum may be much lower than the history or vice versa. Use supplied metrics to identify these items; then you can choose to apply overrides to the forecast or modify the forecast settings.
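
As an illustration of that filter (the column names are hypothetical), a few lines of pandas can surface items whose forecast for the next year departs sharply from the trailing year of history:

```python
import pandas as pd

# Hypothetical summary table: one row per item
df = pd.DataFrame({
    "item":         ["A", "B", "C", "D"],
    "history_12m":  [120, 500, 45, 80],   # sum of last 12 months of demand
    "forecast_12m": [118, 150, 140, 82],  # sum of next 12 forecasted months
})

# Flag items whose forecast is off by more than 50% in either direction
ratio = df["forecast_12m"] / df["history_12m"]
exceptions = df[(ratio > 1.5) | (ratio < 0.5)]
print(exceptions)  # items B and C deserve a human look
```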

 

How to fix the exceptions

Often when the forecast seems odd, an averaging method, like Single Exponential Smoothing or even a simple average using Freestyle, will produce a more reasonable forecast. If a trend is possibly valid, you can remove only seasonal methods to avoid a falsely seasonal result. Or do the opposite and use only seasonal methods if seasonality is expected but wasn’t projected in the default forecast. You can use the what-if features to create any number of forecasts, evaluate and compare them, and continue to fine-tune the settings until you are comfortable with the forecast.

Cleaning the history, with or without changing the automatic method selection, is also effective at producing reasonable forecasts. You can embed forecast parameters that reduce the number of periods passed into the algorithm so that earlier, outdated history is no longer considered. You can edit spikes or drops in the demand history that are known anomalies so they don’t influence the outcome. You can also work with the Smart team to implement automatic outlier detection and removal so that the data is already cleansed of these anomalies before it is forecasted.
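
As a sketch of what automated outlier cleansing might do (not Smart’s actual algorithm), here is a simple median-based clip that tames known spikes and drops before the history is forecasted:

```python
import numpy as np

def clean_outliers(history, n_sigmas=3.0):
    """Clip demand values beyond n_sigmas robust standard deviations
    from the median. A simple stand-in for automated outlier removal."""
    x = np.asarray(history, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) * 1.4826  # ~sigma for normal data
    lo, hi = med - n_sigmas * mad, med + n_sigmas * mad
    return np.clip(x, lo, hi)

history = [10, 12, 9, 11, 95, 10, 13, 9, 0, 11]  # 95 and 0 are known anomalies
print(clean_outliers(history))  # the spike and the drop are pulled in
```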

If the demand is truly intermittent, it is going to be nearly impossible to forecast “accurately” per period. If a level-loading average is not acceptable, handling the item by setting inventory policy with a lead-time forecast can be effective. Alternatively, you may choose to use “same as last year” models, which, while not especially accurate, will generally be accepted by the business given the alternatives.
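
To see why a lead-time forecast can work where per-period forecasts fail: instead of predicting each period, total the demand over every historical lead-time window and set stock to cover a chosen percentile of those totals. A toy sketch with illustrative numbers:

```python
import numpy as np

# Intermittent weekly demand: mostly zeros with irregular blips
demand = np.array([0, 0, 3, 0, 0, 0, 5, 0, 0, 2,
                   0, 0, 0, 4, 0, 0, 1, 0, 0, 6])

lead_time = 3  # weeks

# Total demand over every rolling lead-time window in the history
windows = np.array([demand[i:i + lead_time].sum()
                    for i in range(len(demand) - lead_time + 1)])

# Stock to cover, say, 95% of lead-time demand outcomes
print("95th percentile of lead-time demand:", np.percentile(windows, 95))
```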

Finally, if the item was introduced so recently that the algorithms do not have enough input to accurately forecast, a simple average or manual forecast may be best.  You can identify new items by filtering on the number of historical periods.
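
Flagging such new items can be as simple as filtering on the number of historical periods; a hypothetical example:

```python
import pandas as pd

df = pd.DataFrame({
    "item":            ["A", "B", "C"],
    "history_periods": [36, 5, 48],  # hypothetical column: months on file
})

# Items introduced too recently for the statistical methods; a simple
# average or manual forecast may serve these better
print(df[df["history_periods"] < 12])  # item B
```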

 

Manual selection of methods

Once you have identified rows where the forecast doesn’t make sense to the human eye, you can choose a smaller subset of all methods to allow into the forecast run and compare to history.  Smart will allow you to use a restricted set of methods just for one forecast run or embed the restricted set to use for all forecast runs going forward. Different methods will project the history into the future in different ways.  Having a sense of how each works will help you choose which to allow.

 

Rely on your forecasting tool

The more you use Smart period over period to embed your decisions about how to forecast and what historical data to consider, the less often you will face exceptions like those described in this blog. Entering forecast parameters is a manageable task when you start with critical or high-impact items. Even if you don’t embed any manual decisions on forecast methods, the forecast re-runs every period with new data, so an item with an odd result today can become easily forecastable in time.