Statistical Forecasting: How Automatic Method Selection Works in Smart IP&O

Smart IP&O offers automated statistical forecasting that selects, for each time series in the data set, the method that best forecasts that item's data.  This blog explains in plain terms how the forecast methods are chosen automatically.

Smart makes many methods available, including single and double exponential smoothing, linear and simple moving average, and Winters models.  Each model is designed to capture a different sort of pattern.  The criterion for automatically choosing one statistical method out of the set is which method came closest to correctly predicting held-out history.

The earlier portion of the demand history is passed to each method, and each method's forecasts are compared to the held-out actuals to find the one that came closest overall.  That "winning" method is then fed the full history for the item to produce the forecast.

The overall nature of the item's demand pattern is captured by holding out different portions of the history, so that an occasional outlier does not unduly influence the choice of method.  You can visualize this in the diagram below, where each row represents a 3-period forecast against held-out history, based on different amounts of the earlier history shown in red.  The errors from each pass are averaged together to determine the method's overall ranking against all other methods.

[Diagram: repeated 3-period forecasts against held-out history, used to rank each forecast method]
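To make the selection mechanics concrete, here is a minimal sketch in Python of picking a method by held-out error.  The two candidate methods, the 3-period holdout, the number of passes, and the smoothing and window parameters are illustrative assumptions, not Smart's actual engine:

```python
# Minimal sketch of automatic method selection by held-out history.
# Candidate methods and parameters are illustrative, not Smart's actual code.

def moving_average_forecast(history, horizon, window=3):
    level = sum(history[-window:]) / min(window, len(history))
    return [level] * horizon

def single_exp_smoothing_forecast(history, horizon, alpha=0.3):
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

CANDIDATES = {
    "moving average": moving_average_forecast,
    "single exponential smoothing": single_exp_smoothing_forecast,
}

def pick_best_method(history, horizon=3, passes=4):
    """Rank each method by its average error over several held-out passes."""
    scores = {}
    for name, method in CANDIDATES.items():
        errors = []
        for p in range(passes):
            cutoff = len(history) - horizon - p   # each pass tests a different held-out window
            if cutoff < horizon:
                break
            forecast = method(history[:cutoff], horizon)
            actual = history[cutoff:cutoff + horizon]
            errors.append(sum(abs(f - a) for f, a in zip(forecast, actual)) / horizon)
        scores[name] = sum(errors) / len(errors)
    winner = min(scores, key=scores.get)
    # The winning method is then fed the full history to produce the final forecast.
    return winner, CANDIDATES[winner](history, horizon)

demand = [12, 15, 11, 14, 18, 13, 16, 19, 15, 17, 20, 16]
print(pick_best_method(demand))
```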

For most time series, this process accurately captures trends, seasonality, and average volume. But sometimes the chosen method comes mathematically closest to predicting the held-out history yet doesn't project it forward in a way that makes sense.

Users can correct this by using the system's exception reports and filtering features to identify items that merit review.  They can then configure which forecast methods should be considered for those items.

 

 

How much time should it take to compute statistical forecasts?
The top factors that impact the speed of your forecast engine 

How long should it take to compute a demand forecast using statistical methods?  Customers and prospects often ask this question.  The honest answer is that it depends.  Forecast results for a single item can be computed in the blink of an eye, in as little as a few hundredths of a second, but sometimes they may require as much as five seconds.  To understand the difference, it's important to understand that there is more involved than grinding through the forecast arithmetic itself.  Here are six factors that influence the speed of your forecast engine.

1) Forecasting method.  Traditional time-series extrapolative techniques (such as exponential smoothing and moving average methods), when cleverly coded, are lightning fast.  For example, the Smart Forecast automatic forecasting engine that leverages these techniques and powers our demand planning and inventory optimization software can crank out statistical forecasts for 1,000 items in one second!  Extrapolative methods produce an expected forecast and a summary measure of forecast uncertainty.  However, more complex models in our platform that generate probabilistic demand scenarios take much longer given the same computing resources.  This is partly because they create a much larger volume of output, usually thousands of plausible future demand sequences.  More time, yes, but not time wasted, since these results are much more complete and form the basis for downstream optimization of inventory control parameters.
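To illustrate why scenario generation costs more than a point forecast, here is a rough sketch.  The bootstrap below is only a stand-in for a probabilistic scenario engine, and the 10,000-scenario count is an illustrative assumption:

```python
import random

def point_forecast(history, horizon, alpha=0.3):
    """One smoothed level repeated over the horizon: a handful of operations per item."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

def demand_scenarios(history, horizon, n_scenarios=10_000):
    """Thousands of plausible demand sequences via a simple bootstrap of past demand."""
    return [[random.choice(history) for _ in range(horizon)] for _ in range(n_scenarios)]

history = [12, 0, 9, 15, 7, 11, 0, 14, 10, 13]
print(point_forecast(history, 6))          # 6 numbers
print(len(demand_scenarios(history, 6)))   # 10,000 sequences of 6 numbers each
```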

2) Computing resources.  The more resources you throw at the computation, the faster it will be.  However, resources cost money, and it may not be economical to invest in them.  For example, to make certain types of machine learning-based forecasts work, the system will need to multi-thread computations across multiple servers to deliver results quickly.  So, make sure you understand the assumed compute resources and associated costs.  Our computations happen on the Amazon Web Services cloud, so it is possible to pay for a great deal of parallel computation if desired.

3) Number of time-series.  Do you have to forecast only a few hundred items in a single location, or many thousands of items across dozens of locations?  The greater the number of SKU x Location combinations, the greater the time required.  However, you can trim that time with better demand classification, because it isn't necessary to forecast every single SKU x Location combination.  Modern Demand Planning Software can first subset the data based on volume/frequency classifications before running the forecast engine (a simple version of this idea is sketched below).  We've observed situations where over one million SKU x Location combinations existed, but only ten percent had any demand in the preceding twelve months.
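As a rough illustration of this kind of pre-filtering, here is a sketch; the classification rule and thresholds are assumptions for the example, not how any particular product does it:

```python
def classify_items(demand_by_item, recent_periods=12, min_hits=1):
    """Split SKU x Location series into 'active' and 'dormant' before forecasting."""
    active, dormant = {}, {}
    for key, history in demand_by_item.items():
        recent = history[-recent_periods:]
        hits = sum(1 for d in recent if d > 0)
        (active if hits >= min_hits else dormant)[key] = history
    return active, dormant

data = {
    ("SKU1", "DC-East"): [0, 3, 5, 0, 4, 6, 2, 0, 3, 5, 4, 1],
    ("SKU2", "DC-East"): [0] * 12,   # no demand in the last year: skip the engine
}
active, dormant = classify_items(data)
print(len(active), "to forecast,", len(dormant), "skipped")
```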

4) Historical Bucketing.  Are you forecasting using daily, weekly, or monthly time buckets?  The more granular the bucketing, the more time it is going to take to compute statistical forecasts.  Many companies will wonder, “Why would anyone want to forecast on a daily basis?” However, state-of-the-art demand forecasting software can leverage daily data to detect simultaneous day-of-week and week-of-month patterns that would otherwise be obscured with traditional monthly demand buckets. And the speed of business continues to accelerate, threatening the competitive viability of the traditional monthly planning tempo.
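Here is a small sketch of the kind of day-of-week signal that daily buckets can expose; the data and the simple averaging rule are purely illustrative:

```python
from collections import defaultdict
from datetime import date, timedelta

def day_of_week_profile(daily_demand, start=date(2023, 1, 2)):
    """Average demand per weekday; a pattern that monthly buckets would hide."""
    totals, counts = defaultdict(float), defaultdict(int)
    for offset, qty in enumerate(daily_demand):
        weekday = (start + timedelta(days=offset)).strftime("%a")
        totals[weekday] += qty
        counts[weekday] += 1
    return {day: totals[day] / counts[day] for day in totals}

# Illustrative data: Mondays run heavy, weekends run light.
daily = [20, 12, 11, 10, 13, 4, 3] * 8
print(day_of_week_profile(daily))
```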

5) Amount of History.  Are you limiting the model by only feeding it the most recent demand history, or are you feeding all available history to the demand forecasting software? The more history you feed the model, the more data must be analyzed and the longer it is going to take.

6) Additional analytical processing.  So far, we’ve imagined feeding items’ demand history in and getting forecasts out. But the process can also involve additional analytical steps that can improve results. Examples include:

a) Outlier detection and removal to minimize the distortion caused by one-off events like storm damage (a simple version is sketched after this list).

b) Machine learning that decides how much history should be used for each item by detecting regime change.

c) Causal modeling that identifies how changes in demand drivers (such as price, interest rate, customer sentiment, etc.) impact future demand.

d) Exception reporting that uses data analytics to identify unusual situations that merit further management review.
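As an example of the first of these steps, here is a minimal outlier-capping sketch.  The robust z-score rule and the 3.5 threshold are a common rule of thumb used for illustration, not Smart's actual method:

```python
from statistics import median

def remove_outliers(history, threshold=3.5):
    """Flag points far from the median (robust z-score) and pull them back to a capped value.
    The 3.5 modified-z threshold is a common rule of thumb, not Smart's actual rule."""
    med = median(history)
    mad = median(abs(x - med) for x in history)
    if mad == 0:
        return list(history)
    cleaned = []
    for x in history:
        modified_z = 0.6745 * (x - med) / mad
        if abs(modified_z) > threshold:
            cap = med + (threshold / 0.6745) * mad * (1 if x > med else -1)
            cleaned.append(cap)   # e.g., a storm-driven spike is pulled back toward the rest
        else:
            cleaned.append(x)
    return cleaned

print(remove_outliers([10, 12, 9, 11, 10, 95, 12, 10, 11]))
```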

 

The Rest of the Story. It’s also critical to understand that the time to get an answer involves more than the speed of forecasting computations per se.  Data must be loaded into memory before computing can begin. Once the forecasts are computed, your browser must load the results so that they may be rendered on screen for you to interact with.  If you re-forecast a product, you may choose to save the results.  If you are working with product hierarchies (aggregating item forecasts up to product families, families up to product lines, etc.), the new forecast is going to impact the hierarchy, and everything must be reconciled.   All of this takes time.

Fast Enough for You? When you are evaluating software to see whether your need for speed will be satisfied, all of this can be tested as part of a proof of concept or trial offered by demand planning software solution providers.  Test it out, and make sure that the compute, load, and save times are acceptable given the volume of data and forecasting methods you want to use to support your process.

 

 

 

Do your statistical forecasts suffer from the wiggle effect?

 What is the wiggle effect? 

It's when your statistical forecast mimics the ups and downs observed in your demand history even though there is no real pattern behind them.  It's important to make sure your forecasts don't wiggle unless there is a real pattern.

Here is a transcript from a recent customer where this issue was discussed:

Customer: “The forecast isn’t picking up on the patterns I see in the history.  Why not?” 

Smart:  “If you look closely, the ups and downs you see aren’t patterns.  It’s really noise.”  

Customer:  “But if we don’t predict the highs, we’ll stock out.”

Smart: “If the forecast were to ‘wiggle’ it would be much less accurate.  The system will forecast whatever pattern is evident, in this case a very slight uptrend.  We’ll buffer against the noise with safety stocks. The wiggles are used to set the safety stocks.”

Customer: “Ok. Makes sense now.” 

[Graph: a statistical forecast with the wiggle effect compared to a better, smoother forecast]

The wiggle looks reassuring but, in this case, it results in an incorrect demand forecast.  The ups and downs aren't really occurring at the same times each month.  A better statistical forecast is shown in light green.
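To put numbers on the conversation above, here is a small sketch with made-up data in which the ups and downs are pure noise that does not repeat at the same times each year.  Copying last year's wiggles roughly doubles the error compared with forecasting the underlying level:

```python
# Illustrative data only: demand hovers around 100 with noisy ups and downs
# that do NOT recur at the same times from one year to the next.
history = [115, 86, 112, 89, 117, 84, 113, 88, 116, 85, 111, 90,
           114, 87, 110, 91, 118, 83, 112, 89, 115, 86, 113, 88]
actual_next_year = [88, 114, 85, 116, 90, 111, 87, 113, 89, 115, 86, 112]

wiggly_forecast = history[-12:]                        # copies last year's ups and downs
flat_forecast = [sum(history) / len(history)] * 12     # follows the underlying level

def mae(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

print("wiggly forecast MAE:", round(mae(wiggly_forecast, actual_next_year), 1))  # ~26
print("flat forecast MAE:  ", round(mae(flat_forecast, actual_next_year), 1))    # ~13
```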

 

 

Beyond the forecast – Collaboration and Consensus Planning

5 Steps to Consensus Demand Planning

The whole point of demand forecasting is to establish the best possible view of future demand.  This requires that we draw upon the best data and inputs we can get, leverage statistics to capture underlying patterns, put our heads together to apply overrides based on business knowledge, and agree on a consensus demand plan that serves as the cornerstone of the company's overall plan.

Step 1: Develop an accurate demand signal.  What constitutes demand?  Consider how your organization defines demand – say, confirmed sales orders net of cancellations, or shipment data adjusted to remove the impact of historical stockouts – and use that definition consistently.  This is your measure of what the market is asking you to deliver.  Don't confuse it with your ability to deliver – that should be reflected in the revenue plan.

Step 2: Generate a statistical forecast.  Plan for thousands of items, using a proven forecasting application that automatically pulls in your data and reliably produces accurate forecasts for all of your items.  Review the first pass of your forecast, then make adjustments.  A strike or train wreck may have interrupted shipping last month – don’t let that wag your forecast.  Adjust for these and reforecast.  Do the best you can, then invite others to weigh in.

Step 3: Bring on the experts.  Product line managers, sales leaders, and key distribution partners know their markets.  Share your forecast with them.  Smart uses the concept of a "Snapshot" to share a facsimile of your forecast – at any level, for any product line – with people who may know better.  There could be an enormous order that hasn't hit the pipeline, or a channel partner may be about to run their annual promotion.  Give them an easy way to take their portion of the forecast and change it.  Drag this month up, that one down …

Step 4: Measure Accuracy and Forecast Value Add.  Some of your contributors may be right on the money; others tend to be biased high or low.  Use forecast vs. actuals reporting and forecast value add (FVA) analysis to measure forecast errors and determine whether changes to the forecast are helping or hurting (a simple version of this calculation is sketched below).  By informing the process with this information, your company will improve its ability to forecast accurately.
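For illustration, here is a minimal sketch of a forecast value add calculation comparing the statistical forecast against an adjusted forecast; the error metric (MAPE) and the numbers are illustrative assumptions:

```python
def mape(forecast, actual):
    """Mean absolute percentage error over periods with non-zero actuals."""
    pairs = [(f, a) for f, a in zip(forecast, actual) if a != 0]
    return 100 * sum(abs(f - a) / a for f, a in pairs) / len(pairs)

actuals          = [120, 95, 130, 110, 105, 140]
statistical_fcst = [115, 100, 125, 112, 108, 132]
adjusted_fcst    = [130, 90, 150, 100, 120, 155]   # after a contributor's overrides

stat_error = mape(statistical_fcst, actuals)
adj_error = mape(adjusted_fcst, actuals)
print(f"statistical MAPE: {stat_error:.1f}%, adjusted MAPE: {adj_error:.1f}%")
print(f"forecast value add: {stat_error - adj_error:+.1f} points")  # negative = overrides hurt
```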

Step 5: Agree on the Consensus Forecast.  You can do this one product line or geography at a time, or business by business.  Convene the team, graphically stack up their inputs, review past accuracy performance, discuss their reasons for increasing or reducing the forecast, and agree on whose inputs to use.  This becomes your consensus plan.  Finalize the plan and send it off – upload forecasts to MRP, send them to finance and manufacturing.  You have just kicked off your Sales, Inventory and Operational Planning process.

You can do this.  And we can help.  If you have any questions about collaborative demand planning, please reply to this blog and we will follow up.

 

 

 

Why Days of Supply Targets Don’t Work when Computing Safety Stocks

CFOs tell us they need to spend less on inventory without impacting sales.  One way to do that is to move away from using targeted days of supply to determine reorder points and safety stock buffers.  Here is how a days of supply model works:

  1. Compute average demand per day and multiply it by the supplier lead time in days to get lead time demand.
  2. Pick a days of supply buffer (e.g., 15, 30, or 45 days), using larger buffers for more important items and smaller buffers for less important items.
  3. Add the desired days of supply buffer to demand over the lead time to get the reorder point. Order more when on-hand inventory falls below the reorder point (a simple version of this calculation is sketched below).
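For illustration only, here is the days of supply arithmetic from the three steps above expressed as a small function; the demand, lead time, and buffer values are made up:

```python
def days_of_supply_reorder_point(daily_demand_history, lead_time_days, buffer_days):
    """Reorder point = lead time demand + a fixed days-of-supply buffer."""
    avg_daily_demand = sum(daily_demand_history) / len(daily_demand_history)
    lead_time_demand = avg_daily_demand * lead_time_days
    safety_stock = avg_daily_demand * buffer_days
    return lead_time_demand + safety_stock

# e.g., 10 units/day on average, 14-day lead time, 30-day buffer -> reorder at 440 units
print(days_of_supply_reorder_point([10] * 90, lead_time_days=14, buffer_days=30))
```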

Here is what is wrong with this approach:

  1. The average doesn't account for seasonality and trend – you'll miss obvious patterns unless you spend lots of time manually adjusting for them.
  2. The average doesn't consider how predictable an item is – you'll overstock predictable items and understock less predictable ones, because the same days of supply yields a very different stock-out risk for different items.
  3. The average doesn't tell a planner how stock-out risk is affected by the level of inventory – you'll have no idea whether you are understocked, overstocked, or have just enough. You are essentially planning with blinders on.

There are many other "rule of thumb" approaches that are equally problematic.  You can learn more about them in this post.

A better way to plan the right amount of safety stock is to leverage probability models that identify exactly how much stock is needed given the stock-out risk you are willing to accept.  Below is a screenshot of Smart Inventory Optimization doing exactly that.  First, it details the predicted service levels (the probability of not stocking out) associated with the current days of supply logic.  The planner can now see the parts where the predicted service level is too low or too costly.  They can then make immediate corrections by targeting the desired service levels and level of inventory investment.  Without this information, a planner won't know whether the targeted days of safety stock is too much, too little, or just right, resulting in overstocks and shortages that cost market share and revenue.

[Screenshot: Smart Inventory Optimization showing predicted service levels for the current days of supply settings]
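For readers who want to see the arithmetic behind targeting a service level instead of a days of supply buffer, here is a minimal sketch.  It uses a normal-distribution model of lead-time demand purely for illustration; Smart's software relies on probability models fitted to each item's demand rather than this simplification, and the numbers below are made up:

```python
from statistics import NormalDist, mean, stdev

def service_level_reorder_point(daily_demand_history, lead_time_days, service_level):
    """Reorder point sized to hit a target probability of not stocking out during lead time.
    A normal model of lead-time demand is a simplifying assumption used here for illustration."""
    mu_d = mean(daily_demand_history)
    sigma_d = stdev(daily_demand_history)
    lead_time_demand = mu_d * lead_time_days
    sigma_lt = sigma_d * lead_time_days ** 0.5       # demand variability over the lead time
    z = NormalDist().inv_cdf(service_level)
    safety_stock = z * sigma_lt
    return lead_time_demand + safety_stock

history = [4, 12, 7, 0, 15, 9, 3, 11, 6, 14, 8, 2, 10, 5, 13]
for sl in (0.90, 0.95, 0.99):
    print(f"{sl:.0%} service level -> reorder point {service_level_reorder_point(history, 14, sl):.0f}")
```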