Extend Epicor BisTrack with Smart IP&O’s Dynamic Reorder Point Planning & Forecasting

In this article, we will review the “suggested orders” functionality in Epicor BisTrack, explain its limitations, and summarize how Smart Inventory Planning & Optimization (Smart IP&O) can help reduce inventory & minimize stock-outs by accurately assessing the tradeoffs between stockout risks and inventory costs.

Automating Replenishment in Epicor BisTrack
Epicor BisTrack’s “Suggested Ordering” can manage replenishment by suggesting what to order and when via reorder point-based policies such as min-max and/or manually specified weeks of supply. BisTrack contains some basic functionality to compute these parameters based on average usage or sales, supplier lead time, and/or user-defined seasonal adjustments. Alternatively, reorder points can be specified completely manually. BisTrack will then present the user with a list of suggested orders by reconciling incoming supply, current on hand, outgoing demand, and stocking policies.

How Epicor BisTrack “Suggested Ordering” Works
To get a list of suggested orders, users specify the methods behind the suggestions, including locations for which to place orders and how to determine the inventory policies that govern when a suggestion is made and in what quantity.

First, the “method” field is specified from the following options to determine what kind of suggestion is generated and for which location(s):

Purchase – Generate purchase order recommendations.

  1. Centralized for all branches – Generates suggestions for a single location that buys for all other locations.
  2. By individual branch – Generates suggestions for multiple locations (vendors would ship directly to each branch).
  3. By source branch – Generates suggestions for a source branch that will transfer material to branches that it services (“hub and spoke”).
  4. Individual branches with transfers – Generates suggestions for an individual branch that will transfer material to branches that it services (“hub and spoke”, where the “hub” does not need to be a source branch).

Manufacture – Generate work order suggestions for manufactured goods.

  1. By manufacture branch.
  2. By individual branch.

Transfer from source branch – Generate transfer suggestions from a given branch to other branches.

Next, the “suggest order to” is specified from the following options:

  1. Minimum – Suggests orders “up to” the minimum on-hand quantity (“min”). For any item where supply is less than the min, BisTrack will suggest an order to replenish up to this quantity.
  2. Maximum when less than min – Suggests orders “up to” a maximum on-hand quantity when the minimum on-hand quantity is breached (i.e., a min-max inventory policy).
  3. Based on cover (usage) – Suggests orders based on coverage for a user-defined number of weeks of supply with respect to a specified lead time. Using internal usage as demand, BisTrack will recommend orders to cover the difference wherever supply is less than the desired coverage.
  4. Based on cover (sales) – The same coverage logic as above, but using sales orders as demand.
  5. Maximum only – Suggests orders “up to” a maximum on-hand quantity wherever supply is less than this max.

Finally, if allowing BisTrack to determine the reorder thresholds, users can specify additional inventory coverage as buffer stock, lead times, how many months of historical demand to consider, and can also manually define period-by-period weighting schemes to approximate seasonality. The user will be handed a list of suggested orders based on the defined criteria. A buyer can then generate POs for suppliers with the click of a button.
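
For readers who want to see the reconciliation logic spelled out, here is a minimal sketch of how a “maximum when less than min” rule turns supply, demand, and stocking parameters into a suggested quantity. This is illustrative Python, not BisTrack’s code, and the field names (on_hand, on_order, committed) are our own assumptions:

```python
# Illustrative sketch of a min/max "suggested order" rule -- not BisTrack's code.
from dataclasses import dataclass

@dataclass
class ItemPosition:
    on_hand: float      # current stock
    on_order: float     # incoming supply (open POs / transfers)
    committed: float    # outgoing demand already allocated to orders
    min_qty: float      # reorder point ("min")
    max_qty: float      # order-up-to level ("max")

def suggested_quantity(item: ItemPosition) -> float:
    """Suggest ordering up to the max when projected supply falls below the min."""
    projected = item.on_hand + item.on_order - item.committed
    if projected < item.min_qty:
        return item.max_qty - projected
    return 0.0

# Example: 120 on hand, 40 committed, min 100, max 300 -> suggest 220.
print(suggested_quantity(ItemPosition(on_hand=120, on_order=0,
                                      committed=40, min_qty=100, max_qty=300)))
```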

Extend Epicor BisTrack Planning and Forecasting

Limitations

Rule-of-thumb Methods

While BisTrack can generate reorder points automatically, these methods rely on simple averages that do not capture seasonality, trend, or the volatility in an item’s demand. Averages always lag behind these patterns. Consider a highly seasonal product like a snow shovel: if we average Summer/Fall demand as we approach the Winter season instead of looking ahead, the recommendations will be based on the slower periods rather than anticipating upcoming demand. Even if we consider an entire year’s worth of history or more, the recommendations will overcompensate during the slower months and underestimate the busy season without manual intervention.

Rule-of-thumb methods also fail when used to buffer against supply and demand variability. For example, the average demand over the lead time might be 20 units. However, a planner would often want to stock more than 20 units to avoid stocking out if lead times are longer than expected or demand is higher than average. BisTrack allows users to specify reorder points as multiples of the averages. But because those multiples don’t account for how predictable or variable the demand actually is, you’ll consistently overstock predictable items and understock unpredictable ones. Read this article to learn more about why multiples of the average fail when it comes to developing the right reorder point.
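
The following sketch illustrates the point. It compares a “multiple of the average” reorder point with one that adds a buffer scaled to demand variability (here via a simple normal approximation, which is itself crude for lumpy demand, but enough to show the problem). The item histories and parameter values are made up, and this is not Smart IP&O’s method:

```python
import statistics
from math import sqrt

def rop_multiple_of_average(daily_demand, lead_time_days, multiple=1.5):
    """Rule of thumb: reorder point = a multiple of average demand over the lead time."""
    return multiple * statistics.mean(daily_demand) * lead_time_days

def rop_with_safety_stock(daily_demand, lead_time_days, z=1.65):
    """Average lead-time demand plus a buffer scaled to demand volatility
    (normal approximation; z = 1.65 targets roughly a 95% service level)."""
    mu = statistics.mean(daily_demand)
    sigma = statistics.stdev(daily_demand)
    return mu * lead_time_days + z * sigma * sqrt(lead_time_days)

stable  = [10, 11, 9, 10, 10, 11, 9, 10]   # predictable item
erratic = [0, 0, 35, 0, 0, 45, 0, 0]       # volatile, lumpy item (same average)
for name, history in (("stable", stable), ("erratic", erratic)):
    print(name,
          round(rop_multiple_of_average(history, lead_time_days=7)),
          round(rop_with_safety_stock(history, lead_time_days=7)))
# Both items get the same rule-of-thumb reorder point (105), even though the
# erratic item needs far more buffer and the stable item needs far less.
```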

Manual Entry
As noted above, BisTrack does let the user approximate seasonality through manually entered “weights” for each period. This forces the user to decide what that seasonal pattern looks like, item by item. Beyond that, the user must dictate how many extra weeks of supply to carry as a buffer against stockouts and must specify what lead time to plan around. Is 2 weeks of extra supply enough? Is 3 enough? Or is that too much? There is no way to know without guessing, and what makes sense for one item might not be the right approach for all items.

Intermittent Demand
Many BisTrack customers may consider certain items “unforecastable” because of the intermittent or “lumpy” nature of their demand: sporadic orders, large spikes, and long stretches with little or no demand at all. Traditional methods, and rule-of-thumb approaches especially, won’t work for these kinds of items. For example, 2 extra weeks of supply might be far too much for a highly predictable, stable item, while for an item with highly volatile demand the same rule might not be enough. Without a reliable way to objectively assess this volatility for each item, buyers are left guessing when to buy and how much.

Reverting to Spreadsheets
The reality is that most BisTrack users do the bulk of their planning offline, in Excel. Spreadsheets aren’t purpose-built for forecasting and inventory optimization, and users often bake in rule-of-thumb logic that does more harm than good. Once calculated, the results must be keyed back into BisTrack manually. Because the process is so time-consuming, companies recompute their inventory policies infrequently; many months, and occasionally years, go by between mass updates, leading to a “set it and forget it” reactive approach in which the only time a buyer or planner reviews an inventory policy is at the time of order.

By the time the order point is breached, a review is already too late. If the order point is deemed too high, manual interrogation is required to review history, calculate forecasts, assess buffer positions, and recalibrate. The sheer volume of orders means buyers will often just release orders rather than take the painstaking time to review everything, leading to significant excess stock. If the reorder point is too low, it’s already too late: an expedite may now be required, driving up costs, assuming the customer doesn’t simply go elsewhere.

Epicor is Smarter
Epicor has partnered with Smart Software to offer Smart IP&O as a cross-platform add-on to its ERP solutions, including BisTrack, a specialty ERP for the lumber, hardware, and building materials industry. The Smart IP&O solution comes complete with a bidirectional integration to BisTrack, enabling Epicor customers to leverage a purpose-built, best-of-breed inventory optimization application. With Epicor Smart IP&O you can generate forecasts that capture trend and seasonality without manual configuration. You can automatically recalibrate inventory policies using field-proven, cutting-edge statistical and probabilistic models engineered to accurately plan for intermittent demand. Safety stocks will accurately account for demand and supply variability, business conditions, and priorities. You can use service-level-driven planning so you have just enough stock, or turn on optimization methods that prescribe the most profitable stocking policies and service levels by considering the real cost of carrying inventory. You can support commodity buys with accurate demand forecasting over longer horizons, and run “what-if” scenarios to assess alternative strategies before executing the plan.

Smart IP&O customers routinely realize 7 figure annual returns from reduced expedites, increased sales, and less excess stock, all the while gaining a competitive edge by differentiating themselves on improved customer service. To see a recorded webinar hosted by the Epicor Users Group that profiles Smart’s Demand Planning and Inventory Optimization platform, please register here.

Rethinking forecast accuracy: A shift from accuracy to error metrics

Measuring the accuracy of forecasts is an undeniably important part of the demand planning process. This forecasting scorecard could be built based on one of two contrasting viewpoints for computing metrics. The error viewpoint asks, “how far was the forecast from the actual?” The accuracy viewpoint asks, “how close was the forecast to the actual?” Both are valid, but error metrics provide more information.

Accuracy is represented as a percentage between zero and 100, while error percentages start at zero but have no upper limit. Reports of MAPE (mean absolute percent error) or other error metrics can be titled “forecast accuracy” reports, which blurs the distinction.  So, you may want to know how to convert from the error viewpoint to the accuracy viewpoint that your company espouses.  This blog describes how with some examples.

Accuracy metrics are computed so that when the actual equals the forecast, accuracy is 100%, and when the error is as large as the actual itself (for example, the forecast is double the actual), accuracy is 0%. Reports that compare the forecast to the actual often include the following:

  • The Actual
  • The Forecast
  • Unit Error = Forecast – Actual
  • Absolute Error = Absolute Value of Unit Error
  • Absolute % Error = Abs Error / Actual, as a %
  • Accuracy % = 100% – Absolute % Error

Let’s look at a couple of examples that illustrate the difference in the approaches. Say the actual is 8 and the forecast is 10.

Unit Error is 10 – 8 = 2

Absolute % Error = 2 / 8, as a % = 0.25 * 100 = 25%

Accuracy = 100% – 25% = 75%.

Now let’s say the actual is 8 and the forecast is 24.

Unit Error is 24 – 8 = 16

Absolute % Error = 16 / 8 as a % = 2 * 100 = 200%

Accuracy = 100% – 200% = –100%, which is negative, so accuracy is set to 0%.

In the first example, the accuracy measurement provides the same information as the error measurement, since the forecast and actual are already relatively close. But once the error is as large as the actual, accuracy measurements bottom out at zero. That does correctly indicate the forecast was not at all accurate. Yet the second example is still far closer to the actual than a third, where the actual is 8 and the forecast is 200. That is a distinction the 0 to 100% accuracy range doesn’t register. In this final example:

Unit Error is 200 – 8 = 192

Absolute % Error = 192 / 8, as a % = 24 * 100 = 2,400%

Accuracy = 100% – 2,400% = –2,300%, which is negative, so accuracy is again set to 0%.
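
The arithmetic above is easy to script. The sketch below (our own helper, not tied to any particular reporting tool) reproduces the three examples and shows how the accuracy view floors at zero while the error view keeps growing:

```python
def forecast_metrics(actual: float, forecast: float) -> dict:
    """Error-view and accuracy-view metrics for a single forecast/actual pair."""
    unit_error = forecast - actual
    abs_pct_error = 100 * abs(unit_error) / actual     # unbounded above
    accuracy_pct = max(0.0, 100 - abs_pct_error)       # floored at 0%
    return {"unit_error": unit_error,
            "abs_pct_error": abs_pct_error,
            "accuracy_pct": accuracy_pct}

for forecast in (10, 24, 200):                         # the three examples, actual = 8
    print(forecast, forecast_metrics(actual=8, forecast=forecast))
# 10  -> error   25%, accuracy 75%
# 24  -> error  200%, accuracy  0%
# 200 -> error 2400%, accuracy  0%
```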

Error metrics continue to provide information on how far the forecast is from the actual and arguably better represent forecast accuracy.

We encourage adopting the error viewpoint. You simply hope for a small error percentage to indicate the forecast was not far from the actual, instead of hoping for a large accuracy percentage to indicate the forecast was close to the actual.  This shift in mindset offers the same insights while eliminating distortions.

The Automatic Forecasting Feature

Automatic forecasting is the most popular and most used feature of SmartForecasts and Smart Demand Planner. Creating Automatic forecasts is easy, but that simplicity masks a powerful interaction among a number of highly effective forecasting methods. In this blog, we discuss some of the theory behind this core feature. We focus on Automatic forecasting partly because of its popularity and partly because many other forecasting methods produce similar outputs. Knowledge of Automatic forecasting immediately carries over to Simple Moving Average, Linear Moving Average, Single Exponential Smoothing, Double Exponential Smoothing, Winters’ Exponential Smoothing, and Promo forecasting.

 

Forecasting tournament

Automatic forecasting works by conducting a tournament among a set of competing methods. Because personal computers and cloud computing are fast, and because we have coded very efficient algorithms into the SmartForecasts Automatic forecasting engine, it is practical to take a purely empirical approach to deciding which extrapolative forecasting method to use. This means that you can afford to try out a number of approaches and then retain the one that does best at forecasting the particular data series at hand. SmartForecasts fully automates this process for you by trying the different forecasting methods in a simulated forecasting tournament. The winner of the tournament is the method that comes closest to predicting new data values from old. Accuracy is measured by average absolute error (that is, the average error, ignoring any minus signs). The average is computed over a set of forecasts, each using a portion of the data, in a process known as sliding simulation.

 

Sliding simulation

The sliding simulation sweeps repeatedly through ever-longer portions of the historical data, in each case forecasting ahead the desired number of periods in your forecast horizon. Suppose there are 36 historical data values and you need to forecast six periods ahead. Imagine that you want to assess the forecast accuracy of some particular method, say a moving average of four observations, on the data series at hand.

At one point in the sliding simulation, the first 24 points (only) are used to forecast the 25th through 30th historical data values, which we temporarily regard as unknown. We say that points 25-30 are “held out” of the analysis. Computing the absolute values of the differences between the six forecasts and the corresponding actual historical values provides one instance each of a 1-step, 2-step, 3-step, 4-step, 5-step, and 6-step ahead absolute forecast error. Repeating this process using the first 25 points provides more instances of 1-step, 2-step, 3-step ahead errors, and so on. The average over all of the absolute error estimates obtained this way provides a single-number summary of accuracy.
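
Here is a stripped-down sketch of the sliding simulation and a two-method tournament in Python. It is only meant to make the mechanics concrete; the real engine includes more methods, seasonal handling, and optimized parameter search, and the series and settings below are invented:

```python
def moving_average_forecast(history, window=4, horizon=6):
    """Flat forecast equal to the mean of the last `window` observations."""
    level = sum(history[-window:]) / window
    return [level] * horizon

def single_smoothing_forecast(history, alpha=0.3, horizon=6):
    """Flat forecast from a single-exponential-smoothing level."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return [level] * horizon

def sliding_sim_error(series, forecaster, horizon=6, min_history=12):
    """Average absolute error over all held-out forecasts (sliding simulation)."""
    errors = []
    for t in range(min_history, len(series)):
        forecasts = forecaster(series[:t])       # fit on the first t points only
        held_out = series[t:t + horizon]         # temporarily treated as "unknown"
        errors += [abs(f - a) for f, a in zip(forecasts, held_out)]
    return sum(errors) / len(errors)

series = [12, 14, 13, 15, 18, 21, 19, 22, 25, 24, 27, 30,
          29, 31, 34, 33, 36, 38, 37, 40, 42, 41, 44, 47]   # 24 made-up periods
tournament = {
    "moving average (4)": moving_average_forecast,
    "single smoothing (alpha=0.3)": single_smoothing_forecast,
}
scores = {name: sliding_sim_error(series, fn) for name, fn in tournament.items()}
print(scores, "winner:", min(scores, key=scores.get))
```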

 

Methods used in Automatic forecasting

Normally, there are six extrapolative forecasting methods competing in the Automatic forecasting tournament:

  • Simple moving average
  • Linear moving average
  • Single exponential smoothing
  • Double exponential smoothing
  • Additive version of Winters’ exponential smoothing
  • Multiplicative version of Winters’ exponential smoothing

 

The latter two methods are appropriate for seasonal series; however, they are automatically excluded from the tournament if there are fewer than two full seasonal cycles of data (for example, fewer than 24 periods of monthly data or eight periods of quarterly data).

These six classical, smoothing-based methods have proven themselves to be easy to understand, easy to compute and accurate. You can exclude any of these methods from the tournament if you have a preference for some of the competitors and not others.

6 Observations About Successful Demand Forecasting Processes

1. Forecasting is an art that requires a mix of professional judgment and objective statistical analysis. Successful demand forecasts require a baseline prediction leveraging statistical forecasting methods. Once established, the process can focus on how best to adjust statistical forecasts based on your own insights and business knowledge.

2. The forecasting process is usually iterative. You may need to make several refinements of your initial forecast before you are satisfied. It is important to be able to generate and compare alternative forecasts quickly and easily. Tracking accuracy of these forecasts over time, including alternatives that were not used, helps inform and improve the process.

3. The credibility of forecasts depends heavily on graphical comparisons with historical data.  A picture is worth a thousand words, so always display forecasts via instantly available graphical displays with supporting numerical reports.

4. One of the major technical tasks in forecasting is to match the choice of forecasting technique to the nature of the data. Effective demand forecasting processes employ capabilities that identify the right method to use.  Features of a data series like trend, seasonality or abrupt shifts in level suggest certain techniques instead of others. An automatic selection, which selects and uses the appropriate forecasting method automatically, saves time and ensures your baseline forecast is as accurate as possible.

5. Successful demand forecasting processes work in tandem with other business processes.   For example, forecasting can be an essential first step in financial analysis.  In addition, accurate sales and product demand forecasts are fundamental inputs to a manufacturing company’s production planning and inventory control processes.

6. A good planning process recognizes that forecasts are never exactly correct. Because some error creeps into even the best forecasting process, one of the most useful supplements to a forecast is an honest estimate of its margin of error and forecast bias.

What makes a probabilistic forecast?

What’s all the hoopla around the term “probabilistic forecasting?” Is it just a more recent marketing term some software vendors and consultants have coined to feign innovation? Is there any real tangible difference compared to predecessor “best fit” techniques?  Aren’t all forecasts probabilistic anyway?

To answer this question, it is helpful to think about what the forecast is really telling you in terms of probabilities. A “good” forecast should be unbiased and therefore have a 50/50 probability of being higher or lower than the actual. A “bad” forecast builds in subjective buffers (or artificially depresses the forecast) and results in predictions that are biased high or low. Consider a salesperson who intentionally reduces their forecast by not reporting sales they expect to close, in order to be “conservative.” Their forecasts will have negative bias, as actuals will nearly always be higher than what they predicted. On the other hand, consider a customer that provides an inflated forecast to their manufacturer. Worried about stockouts, they overestimate demand to ensure their supply. Their forecast will have a positive bias, as actuals will nearly always be lower than what they predicted.

These one-number forecasts are problematic. We refer to them as “point forecasts” since they represent one point (or a series of points over time) on a plot of what might happen in the future. They don’t provide a complete picture: making effective business decisions, such as determining how much inventory to stock or how many employees must be available to support demand, requires detailed information on how much lower or higher the actual might be. In other words, you need the probabilities for each possible outcome that might occur. So, by itself, a point forecast isn’t probabilistic.

To get a probabilistic forecast, you need to know the distribution of possible demands around that forecast.  Once you compute this, the forecast becomes “probabilistic.”  How forecasting systems and practitioners such as demand planners, inventory analysts, material managers, and CFOs determine these probabilities is the heart of the question: “what makes a forecast probabilistic?”     

Normal Distributions
Most forecasts, and the systems/software that produce them, start with a prediction of demand. They then determine the range of possible demands around that forecast by making theoretical assumptions about the distribution that often turn out to be incorrect. If you’ve ever used a “confidence interval” in your forecasting software, it is based on a probability distribution assumed around the forecast. The way this range of demand is determined is to assume a particular type of distribution, most often a bell-shaped (normal) distribution. When demand is intermittent, some inventory optimization and demand forecasting systems may instead assume a Poisson distribution.

After creating the forecast, the assumed distribution is slapped around the demand forecast, and you then have your estimate of probabilities for every possible demand, i.e., a “probabilistic forecast.” These estimates of demand and their associated probabilities can then be used to determine extreme values, or anything in between, as desired. The extreme values at the upper percentiles of the distribution (e.g., 92%, 95%, 99%) are most often used as inputs to inventory control models. For example, reorder points for critical spare parts in an electrical utility might be planned at a 99.5% service level or even higher, while a non-critical service part might be planned at an 85% or 90% service level.

The problem with making assumptions about the distribution is that you’ll get these probabilities wrong. For example, if the demand isn’t normally distributed but you force a bell-shaped/normal curve onto the forecast, then the probabilities will be incorrect. Specifically, you might want to know the level of inventory needed to achieve a 99% probability of not running out of stock, and the normal distribution tells you to stock 200 units. But when compared to the actual demand, you find that 200 units only covered demand entirely in 40 of 50 observations. So instead of a 99% service level, you only achieved an 80% service level! This is a gigantic miss, resulting from trying to fit a square peg into a round hole, and it would have led you to take an incorrect inventory reduction.
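
A quick way to see this in practice is to check the stocking level implied by the normal assumption against your own demand history. In the sketch below the demand numbers are invented and deliberately lumpy; the point is that the normal-based “99%” level sits well below the biggest observed demands, so the achieved service level falls short of the nominal one:

```python
import statistics
from statistics import NormalDist

# Hypothetical lead-time demand history: mostly nothing, occasional large spikes.
lead_time_demand = [0] * 40 + [10, 20, 30, 40, 50, 60, 80, 100, 400, 800]

mu = statistics.mean(lead_time_demand)
sigma = statistics.stdev(lead_time_demand)

# Stocking level a normal assumption says covers demand 99% of the time.
stock_normal = NormalDist(mu, sigma).inv_cdf(0.99)

# What that level actually achieved against the observed demands.
achieved = sum(d <= stock_normal for d in lead_time_demand) / len(lead_time_demand)
print(round(stock_normal), f"achieved {achieved:.0%} vs. nominal 99%")
```

With these made-up numbers, the normal assumption suggests stocking about 324 units, yet the 400- and 800-unit demands that actually occurred would both have stocked out, so the achieved non-stockout rate is 96%, not the nominal 99%.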

Empirically Estimated Distributions are Smart
To produce a smart (read: accurate) probabilistic forecast, you need to first estimate the distribution of demand empirically, without naïve assumptions about its shape. Smart Software does this by running tens of thousands of simulated demand and lead time scenarios. Our solution leverages patented techniques that incorporate Monte Carlo simulation, statistical bootstrapping, and other methods. The scenarios are designed to simulate the real-life uncertainty and randomness of both demand and lead times. Actual historical observations are the primary inputs, but the solution gives you the option of simulating from non-observed values as well. For example, just because 100 units was the peak historical demand doesn’t mean you are guaranteed to peak at 100 in the future. Once the scenarios are done, you will know the probability of each outcome. The “point” forecast then becomes the center of that distribution, and each future period over time is expressed in terms of the probability distribution associated with that period.
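
The basic idea can be sketched in a few lines of Python: resample observed daily demand over resampled lead times to build thousands of lead-time-demand scenarios, then read stocking levels straight off the resulting empirical distribution. This is a toy, vanilla bootstrap for illustration only, not Smart Software’s patented method, and the history below is invented:

```python
import random
import statistics

random.seed(7)

# Invented history: intermittent daily demand and a handful of observed lead times.
daily_demand = [0, 0, 3, 0, 0, 12, 0, 1, 0, 0, 7, 0, 0, 0, 25, 0, 2, 0, 0, 4]
lead_times = [5, 7, 6, 9, 6]            # days, from past receipts

def lead_time_demand_scenarios(n_scenarios=20_000):
    """Monte Carlo / bootstrap: total demand over a resampled lead time."""
    scenarios = []
    for _ in range(n_scenarios):
        lt = random.choice(lead_times)                        # a plausible lead time
        scenarios.append(sum(random.choice(daily_demand) for _ in range(lt)))
    return scenarios

scenarios = lead_time_demand_scenarios()
percentiles = statistics.quantiles(scenarios, n=100)
# Empirical stocking levels for a 90%, 95%, or 99% chance of covering demand.
print({f"{p}%": percentiles[p - 1] for p in (90, 95, 99)})
```

Because the percentiles come from the simulated scenarios themselves, no bell-curve assumption is needed; the skewness and intermittency in the history flow straight into the stocking levels.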

Leaders in Probabilistic Forecasting
Smart Software, Inc. was the first company to ever introduce statistical bootstrapping as part of a commercially available demand forecasting software system twenty years ago.  We were awarded a US patent at the time for it and named a finalist in the APICS Corporate Awards of Excellence for Technological Innovation.  Our NSF Sponsored research that led to this and other discoveries were instrumental in advancing forecasting and inventory optimization.    We are committed to ongoing innovation, and you can find further information about our most recent patent here.