A Rough Map of Forecasting-Related Terms

People new to the roles of “demand planner” or “supply planner” are likely to have questions about the various forecasting terms and methods they encounter. This note may help by explaining these terms and showing how they relate.

 

Demand Planning

Demand planning is about how much of what you have to sell will go out the door in the future, e.g., how many what-nots you will sell next quarter. Here are five methodologies often used in demand planning.

  • Statistical Forecasting
    • These methods use demand history to forecast future values. The two most common methods are curve fitting and data smoothing.
    • Curve fitting matches a simple mathematical function, such as the equation for a straight line (y = a + b∙t) or an interest-rate-type growth curve (y = a∙b^t), to the demand history. Then it extends that line or curve forward in time as the forecast (see the first sketch after this list).
    • In contrast, data smoothing does not produce an equation. Instead it sweeps through the demand history, averaging values along the way, to create a smoother version of the history. The best-known methods are exponential smoothing and the moving average. In the simplest case (i.e., in the absence of trend or seasonality, for which variants exist), the goal is to estimate the current average level of demand and use that as the forecast (see the second sketch after this list).
    • These methods produce “point forecasts”, which are single-number estimates for each future time period (e.g., “Sales in March will be 218 units”). Sometimes they come with estimates of potential forecast error bolted on using separate models of demand variability (“Sales in March will be 218 ± 120 units”).
  • Probabilistic Forecasting
    • This approach keys on the randomness of demand and works hard to estimate forecast uncertainty. It regards forecasting less as an exercise in cranking out specific numbers and more as an exercise in risk management.
    • It explicitly models the variability in demand and uses that to present results in the form of a large number of scenarios constructed to show the full range of possible demand sequences. These are especially useful in tactical supply planning tasks, such as setting reorder points and order quantities (see the scenario sketch after this list).
  • Causal Forecasting
    • Statistical forecasting models use as inputs only the past demand history of the item in question. They regard the up-and-down wiggles in the demand plot as the end result of myriad unnamed factors (interest rates, the price of tea in China, phases of the moon, whatever). Causal forecasting, in contrast, explicitly identifies one or more factors (interest rates, advertising spend, competitors’ prices, …) that could plausibly influence sales. Then it builds an equation relating the numerical values of these “drivers” or “causal factors” to item sales. The equation’s coefficients are estimated by “regression analysis” (see the regression sketch after this list).
  • Judgemental Forecasting
    • Golden Gut. Despite the general availability of gobs of data, some companies pay little attention to the numbers and give greater weight to the subjective judgements of an executive deemed to have a “Golden Gut”, which allows him or her to use “gut feel” to predict what future demand will be. If that person has great experience, has spent a career actually looking at the numbers, and is not prone to wishful thinking or other forms of cognitive bias, the Golden Gut can be a cheap, fast way to plan. But there is good evidence from studies of companies run this way that relying on the Golden Gut is risky.
    • Group Consensus. More common is a process that uses a periodic meeting to create a group consensus forecast. The group will have access to shared objective data and forecasts, but members will also have knowledge of factors that may not be measured well or at all, such as consumer sentiment or the stories relayed by sales reps. It is helpful to have a shared, objective starting point for these discussions consisting of some sort of objective statistical analysis. Then the group can consider adjusting the statistical forecast. This process anchors the forecast in objective reality but exploits all the other information available outside the forecasting database.
    • Scenario Generation. Sometimes several people will meet and discuss “strategic what-if” questions. “What if we lose our Australian customers?” “What if our new product roll-out is delayed by six months?” “What if our sales manager for the mid-west jumps to a competitor?” These bigger-picture questions can have implications for item-specific forecasts and might be added to any group-consensus forecasting meeting.
  • New Product Forecasting
    • New products, by definition, have no sales history to support statistical, probabilistic, or causal forecasting. Subjective forecasting methods can always be used here, but these often rely on a dangerous ratio of hopes to facts. Fortunately, there is at least partial support for objective forecasting in the form of curve fitting.
    • A graph of the cumulative sales of an item often traces out some sort of “S-curve”, i.e., a graph that starts at zero, builds up, then levels off at a final lifetime total. The curve gets its name because it looks like a letter S somehow smeared and stretched to the right. There are infinitely many possible S-curves, so forecasters typically pick an equation and subjectively specify some key parameter values, such as when sales will hit 25%, 50% and 75% of total lifetime sales and what that final level will be. This is still overtly subjective, but it produces detailed period-by-period forecasts that can be updated as experience accumulates. Finally, S-curves are sometimes shaped to match the known history of a similar predecessor product (“Sales for our last gizmo looked like this, so let’s use that as a template.”). The last sketch after this list shows a simple logistic S-curve.
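To make the statistical methods above concrete, here is a minimal curve-fitting sketch in Python. It is only an illustration built on made-up monthly demand numbers, not any particular product’s algorithm: it fits the straight line y = a + b∙t by ordinary least squares, fits the growth curve y = a∙b^t by fitting a line to log(y), and extends both forward as point forecasts.

    import numpy as np

    # Made-up monthly demand history (illustration only)
    history = np.array([112, 118, 121, 130, 128, 137, 141, 150, 149, 158, 163, 171], float)
    t = np.arange(1, len(history) + 1)

    # Straight line y = a + b*t, fitted by ordinary least squares
    b_lin, a_lin = np.polyfit(t, history, 1)

    # Growth curve y = a * b**t, fitted as a straight line on log(y)
    slope, intercept = np.polyfit(t, np.log(history), 1)
    a_exp, b_exp = np.exp(intercept), np.exp(slope)

    # Extend each fitted curve three periods beyond the end of the history
    future_t = np.arange(len(history) + 1, len(history) + 4)
    print("line forecast: ", a_lin + b_lin * future_t)
    print("curve forecast:", a_exp * b_exp ** future_t)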
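The data-smoothing idea can be sketched just as simply. Again the numbers are invented; the point is only that a moving average and simple exponential smoothing both estimate the current level of demand and, absent a trend or seasonality model, use that level as the forecast for every future period.

    # Made-up demand history (illustration only)
    history = [112, 118, 121, 130, 128, 137, 141, 150, 149, 158, 163, 171]

    # Moving average: the mean of the most recent k observations is the forecast
    k = 3
    moving_average_forecast = sum(history[-k:]) / k

    # Simple exponential smoothing: each observation nudges the estimated level
    alpha = 0.2                      # smoothing weight; larger reacts faster
    level = history[0]
    for demand in history[1:]:
        level = alpha * demand + (1 - alpha) * level

    # With no trend or seasonality, the current level is the point forecast
    # for every future period
    exponential_smoothing_forecast = level
    print(moving_average_forecast, exponential_smoothing_forecast)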
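The scenario idea behind probabilistic forecasting can be sketched with a simple bootstrap: resample the observed history many times to build a fan of possible future demand sequences, then answer risk questions from the whole set of scenarios. This is a toy illustration with invented numbers, not a production algorithm.

    import random

    random.seed(1)
    history = [112, 118, 121, 130, 128, 137, 141, 150, 149, 158, 163, 171]

    horizon = 6                       # periods to simulate ahead
    n_scenarios = 10_000

    # Each scenario is one possible future demand sequence, built by
    # resampling the observed history with replacement
    scenarios = [[random.choice(history) for _ in range(horizon)]
                 for _ in range(n_scenarios)]

    # Example risk question: how much total demand over the horizon should
    # we be prepared to cover in 95% of the scenarios?
    totals = sorted(sum(s) for s in scenarios)
    print(totals[int(0.95 * n_scenarios)])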
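A minimal causal-forecasting sketch follows. The drivers (advertising spend and a competitor’s price) and all the numbers are invented; the point is only that regression estimates the equation’s coefficients and that forecasting then requires future values of the drivers.

    import numpy as np

    # Invented history: item sales plus two candidate drivers per period
    sales       = np.array([200, 215, 230, 210, 260, 275, 250, 290], float)
    advertising = np.array([10, 12, 14, 11, 18, 20, 16, 22], float)
    comp_price  = np.array([9.9, 9.9, 9.5, 10.2, 9.0, 8.8, 9.4, 8.5])

    # Design matrix with an intercept column; coefficients are estimated by
    # ordinary least squares ("regression analysis")
    X = np.column_stack([np.ones(len(sales)), advertising, comp_price])
    coef, *_ = np.linalg.lstsq(X, sales, rcond=None)

    # To forecast, you must supply future values of the drivers
    # ("predictions of the predictors")
    next_period_drivers = np.array([1.0, 24.0, 8.4])
    print(coef, next_period_drivers @ coef)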
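Finally, a minimal S-curve sketch for new-product forecasting: a logistic curve for cumulative lifetime sales, with the planner subjectively supplying the lifetime total, the period when sales reach 50% of it, and how steep the ramp-up is. The parameter values here are hypothetical, and period-by-period forecasts fall out as differences of the cumulative curve.

    import math

    # Subjective inputs for a hypothetical new product
    lifetime_total = 12_000    # final cumulative sales expected over the life cycle
    midpoint = 10              # period in which cumulative sales reach 50% of the total
    steepness = 0.5            # how quickly the ramp-up happens

    def cumulative_sales(t):
        """Logistic S-curve for cumulative sales through period t."""
        return lifetime_total / (1 + math.exp(-steepness * (t - midpoint)))

    # Period forecasts are differences of the cumulative curve; they can be
    # re-fit as actual sales accumulate
    forecasts = [cumulative_sales(t) - cumulative_sales(t - 1) for t in range(1, 25)]
    print([round(f) for f in forecasts])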

 

Supply Planning

Demand planning feeds into supply planning by predicting future sales (e.g., for finished goods) or usage (e.g., for spare parts). Then it is up to supply planning to make sure the items in question will be available to sell or to use.

  • Dependent demand
    • Dependent demand is demand that can be determined by its relationship to demand for another item. For instance, a bill of materials may show that a little red wagon consists of a body, a pull bar, four wheels, two axles, and various fasteners to keep the wheels on the axles and connect the pull bar to the body. So if you hope to sell 10 little red wagons, you’d better make 10, which means you need 10×2 = 20 axles, 10×4 = 40 wheels, etc. Dependent demand governs raw materials purchasing, component and subsystem purchasing, and even personnel hiring (10 wagons need one high-school kid to put them together over a one-hour shift). A minimal bill-of-materials sketch appears after this list.
    • If you have multiple products with partially overlapping bills of materials, you have a choice of two forecasting approaches. Suppose you sell not only little red wagons but also little blue baby carriages, and that both use the same axles. To predict the number of axles you need, you could (1) predict the dependent demand for axles from each product and add the forecasts, or (2) observe the total demand history for axles as its own time series and forecast that separately. Which works better is an empirical question that can be tested.
  • Inventory management
    • Inventory management entails many different tasks. These include setting inventory control parameters such as reorder points and order quantities, reacting to contingencies such as stockouts and order expediting, setting staffing levels, and selecting suppliers.
    • Forecasting plays a role in the first three. The number of replenishment orders placed in a year for each product determines how many people are needed to cut POs. The number and severity of stockouts in a year determines the number of contingencies that must be handled. The number of POs and stockouts in a year will be random but governed by the choice of inventory control parameters. The implications of any such choice can be modeled by inventory simulations, driven by the detailed demand scenarios generated by probabilistic forecasts (a minimal simulation sketch appears after this list).
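The dependent-demand arithmetic in the wagon example is a bill-of-materials explosion. Here is a minimal sketch; the wagon bill and its forecast of 10 units come from the example above, while the baby-carriage bill and its forecast are invented for illustration.

    # Components per unit of each finished product
    bom = {
        "little_red_wagon":   {"body": 1, "pull_bar": 1, "wheel": 4, "axle": 2},
        "blue_baby_carriage": {"frame": 1, "canopy": 1, "wheel": 4, "axle": 2},
    }

    # Independent-demand forecast for the finished products
    product_forecast = {"little_red_wagon": 10, "blue_baby_carriage": 25}

    # Explode the bills to get dependent demand for every component,
    # summing requirements for shared components such as the axle
    component_requirements = {}
    for product, quantity in product_forecast.items():
        for component, per_unit in bom[product].items():
            component_requirements[component] = (
                component_requirements.get(component, 0) + quantity * per_unit
            )

    print(component_requirements)   # axles: 10*2 + 25*2 = 70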
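And here is a minimal sketch of the kind of inventory simulation mentioned above: a reorder-point / order-quantity policy driven by bootstrapped demand scenarios, counting purchase orders and stockout days. Every parameter value is invented, and a production simulation would also model lead-time variability, backorders versus lost sales, and costs.

    import random

    random.seed(2)
    daily_demand_history = [4, 0, 7, 3, 5, 0, 6, 2, 8, 1, 3, 5]   # invented

    reorder_point, order_qty, lead_time = 15, 30, 5               # candidate parameters
    horizon_days, n_scenarios = 250, 1_000

    purchase_orders = stockout_days = 0
    for _ in range(n_scenarios):
        on_hand, open_orders = 30, []            # starting stock and outstanding POs
        for _ in range(horizon_days):
            open_orders = [days - 1 for days in open_orders]
            on_hand += order_qty * open_orders.count(0)    # receive arriving orders
            open_orders = [days for days in open_orders if days > 0]
            demand = random.choice(daily_demand_history)   # bootstrap one day of demand
            if demand > on_hand:
                stockout_days += 1
            on_hand = max(on_hand - demand, 0)
            position = on_hand + order_qty * len(open_orders)
            if position <= reorder_point:
                open_orders.append(lead_time)              # cut a replenishment PO
                purchase_orders += 1

    print("POs per year:", purchase_orders / n_scenarios)
    print("stockout days per year:", stockout_days / n_scenarios)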

Six Demand Planning Best Practices You Should Think Twice About

Every field, including forecasting, accumulates folk wisdom that eventually starts masquerading as “best practices.” These precepts are often wise, at least in part, but they can lack context and may not be appropriate for certain customers, industries, or business situations. There is often a catch, a “Yes, but”. This note is about six usually true forecasting precepts that nevertheless do have their caveats.

 

  1. Organize your company around a one-number forecast. This sounds sensible: it’s good to have a shared vision. But each part of the company will have its own idea about which number is the number. Finance may want quarterly revenue, Marketing may want web site visits, Sales may want churn, Maintenance may want mean time to failure. For that matter, each unit probably has a handful of key metrics. You don’t need a slogan – you need to get your job done.

 

  2. Incorporate business knowledge into a collaborative forecasting process. This is a good general rule, but if your collaborative process is flawed, messing with a statistical forecast via management overrides can decrease accuracy. You don’t need a slogan – you need to measure and compare the accuracy of any and all methods and go with the winners.

 

  3. Forecast using causal modeling. Extrapolative forecasting methods take no account of the underlying forces driving your sales; they just work with the results. Causal modeling digs into the fundamental drivers and can improve both accuracy and insight. However, causal models (implemented through regression analysis) can be less accurate, especially when they require forecasts of the drivers (“predictions of the predictors”) rather than simply plugging in recorded values of lagged predictor variables. You don’t need a slogan: you need a head-to-head comparison.

 

  4. Forecast demand instead of shipments. Demand is what you really want, but “composing a demand signal” can be tricky: what do you do with internal transfers? One-offs? Lost sales? Furthermore, demand data can be manipulated. For example, if customers intentionally hold back orders or game them by ordering too far in advance, then order history won’t be better than shipment history. At least shipment history is accurate: you know what you shipped. Forecasts of shipments are not forecasts of “demand”, but they are a solid starting point.

 

  5. Use Machine Learning methods. First, “machine learning” is an elastic concept that includes an ever-growing set of alternatives. Under the hood, many advertised ML models are simply automatic best-fit selection among extrapolative forecasting methods, which, while good at forecasting normal demand, has been around since the 1980’s (Smart Software was the first company to release an auto-pick method for the PC). ML models are also data hogs that require larger data sets than you may have available. Finally, properly choosing and then training an ML model requires a level of statistical expertise that is uncommon in many manufacturing and distribution businesses. You might want to find somebody to hold your hand before you start playing this game.

 

  6. Removing outliers creates better forecasts. While it is true that very unusual spikes or drops in demand can mask underlying patterns such as trend or seasonality, it isn’t always true that you should remove the spikes. Often these demand surges reflect real variability that can randomly interfere with your business and therefore needs to be accounted for. Removing this type of data from your demand model might make the history look more predictable on paper but will leave you surprised when the surge happens again. So be careful about removing outliers, especially en masse (see the small example below).
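A small, invented illustration of that caveat: if the surge in the history is real, deleting it makes the history look tamer but understates the demand level you may actually need to cover when the surge recurs.

    import math

    history = [20, 22, 19, 21, 95, 20, 23, 18, 22, 21, 24, 20]   # one real surge of 95
    cleaned = [d for d in history if d < 60]                     # the "outlier" removed

    def demand_covering(fraction, data):
        """Smallest demand level that covers at least `fraction` of observed periods."""
        data = sorted(data)
        return data[math.ceil(fraction * len(data)) - 1]

    # The cleaned history looks more predictable, but plans sized from it
    # will be caught short the next time the surge happens
    print(demand_covering(0.95, history), demand_covering(0.95, cleaned))   # 95 vs 24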

Elephants and Kangaroos: ERP vs. Best-of-Breed Demand Planning

“Despite what you’ve seen in your Saturday morning cartoons, elephants can’t jump, and there’s one simple reason: They don’t have to. Most jumpy animals—your kangaroos, monkeys, and frogs—do it primarily to get away from predators.”  — Patrick Monahan, Science.org, Jan 27, 2016.

Now you know why the largest ERP companies can’t develop high-quality, best-of-breed solutions. They never had to, so they never evolved to innovate outside their core focus.

However, as ERP systems became commoditized, gaps in their functionality became impossible to ignore. The larger players sought to protect their share of customer wallet by promising to develop innovative add-on applications to fill all the white spaces. But without that “innovation muscle,” many projects failed, and mountains of technical debt accumulated.

Best-of-breed companies evolved to innovate and have deep functional expertise in specific verticals. The result is that best-of-breed ERP add-ons are easier to use, have more features, and deliver more value than the native ERP modules they replace.

If your ERP provider has already partnered with an innovative best-of-breed add-on provider*, you’re all set. But if you can only get the basics from your ERP, go with a best-of-breed add-on that has a bespoke integration to the ERP.

A great place to start your search is to look for ERP demand planning add-ons that add brains to the ERP’s brawn, i.e., those that support inventory optimization and demand forecasting.  Leverage add-on tools like Smart’s statistical forecasting, demand planning, and inventory optimization apps to develop forecasts and stocking policies that are fed back to the ERP system to drive daily ordering. 

*App stores give best-of-breed vendors a license to sell into the ERP company’s customer base; being listed is not the same as a true partnership.

Is your demand planning and forecasting process a black box?

There’s one thing I’m reminded of almost every day at Smart Software that puzzles me: most companies do not understand how their forecasts are created and how their stocking policies are determined. It’s an organizational black box. Here is an example from a recent sales call:

How do you forecast?
We use history.

How do you use history?
What do you mean?

Well, you can take an average of the last year, last two years, average the most recent periods, or use some other type of formula to generate the forecast.
I’m pretty sure we use an average of the last 12 months.

Why 12 months instead of a different amount of history?
12 months is a good amount of time to use because it doesn’t get skewed by older data, but it’s still recent enough.

How do you know it’s more accurate than using 18 months or some other length of history?
We don’t know. We do adjust the forecasts based on feedback from sales.  

Do you know if the adjustments make things more accurate or less than if you just used the average?
We don’t know, but we are confident that the forecasts are inflated.

What do the inventory buyers do then if they think the numbers are inflated?
They have lots of business knowledge and adjust their buys accordingly.

So, is it fair to say they would ignore the forecasts at least some of the time?
Yes, some of the time.

How do the buyers decide when to order more? Do you have a reorder point or safety stock specified in your ERP system that helps guide these decisions?
Yes, we use a safety stock field.

How is safety stock calculated?
Buyers determine this based on the importance of the item, lead times, and other considerations such as how many customers purchase the item, the velocity of the item, and its cost. They’ll carry different amounts of safety stock depending on this.

The discussion continued. The main takeaway is that when you scratch just below the surface, far more questions are revealed than answers. This often means that the inventory planning and demand forecasting process is highly subjective, varies from planner to planner, is not well understood by the rest of the organization, and is likely to be reactive. As Tom Willemain has described it, it’s “chaos masked by improvisation.” The “as-is” process needs to be fully identified and documented. Only then can gaps be exposed and improvements made. Here is a list of 10 questions you can ask that will reveal your organization’s true forecasting, demand planning, and inventory planning process.
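The “how do you know 12 months is better than 18?” question in the dialogue above has an objective answer: hold out recent history, forecast it with each candidate method, and compare the errors. Here is a minimal sketch with invented numbers; a real comparison would use many items and a rolling backtest.

    # Invented monthly demand history
    history = [130, 125, 140, 150, 145, 160, 155, 170, 165, 180, 175, 190,
               185, 200, 195, 210, 205, 220, 215, 230, 225, 240, 235, 250]

    holdout = history[-6:]      # pretend the most recent 6 months are unknown
    train = history[:-6]

    def average_forecast(months):
        """Forecast = simple average of the last `months` of training history."""
        return sum(train[-months:]) / months

    for months in (12, 18):
        forecast = average_forecast(months)
        mae = sum(abs(actual - forecast) for actual in holdout) / len(holdout)
        print(f"{months}-month average: forecast {forecast:.1f}, "
              f"mean absolute error {mae:.1f}")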

The Role of Trust in the Demand Forecasting Process, Part 2: What Do You Trust?

“Regardless of how much effort is poured into training forecasters and developing elaborate forecast support systems, decision-makers will either modify or discard the predictions if they do not trust them.”  — Dilek Onkal, International Journal of Forecasting 38:3 (July-September 2022), p.802.

The words quoted above grabbed my attention and prompted this post. Those of a geekly persuasion, like your blogger, are inclined to think of forecasting as a statistical problem. While that is obviously true, those of a certain age, like your blogger, understand that forecasting is also a social activity and therefore has a large human component.

What Do You Trust?

There is a related dimension of trust: not who do you trust but what do you trust? By this, I mean both data and software.

Trust in Data

Trust in data underpins trust in the forecaster using the data. Most of our customers have their data in an ERP system. This data must be understood as a key corporate asset. For the data to be trustworthy, it must have the “three C’s”, i.e., it must be correct, complete, and current.

Correctness is obviously fundamental. We once had a customer who was implementing a new, strong forecasting process, but found the results completely at odds with their sense of what was happening in the business. It turned out that several of their data streams were incorrect by a factor of two, which is a huge error. Of course, this set back the implementation process until they could identify and correct all the gross errors in their demand data.

There is a less obvious point to be made about correctness. That is, data are random, so what you see now is not likely to be what you see next. Planning production based on the assumption that next week’s demand will be exactly the same as this week’s demand is clearly foolish, but classical formula-based forecasting models like the exponential smoothing mentioned above will project the same number throughout the forecast horizon. This is where scenario-based planning is essential for coping with the inevitable fluctuations in key variables such as customers’ demands and suppliers’ replenishment lead times.

Completeness is the second requirement for data to be trusted. Our software ultimately gets much of its value from exposing the links between operational decisions (e.g., selecting the reorder points governing replenishment of stock) and business-related metrics like inventory costs. Yet implementation of forecasting software is often delayed because item demand information is available someplace, but holding, ordering and/or shortage costs are not. Or, to cite another recent example, a customer was able to properly size only half its inventory of reparable spare parts because nobody had been tracking when the other half broke down; with no information on mean time before failure (MTBF), it was not possible to model the breakdown behavior of that half of the fleet.

Finally, the currency of data matters. As the speed of business increases and company planning cycles drop from a quarterly or monthly tempo to a weekly or daily tempo, it becomes desirable to exploit the agility provided by overnight uploads of daily transactional data into the cloud. This allows high-frequency adjustments of forecasts and/or inventory control parameters for items that experience high volatility and sudden shifts in demand. The fresher the data, the more trustworthy the analysis.

Trust in Demand Forecasting Software

Even with high-quality data, forecasters must still trust the analytical software that processes the data. This trust must extend to both the software itself and to the computational environment in which it functions.

If forecasters use on-premises software, they must rely on their own IT departments to safeguard the data and keep it available for use. If they wish instead to exploit the power of cloud-based analytics, they must entrust their confidential information to their software vendors. Professional-level software, such as ours, justifies customers’ trust through SOC 2 certification. SOC 2 certification was developed by the American Institute of CPAs and defines criteria for managing customer data based on five “trust service principles”: security, availability, processing integrity, confidentiality, and privacy.

What about the software itself? What is needed to make it trustworthy? The main criteria here are the correctness of algorithms and functional reliability. If the vendor has a professional program development process, there will be little chance that the software ends up computing the wrong numbers because of a programming error. And if the vendor has a rigorous quality assurance process, there will be little chance that the software will crash just when the forecaster is on deadline or must deal with a pop-up analysis for a special situation.

Summary

To be useful, forecasters and their forecasts must be trusted by decision-makers. That trust depends on characteristics of forecasters and their processes and communication. It also depends on the quality of the data and software used in creating the forecasts.

 

Read the first part of this blog, “Who Do You Trust,” here: https://smartcorp.com/forecasting/the-role-of-trust-in-the-demand-forecasting-process-part-1-who/