Weathering a Demand Forecast

For some of our customers, weather has a significant influence on demand. Extreme short-term weather events, such as fires, droughts, and hot spells, can shift demand sharply in the near term.

There are two ways to factor weather into a demand forecast: indirectly and directly. The indirect route is easier and uses the scenario-based approach of Smart Demand Planner. The direct route is a tailored special project that requires additional data and hand-crafted modeling.

Indirect Accounting for Weather

The standard model built into Smart Demand Planner (SDP) accommodates weather effects in four ways:

  1. If the world is steadily getting warmer/colder/drier/wetter in ways that impact your sales, SDP detects these trends automatically and incorporates them into the demand scenarios it generates.
  2. If your business has a regular rhythm in which certain days of the week or certain months of the year have consistently higher or lower-than-average demand, SDP also automatically detects this seasonality and incorporates it into its demand scenarios.
  3. Often it is the cussed randomness of weather that interferes with forecast accuracy. We often refer to this effect as “noise”. Noise is a catch-all term that incorporates all kinds of random trouble. Besides weather, a geopolitical flareup, the surprise failure of a regional bank, or a ship getting stuck in the Suez Canal can and have added surprises to product demand. SDP assesses the volatility of demand and reproduces it in its demand scenarios.
  4. Management overrides. Most of the time, customers let SDP churn away to automatically generate tens of thousands of demand scenarios. But if users feel the need to touch up specific forecasts using their insider knowledge, SDP can make that happen through management overrides.

Direct Accounting for Weather

Sometimes a user will be able to articulate subject matter expertise linking factors outside their company (such as interest rates or raw materials costs or technology trends) to their own aggregate sales. In these situations, Smart Software can arrange for one-off special projects that provide alternative (“causal”) models to supplement our standard statistical forecasting models. Contact your Smart Software representative to discuss a possible causal modeling project.

Meanwhile, don’t forget your umbrella.


A Rough Map of Forecasting-Related Terms

People new to the jobs of “demand planner” or “supply planner” are likely to have questions about the various forecasting terms and methods used in their jobs. This note may help by explaining these terms and showing how they relate.


Demand Planning

Demand planning is about how much of what you have to sell will go out the door in the future, e.g., how many what-nots you will sell next quarter. Here are five methodologies often used in demand planning.

  • Statistical Forecasting
    • These methods use demand history to forecast future values. The two most common methods are curve fitting and data smoothing.
    • Curve fitting matches a simple mathematical function, like the equation for a straight line (y = a + b·t) or an interest-rate-type growth curve (y = a·b^t), to the demand history. Then it extends that line or curve forward in time as the forecast.
    • In contrast, data smoothing does not result in an equation. Instead it sweeps through the demand history, averaging values along the way, to create a smoother version of the history. These methods are called exponential smoothing and moving average. In the simplest case (i.e., in the absence of trend or seasonality, for which variants exist), the goal is to estimate the current average level of demand and use that as the forecast. (Both approaches are sketched in code after this list.)
    • These methods produce “point forecasts”, which are single-number estimates for each future time period (e.g., “Sales in March will be 218 units”). Sometimes they come with estimates of potential forecast error bolted on using separate models of demand variability (“Sales in March will be 218 ± 120 units”).
  • Probabilistic Forecasting
    • This approach keys on the randomness of demand and works hard to estimate forecast uncertainty. It regards forecasting less as an exercise in cranking out specific numbers and more as an exercise in risk management.
    • It explicitly models the variability in demand and uses that to present results in the form of large numbers of scenarios constructed to show the full range of possible demand sequences. These are especially useful in tactical supply planning tasks, such as setting reorder points and order quantities. (A scenario-generation sketch also appears after this list.)
  • Causal Forecasting
    • Statistical forecasting models use as inputs only the past demand history of the item in question. They regard the up-and-down wiggles in the demand plot as the end result of myriad unnamed factors (interest rates, the price of tea in China, phases of the moon, whatever). Causal forecasting explicitly identifies one or more influences (interest rates, advertising spend, competitors’ prices, …) that could plausibly influence sales. Then it builds an equation relating the numerical values of these “drivers” or “causal factors” to item sales. The equation’s coefficients are estimated by “regression analysis”.
  • Judgemental Forecasting
    • Golden Gut. Despite the general availability of gobs of data, some companies pay little attention to the numbers and give greater weight to the subjective judgements of an executive deemed to have a “Golden Gut”, which allows him or her to use “gut feel” to predict what future demand will be. If that person has great experience, has spent a career actually looking at the numbers, and is not prone to wishful thinking or other forms of cognitive bias, the Golden Gut can be a cheap, fast way to plan. But there is good evidence from studies of companies run this way that relying on the Golden Gut is risky.
    • Group Consensus. More common is a process that uses a periodic meeting to create a group consensus forecast. The group will have access to shared objective data and forecasts, but members will also have knowledge of factors that may not be measured well or at all, such as consumer sentiment or the stories relayed by sales reps. It is helpful to have a shared, objective starting point for these discussions consisting of some sort of objective statistical analysis. Then the group can consider adjusting the statistical forecast. This process anchors the forecast in objective reality but exploits all the other information available outside the forecasting database.
    • Scenario Generation. Sometimes several people will meet and discuss “strategic what-if” questions. “What if we lose our Australian customers?” “What if our new product roll-out is delayed by six months?” “What if our sales manager for the mid-west jumps to a competitor?” These bigger-picture questions can have implications for item-specific forecasts and might be added to any group-consensus forecasting meeting.
  • New product forecasting
    • New products, by definition, have no sales history to support statistical, probabilistic, or causal forecasting. Subjective forecasting methods can always be used here, but these often rely on a dangerous ratio of hopes to facts. Fortunately, there is at least partial support for objective forecasting in the form of curve fitting.
    • A graph of the cumulative sales of an item often describes some sort of “S-curve”, i.e., a graph that starts at zero, builds up, then levels off at a final lifetime sales total. The curve gets its name because it looks like a letter S somehow smeared and stretched to the right. There are infinitely many S-curves, so forecasters typically pick an equation and subjectively specify some key parameter values, such as when sales will hit 25%, 50% and 75% of total lifetime sales and what that final level will be. This, too, is overtly subjective, but it produces detailed period-by-period forecasts that can be updated as experience builds up. Finally, S-curves are sometimes shaped to match the known history of a similar, predecessor product (“Sales for our last gizmo looked like this, so let’s use that as a template.”). (An S-curve sketch closes out the code examples after this list.)
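
To make the curve fitting vs. data smoothing contrast concrete, here is a minimal Python sketch of both: a straight-line fit extended forward, and simple exponential smoothing whose final level becomes the forecast. The demand numbers and the smoothing weight (alpha) are illustrative assumptions, not output from any real system.

```python
# Illustrative sketch: curve fitting vs. data smoothing (made-up data).
import numpy as np

demand = np.array([112, 118, 121, 119, 130, 127, 135, 141, 138, 146], float)
t = np.arange(len(demand))

# Curve fitting: fit a straight line y = a + b*t, then extend it forward.
b, a = np.polyfit(t, demand, 1)                       # slope, intercept
line_forecast = a + b * (len(demand) + np.arange(3))  # next 3 periods

# Data smoothing: simple exponential smoothing sweeps through the history,
# updating a running level; alpha weights recent data more heavily.
alpha, level = 0.3, demand[0]
for y in demand[1:]:
    level = alpha * y + (1 - alpha) * level
smooth_forecast = [round(level, 1)] * 3               # flat at current level

print(line_forecast.round(1), smooth_forecast)
```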
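
And here is a minimal sketch of the probabilistic idea: instead of one number per period, generate many demand scenarios (here by naive bootstrap resampling of the same made-up history) and read risk estimates off their distribution. A production scenario engine such as SDP's is far more sophisticated; this only illustrates the concept.

```python
# Illustrative probabilistic forecasting via bootstrap scenario generation.
import numpy as np

rng = np.random.default_rng(42)
demand = np.array([112, 118, 121, 119, 130, 127, 135, 141, 138, 146])

n_scenarios, horizon = 10_000, 6
scenarios = rng.choice(demand, size=(n_scenarios, horizon))  # resample history

# Each row is one possible 6-period future; summarize the risk, not a point.
totals = scenarios.sum(axis=1)
print("median 6-period demand:", np.median(totals))
print("95th percentile (useful for safety stock):", np.percentile(totals, 95))
```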
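
Finally, a sketch of the new-product S-curve idea using a logistic curve. The lifetime total L, midpoint t0, and steepness k stand in for the subjectively specified parameters mentioned above; all values are hypothetical.

```python
# Illustrative S-curve (logistic) forecast for a new product.
import numpy as np

def s_curve(t, L=5000.0, t0=12.0, k=0.4):
    """Cumulative lifetime sales: starts near 0, levels off at L."""
    return L / (1.0 + np.exp(-k * (t - t0)))

t = np.arange(25)                              # months since launch
cumulative = s_curve(t)
per_period = np.diff(cumulative, prepend=0.0)  # period-by-period forecast
print(per_period.round(1))
```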


Supply Planning

Demand planning feeds into supply planning by predicting future sales (e.g., for finished goods) or usage (e.g., for spare parts). Then it is up to supply planning to make sure the items in question will be available to sell or to use.

  • Dependent demand
    • Dependent demand is demand that can be determined by its relationship to demand for another item. For instance, a bill of materials may show that a little red wagon consists of a body, a pull bar, four wheels, two axles, and various fasteners to keep the wheels on the axles and connect the pull bar to the body. So if you hope to sell 10 little red wagons, you’d better make 10, which means you need 10×2 = 20 axles, 10×4 = 40 wheels, etc. Dependent demand governs raw materials purchasing, component and subsystem purchasing, even personnel hiring (10 wagons need one high-school kid to put them together over a 1-hour shift).
    • If you have multiple products with partially overlapping bills of materials, you have a choice of two forecasting approaches. Suppose you sell not only little red wagons but little blue baby carriages, and that both use the same axles. To predict the number of axles you need, you could (1) predict the dependent demand for axles from each product and add the forecasts, or (2) observe the total demand history for axles as its own time series and forecast that separately. Which works better is an empirical question that can be tested. (A BOM-explosion sketch follows this list.)
  • Inventory management
    • Inventory management entails many different tasks. These include setting inventory control parameters such as reorder points and order quantities, reacting to contingencies such as stockouts and order expediting, setting staffing levels, and selecting suppliers.
    • Forecasting plays a role in the first three. The number of replenishment orders that will be made in a year for each product determines how many people are needed to cut POs. The number and severity of stockouts in a year determines the number of contingencies that must be handled. The number of POs and stockouts in a year will be random but governed by the choices of inventory control parameters. The implications of any such choices can be modeled by inventory simulations, which are driven by detailed demand scenarios generated by probabilistic forecasts. (A small simulation sketch also follows this list.)
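
Here is a minimal sketch of the BOM explosion described above, extended to the overlapping wagon/carriage example. The carriage BOM (and its "canopy" part) is hypothetical.

```python
# Illustrative BOM explosion: finished-goods forecasts -> component needs.
wagon_bom = {"wagon body": 1, "pull bar": 1, "wheel": 4, "axle": 2}
carriage_bom = {"carriage body": 1, "wheel": 4, "axle": 2, "canopy": 1}

def explode(plans):
    """Sum component requirements across products with overlapping BOMs."""
    needs = {}
    for units, bom in plans:
        for part, qty in bom.items():
            needs[part] = needs.get(part, 0) + units * qty
    return needs

# 10 wagons and 5 carriages -> shared axle requirement of 10*2 + 5*2 = 30.
print(explode([(10, wagon_bom), (5, carriage_bom)]))
```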
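
And a minimal sketch of the kind of inventory simulation mentioned in the last bullet: resample demand history into lead-time scenarios and estimate the stockout risk implied by a candidate reorder point. The Poisson stand-in history, the 7-day lead time, and the candidate reorder points are all illustrative assumptions.

```python
# Illustrative inventory simulation driven by bootstrapped demand scenarios.
import numpy as np

rng = np.random.default_rng(0)
daily_demand_history = rng.poisson(5, 250)  # stand-in for real daily history

def stockout_rate(reorder_point, lead_time_days=7, n_trials=10_000):
    """Fraction of cycles where lead-time demand exceeds the reorder point."""
    scenarios = rng.choice(daily_demand_history,
                           size=(n_trials, lead_time_days)).sum(axis=1)
    return float((scenarios > reorder_point).mean())

for rop in (35, 40, 45, 50):
    print(f"ROP {rop}: stockout risk {stockout_rate(rop):.1%}")
```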


Six Demand Planning Best Practices You Should Think Twice About

Every field, including forecasting, accumulates folk wisdom that eventually starts masquerading as “best practices.”  These best practices are often wise, at least in part, but they often lack context and may not be appropriate for certain customers, industries, or business situations.  There is often a catch, a “Yes, but”. This note is about six usually true forecasting precepts that nevertheless do have their caveats.


  1. Organize your company around a one-number forecast. This sounds sensible: it’s good to have a shared vision. But each part of the company will have its own idea about which number is the number. Finance may want quarterly revenue, Marketing may want web site visits, Sales may want churn, Maintenance may want mean time to failure. For that matter, each unit probably has a handful of key metrics. You don’t need a slogan – you need to get your job done.


  2. Incorporate business knowledge into a collaborative forecasting process. This is a good general rule, but if your collaborative process is flawed, messing with a statistical forecast via management overrides can decrease accuracy. You don’t need a slogan – you need to measure and compare the accuracy of any and all methods and go with the winners, as in the sketch below.
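
As a minimal sketch of that measure-and-compare advice: score the statistical forecast and the overridden forecast against what actually happened, and keep whichever process wins. All numbers here are made up for illustration.

```python
# Illustrative scoring of statistical vs. overridden forecasts.
actuals  = [100, 95, 110, 105, 120]
stat_fc  = [ 98, 99, 104, 108, 115]
override = [110, 90, 120, 100, 130]  # after management adjustments

def mae(forecast, actual):
    """Mean absolute error: lower is better."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

print("statistical MAE:", mae(stat_fc, actuals))   # 4.0
print("override MAE:   ", mae(override, actuals))  # 8.0 -> overrides hurt here
```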


  3. Forecast using causal modeling. Extrapolative forecasting methods take no account of the underlying forces driving your sales; they just work with the results. Causal modeling takes you deeper into the fundamental drivers and can improve both accuracy and insight. However, causal models (implemented through regression analysis) can be less accurate, especially when they require forecasts of the drivers (“predictions of the predictors”) rather than simply plugging in recorded values of lagged predictor variables. You don’t need a slogan: you need a head-to-head comparison.


  4. Forecast demand instead of shipments. Demand is what you really want, but “composing a demand signal” can be tricky: what do you do with internal transfers? One-offs? Lost sales? Furthermore, demand data can be manipulated. For example, if customers intentionally don’t place orders or try to game their orders by ordering too far in advance, then order history won’t be better than shipment history. At least shipment history is accurate: you know what you shipped. Forecasts of shipments are not forecasts of “demand”, but they are a solid starting point.


  5. Use Machine Learning methods. First, “machine learning” is an elastic concept that includes an ever-growing set of alternatives. Under the hood of many advertised ML models is just an auto-pick extrapolative forecasting method (i.e., best fit), which, while great at forecasting normal demand, has been around since the 1980s (Smart Software was the first company to release an auto-pick method for the PC). ML models are data hogs that require larger data sets than you may have available. Properly choosing, then training, an ML model requires a level of statistical expertise that is uncommon in many manufacturing and distribution businesses. You might want to find somebody to hold your hand before you start playing this game.


  6. Removing outliers creates better forecasts. While it is true that very unusual spikes or drops in demand will mask underlying demand patterns such as trend or seasonality, it isn’t always true that you should remove the spikes. Often these demand surges reflect the uncertainty that can randomly interfere with your business and thus need to be accounted for. Removing this type of data from your demand forecast model might make the data more predictable on paper but will leave you surprised when it happens again. So, be careful about removing outliers, especially en masse.


Correlation vs Causation: Is This Relevant to Your Job?

Outside of work, you may have heard the famous dictum “Correlation is not causation.” It may sound like a piece of theoretical fluff that, though involved in a recent Nobel Prize in economics, isn’t relevant to your work as a demand planner. If so, you may be only partially correct.

Extrapolative vs Causal Models

Most demand forecasting uses extrapolative models. Also called time-series models, these forecast demand using only the past values of an item’s demand. Plots of past values reveal trend, seasonality, and volatility, so there is a lot they are good for. But there is another type of model, the causal model, that can potentially improve forecast accuracy beyond what you can get from extrapolative models.

Causal models bring more input data to the forecasting task: information on presumed forecast “drivers” external to the demand history of an item. Examples of potentially useful causal factors include macroeconomic variables like the inflation rate, the rate of GDP growth, and raw material prices. Examples not tied to the national economy include industry-specific growth rates and your own and competitors’ ad spending.  These variables are usually used as inputs to regression models, which are equations with demand as an output and causal variables as inputs.

Forecasting Using Causal Models

Many firms have an S&OP process that involves a monthly review of statistical (extrapolative) forecasts in which management adjusts forecasts based on their judgement. Often this is an indirect and subjective way to work causal models into the process without doing the regression modeling.

To actually make a causal regression model, first you have to nominate a list of potentially useful causal predictor variables. These may come from your subject matter expertise. For example, suppose you manufacture window glass. Much of your glass may end up in new homes and new office buildings, so the numbers of new homes and offices being built are plausible predictor variables in a regression equation.

There is a complication here: if you are using the equation to predict something, you must first predict the predictors. For example, sales of glass next quarter may be strongly related to numbers of new homes and new office buildings next quarter. But how many new homes will there be next quarter? That’s its own forecasting problem. So, you have a potentially powerful forecasting model, but you have extra work to do to make it usable.

There is one way to simplify things: use “lagged” versions of the predictor variables, whose values come from periods already on record. For example, the number of new building permits issued six months ago may be a good predictor of glass sales next month. You don’t have to predict the building permit data – you just have to look it up.
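
Here is a minimal sketch of such a lagged-predictor regression: hypothetical permit and sales data, a six-month lag, and a simple linear fit that forecasts next month's glass sales from permits already on record.

```python
# Illustrative lagged-predictor regression: permits issued 6 months ago
# predict glass sales now. All data are hypothetical.
import numpy as np

permits = np.array([200, 220, 210, 250, 240, 260, 255, 270, 265,
                    280, 275, 290, 285, 300, 295, 310, 305, 320])
sales = np.array([640, 655, 610, 700, 690, 720, 615, 660, 640,
                  755, 730, 775, 760, 805, 790, 840, 820, 865])

lag = 6
X = permits[:-lag]  # permits, lagged six months
y = sales[lag:]     # the sales they are paired with

slope, intercept = np.polyfit(X, y, 1)  # simple linear regression

# Forecast next month's sales from a permit count already on record:
# no "predicting the predictors" required.
print(round(slope * permits[-lag] + intercept, 1))
```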

Is it a causal relationship or just a spurious correlation?

Causal models are the real deal: there is an actual mechanism that relates the predictor variable to the predicted variable. Predicting glass sales from building permits is one example.

A correlational relationship is more iffy: there is a statistical association that may or may not provide a solid basis for forecasting. For example, suppose you sell a product that happens to appeal most strongly to Dutch people, but you don’t realize this. The Dutch are, on average, the tallest people in Europe. If your sales are increasing while the average height of Europeans is increasing, you might use that relationship to good effect. But suppose the proportion of Dutch in the Euro zone is decreasing while the average height is increasing because the mix of men versus women is shifting toward men. You will expect sales to increase because average height is increasing, yet your sales are mostly to the Dutch, whose share of the population is shrinking, so your sales are really going to decrease instead. In this case the association between sales and customer height is a spurious correlation.

How can you tell the difference between true and spurious relationships? The gold standard is to do a rigorous scientific experiment, but you are not likely to be in a position to do that. Instead, you have to rely on your personal “mental model” of how your market works. If your hunches are right, then your potential causal variables will correlate with demand, and causal modeling will pay off for you, either to supplement extrapolative models or to replace them.


Fifteen Questions That Reveal How Forecasts Are Computed in Your Company

In a recent LinkedIn post, I detailed four questions that, when answered, will reveal how forecasts are being used in your business. In this article, we’ve listed fifteen questions you can ask that will reveal how forecasts are created.

1. When we ask users how they create forecasts, their answer will often be “we use history.” This obviously isn’t enough information, as there are different types of demand history that require different forecasting methods. If you are using historical data, then make sure to find out if you are using an averaging model, a trending model, a seasonal model, or something else to forecast.

2. Once you know the model used, ask about the parameter values of those models. The forecast output of an “average” will differ, sometimes significantly, depending on the number of periods you are averaging.  So, find out whether you are using an average of the last 3 months, 6 months, 12 months, etc.
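
A minimal sketch of how much the window matters; the history is illustrative and trends upward, so longer windows lag further behind.

```python
# Illustrative moving averages: same model, different windows.
history = [80, 85, 90, 100, 110, 120, 130, 135, 140, 150, 155, 160]

for window in (3, 6, 12):
    forecast = sum(history[-window:]) / window
    print(f"{window}-month moving average forecast: {forecast:.1f}")
# 3 months -> 155.0, 6 months -> 145.0, 12 months -> 121.2
```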

3. If you are using trending models, ask how the model weights are set. For example, in a trending model, such as double exponential smoothing, the forecasts will differ significantly depending on how the calculations weight recent data compared to older data (higher weights put more emphasis on the recent data).
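
A minimal sketch of double exponential smoothing (Holt's method), showing how different weights on the level (alpha) and trend (beta) change the forecast from the same history. All values are illustrative.

```python
# Illustrative double exponential smoothing (Holt's method).
def holt_forecast(history, alpha, beta, horizon=3):
    """Smooth a level and a trend, then project them forward."""
    level, trend = history[0], history[1] - history[0]
    for y in history[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [round(level + (h + 1) * trend, 1) for h in range(horizon)]

history = [80, 85, 90, 100, 110, 120, 130, 135, 140, 150]
print(holt_forecast(history, alpha=0.4, beta=0.2))
print(holt_forecast(history, alpha=0.8, beta=0.5))  # heavier recent weighting
```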

4. If you are using seasonal models, the forecast results are going to be impacted by the “level” and “trending weights” used. You should also determine whether seasonal periods are forecasted with multiplicative or additive seasonality.  (Additive seasonality says, e.g., “Add 100 units for July”, whereas multiplicative seasonality says “Multiply by 1.25 for July.”) Finally, you may not be using these types of methods at all.  Some practitioners will use a forecast method that simply averages prior periods (i.e., next June will be forecasted based on the average of the prior three Junes).
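
A minimal sketch of the additive vs. multiplicative distinction, using the example factors quoted above; the baselines are illustrative.

```python
# Illustrative additive vs. multiplicative July seasonal adjustments.
for baseline in (400, 800):             # deseasonalized July forecasts
    additive = baseline + 100           # "add 100 units for July"
    multiplicative = baseline * 1.25    # "multiply by 1.25 for July"
    print(baseline, additive, multiplicative)
# At baseline 400 the two agree (500); at 800 they diverge (900 vs. 1000).
```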

5. How do you go about choosing one model over another? Does the choice of technique depend on the type of demand data or when new demand data are available? Is this process automated? Or if a planner chooses a trend model subjectively, will that item continue to be forecasted with that model until the planner changes it again?

6. Are your forecasts “fully automatic,” so that trend and/or seasonality are detected automatically? Or are your forecasts dependent on item classifications that must be maintained by users? The latter requires more time and attention from planners to define what behavior constitutes trend, seasonality, etc.

7. What are the item classification rules used? For example, an item may be considered a trending item if demand increases by more than 5% period-over-period. An item may be considered seasonal if 70% or more of the annual demand occurs in four or fewer periods. Such rules are user-defined and often require overly broad assumptions. Sometimes they are configured when a system was originally implemented but never revised even as conditions change. It’s important to make sure any classification rules are understood and, if necessary, updated.
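
A minimal sketch of such classification rules, coded with the example thresholds quoted above. Note how crude rules can misfire: a strongly seasonal series can trip the trend rule too, which is exactly why these rules need periodic review.

```python
# Illustrative item-classification rules using the thresholds in the text.
def classify(monthly_demand):
    """Label an item by simple, user-defined rules."""
    labels = []
    # "Trending": average period-over-period growth above 5%.
    growth = [(b - a) / a for a, b in zip(monthly_demand, monthly_demand[1:])]
    if sum(growth) / len(growth) > 0.05:
        labels.append("trending")
    # "Seasonal": the top 4 months hold 70%+ of annual demand.
    if sum(sorted(monthly_demand)[-4:]) / sum(monthly_demand) >= 0.70:
        labels.append("seasonal")
    return labels or ["level"]

print(classify([10, 11, 12, 14, 15, 17, 18, 20, 22, 24, 26, 29]))
# A summer-peaked item trips BOTH rules -- a sign the trend rule is too crude:
print(classify([2, 2, 3, 2, 40, 45, 38, 35, 3, 2, 2, 3]))
```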

8. Does the forecast regenerate automatically when new data are available, or do you have to manually regenerate the forecasts?

9. Do you check for any change in forecast from one period to the next before deciding whether to use the new forecast? Or do you default to the new forecast?

10. How are forecast overrides that were made in prior planning cycles treated when a new forecast is created? Are they reused or replaced?

11. How do you incorporate forecasts made by your sales team or by your customers? Do these forecasts replace the baseline forecast, or do you use these inputs to make planner overrides to the baseline forecast?

12. Under what circumstances would you ignore the baseline forecast and use exactly what sales or customers are telling you?

13. If you rely on customer forecasts, what do you do about customers who don’t provide forecasts?

14. How do you document the effectiveness of your forecasting approach? Most companies only measure the accuracy of the final forecast that is submitted to the ERP system, if they measure anything. But they don’t assess alternative predictions that might have been used. It is important to compare what you are doing to benchmarks. For example, do the methods you are using outperform a naïve forecast (i.e., “tomorrow equals today,” which requires no thought), or what you saw last year, or the average of the last 12 months? Benchmarking your baseline forecast ensures you are squeezing as much accuracy as possible out of the data.

15. Do you measure whether overrides from sales, customers, and planners are making the forecast better or worse? This is just as important as measuring whether your statistical approaches are outperforming the naïve method.  If you don’t know whether overrides are helping or hurting, the business can’t get better at forecasting – you need to know which steps are adding value so that you can do more of those and get even better. If you aren’t documenting forecast accuracy and conducting “forecast value add” analysis, then you aren’t able to properly assess whether the forecasts being produced are the best you could make.  You’ll miss opportunities to improve the process, increase accuracy, and educate the business on what type of forecast error is to be expected.
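
As a minimal sketch of that benchmarking habit: score the naïve forecast, the statistical baseline, and the final overridden forecast against the same actuals, so the value added (or destroyed) by each step is visible. All numbers, including the assumed prior-period actual used to seed the naïve forecast, are made up for illustration.

```python
# Illustrative forecast-value-add comparison against a naive benchmark.
actuals  = [100, 95, 110, 105, 120, 115]
stat_fc  = [ 98, 99, 104, 108, 115, 118]
final_fc = [105, 92, 115, 100, 125, 110]  # after sales/planner overrides
naive_fc = [102] + actuals[:-1]           # "tomorrow equals today"; 102 is an
                                          # assumed actual from the prior period

def mape(forecast, actual):
    """Mean absolute percentage error: lower is better."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

for name, fc in [("naive", naive_fc), ("statistical", stat_fc),
                 ("final (with overrides)", final_fc)]:
    print(f"{name}: MAPE = {mape(fc, actuals):.1%}")
```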