Simple is Good, Except When It Isn’t

In this blog, we steer the conversation toward the transformative potential of technology in inventory management. The discussion centers on the limitations of simplistic thinking in inventory control and the case for adopting systematic software solutions. Dr. Tom Willemain contrasts Smart Software's approach with the basic, albeit comfortable, methods many businesses employ. These elementary methods, often favored for their ease of use and zero cost, are scrutinized for their inadequacy in addressing the dynamic challenges of inventory management.

The importance of this subject lies in the critical role inventory management plays in a business's operational efficiency and its direct impact on customer satisfaction and profitability. Dr. Tom Willemain points out the common pitfalls of relying on oversimplified rules of thumb, such as the whimsical nursery rhyme one company used to set reorder points, or the gut-feel method, which depends on unquantifiable intuition rather than data. These approaches, while appealing in their simplicity, fail to adapt to market fluctuations, supplier reliability, or changes in demand, and thus pose significant risks to the business. The video also critiques the practice of setting reorder points as multiples of average demand, highlighting its disregard for demand volatility, a fundamental consideration in inventory theory.

In conclusion, the presenter advocates a more sophisticated, data-driven approach to inventory management. By leveraging advanced software solutions like those offered by Smart Software, businesses can accurately model complex demand patterns and stress-test inventory rules against thousands of future scenarios. This scientific method allows reorder points to be set with real-world variability in mind, minimizing the risk of stockouts and the costs that accompany them. The video emphasizes that while simple heuristics are tempting for their ease of use, they are inadequate for today's dynamic market conditions. The presenter encourages viewers to embrace technological solutions that offer professional-grade accuracy and adaptability, ensuring sustainable business success.
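
To make the volatility critique concrete, here is a minimal Python sketch contrasting a reorder point set as a bare multiple of average demand with one that also accounts for demand variability. The demand history, the multiplier of 2, and the normal-approximation safety factor are all hypothetical illustrations, not Smart Software's actual method, which models full demand distributions rather than assuming normality.

```python
import math

# Hypothetical demand history (units per week) for one item.
demand = [12, 9, 30, 0, 14, 22, 3, 18, 25, 7, 16, 11]
lead_time_weeks = 3

mean_d = sum(demand) / len(demand)
var_d = sum((d - mean_d) ** 2 for d in demand) / (len(demand) - 1)
sigma_d = math.sqrt(var_d)

# Rule of thumb: reorder point as a multiple of average demand.
# This ignores how widely demand swings around its average.
rop_rule_of_thumb = 2 * mean_d

# A volatility-aware alternative: cover mean demand over the lead time
# plus a safety buffer scaled by demand variability (z = 1.64 targets
# roughly a 95% service level under a normal approximation).
z = 1.64
rop_statistical = (mean_d * lead_time_weeks
                   + z * sigma_d * math.sqrt(lead_time_weeks))

print(f"Rule-of-thumb ROP:    {rop_rule_of_thumb:.1f}")
print(f"Volatility-aware ROP: {rop_statistical:.1f}")
```

The same average demand can call for very different reorder points depending on how erratic that demand is, which is exactly what the simple multiple misses.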


The Methods of Forecasting

Demand planning and statistical forecasting software play a pivotal role in effective business management by incorporating features that significantly enhance forecasting accuracy. One key aspect involves the use of smoothing-based, or extrapolative, models, which enable businesses to make predictions quickly from historical data alone. This foundation in past performance is crucial for understanding trends and patterns, especially in variables like sales or product demand. Forecasting software goes beyond mere data analysis by allowing professional judgment to be blended with statistical forecasts, recognizing that forecasting is not a one-size-fits-all process. This flexibility enables businesses to incorporate human insight and industry knowledge into the forecasting model, producing more nuanced and accurate predictions.

Features such as forecasting multiple items as a group, accounting for promotion-driven demand, and handling intermittent demand patterns are essential for businesses dealing with diverse product portfolios and dynamic market conditions. Properly implemented, these applications give businesses versatile forecasting tools that contribute significantly to informed decision-making and operational efficiency.

Extrapolative models

Our demand forecasting solutions support a variety of forecasting approaches, including extrapolative or smoothing-based models such as exponential smoothing and moving averages. The philosophy behind these models is simple: they try to detect, quantify, and project into the future any repeating patterns in the historical data.
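
As a concrete illustration, here is a minimal Python sketch of simple exponential smoothing, the most basic member of this family. The demand numbers and the fixed smoothing weight are hypothetical; commercial software selects and tunes models automatically rather than using a hand-picked alpha.

```python
def simple_exponential_smoothing(history, alpha=0.2):
    """Return the one-step-ahead forecast from simple exponential smoothing.

    A small alpha averages through noise; an alpha near 1 chases the most
    recent observations. The final smoothed level becomes the forecast.
    """
    level = history[0]                       # initialize with the first value
    for obs in history[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

demand = [104, 98, 121, 95, 110, 87, 133, 102]   # hypothetical monthly demand
print(round(simple_exponential_smoothing(demand), 1))
```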

There are two types of patterns that might be found in the historical data:

• Trend
• Seasonality

These patterns are illustrated in the following figure along with random data.


Illustrating trending, seasonal, and random time series data

If the pattern is a trend, then extrapolative models such as double exponential smoothing and the linear moving average estimate the rate of increase or decrease in the level of the variable and project that rate into the future.

If the pattern is seasonality, then models such as Winters' method (triple exponential smoothing) estimate either seasonal multipliers or seasonal add factors and then apply these to projections of the nonseasonal portion of the data.

Very often, especially with retail sales data, both trend and seasonal patterns are involved. If these patterns are stable, they can be exploited to give very accurate forecasts.
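
For readers who want to see the mechanics, below is a compact, textbook-style Python sketch of triple exponential smoothing with an additive trend and multiplicative seasonal factors. The quarterly data and fixed smoothing weights are hypothetical, and production software estimates the weights and initial conditions far more carefully than this.

```python
def holt_winters(history, season_len, alpha=0.3, beta=0.1, gamma=0.2, horizon=4):
    """Triple exponential smoothing: level + additive trend + seasonal multipliers.

    Requires at least two full seasons of history for initialization.
    """
    season1 = history[:season_len]
    season2 = history[season_len:2 * season_len]
    level = sum(season1) / season_len                       # initial level
    trend = (sum(season2) - sum(season1)) / season_len**2   # initial per-period trend
    seasonal = [x / level for x in season1]                 # initial multipliers

    for t in range(season_len, len(history)):
        obs, s = history[t], seasonal[t % season_len]
        prev_level = level
        level = alpha * (obs / s) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        seasonal[t % season_len] = gamma * (obs / level) + (1 - gamma) * s

    n = len(history)
    return [(level + (h + 1) * trend) * seasonal[(n + h) % season_len]
            for h in range(horizon)]

# Hypothetical quarterly demand: upward trend with a Q4 peak.
hist = [80, 95, 90, 140, 88, 104, 99, 154, 96, 113, 108, 168]
print([round(f, 1) for f in holt_winters(hist, season_len=4)])
```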

Sometimes, however, there are no obvious patterns, and plots of the data look like random noise. Sometimes patterns are clearly visible, but they change over time and cannot be relied upon to repeat. In these cases, the extrapolative models don't try to quantify and project patterns. Instead, they try to average through the noise and make good estimates of the middle of the distribution of data values. These typical values then become the forecasts. When users see a historical plot with lots of ups and downs, they are sometimes concerned that the forecast doesn't replicate those ups and downs. Normally, this is not a reason for concern: it happens when the historical patterns aren't strong enough to warrant a forecasting method that would replicate them. You want to make sure your forecasts don't suffer from the “wiggle effect” described in this blog post.

Past as a predictor of the future

The key assumption implicit in extrapolative models is that the past is a good guide to the future. This assumption, however, can break down. Some of the historical data may be obsolete. For example, the data might describe a business environment that no longer exists. Or, the world that the model represents may be ready to change soon, rendering all the data obsolete. Because of such complicating factors, the risks of extrapolative forecasting are lower when forecasting only a short time into the future.

Extrapolative models have the practical advantage of being cheap and easy to build, maintain and use. They require only accurate records of past values of the variables you need to forecast. As time goes by, you simply add the latest data points to the time series and reforecast. In contrast, the causal models described below require more thinking and more data. The simplicity of extrapolative models is most appreciated when you have a massive forecasting problem, such as making overnight forecasts of demand for all 30,000 items in inventory in a warehouse.
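
That overnight mass-reforecasting workflow can be as simple as the hypothetical Python loop below. The function forecast_next stands in for whichever extrapolative model each item uses (for example, the smoothing sketch above); none of these names refer to an actual product API.

```python
def nightly_reforecast(histories, new_observations, forecast_next):
    """Append each SKU's latest demand figure and reforecast it.

    histories:        dict mapping SKU -> list of past demand values
    new_observations: dict mapping SKU -> the latest demand value
    forecast_next:    function taking a history list, returning a forecast
    """
    forecasts = {}
    for sku, latest in new_observations.items():
        histories[sku].append(latest)            # extend the time series
        forecasts[sku] = forecast_next(histories[sku])
    return forecasts
```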

Judgmental adjustments

Extrapolative models can be run in Smart Demand Planner in a fully automatic mode, with no intervention required. Causal models require substantive judgment for wise selection of independent variables. However, both types of statistical models can be enhanced by judgmental adjustments, and both can profit from your insights.

Both causal and extrapolative models are built on historical data. However, you may have additional information that is not reflected in the numbers found in the historical record. For instance, you may know that competitive conditions will soon change, perhaps due to price discounts, industry trends, the emergence of new competitors, or the announcement of a new generation of your own products. If these events occur during the period for which you are forecasting, they may well spoil the accuracy of purely statistical forecasts. Smart Demand Planner's graphical adjustment feature lets you include these additional factors in your forecasts through on-screen graphical adjustment.

Be aware that applying user adjustments to the forecast is a two-edged sword. Used appropriately, it can enhance forecast accuracy by exploiting a richer set of information. Used indiscriminately, it can add noise to the process and reduce accuracy. We advise that you use judgmental adjustments sparingly, but that you never blindly accept the predictions of a purely statistical forecasting method. It is also very important to measure forecast value add, that is, the value added to the forecast process by each incremental step. For example, if you are applying overrides based on business knowledge, it is important to measure whether those adjustments improve forecast accuracy. Smart Demand Planner supports measurement of forecast value add by tracking every forecast considered and automating the forecast accuracy reports. You can select statistical forecasts, measure their errors, and compare them to the overridden ones. By doing so, you inform the forecasting process so that better decisions can be made in the future.
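
The arithmetic behind forecast value add is straightforward, as the hypothetical Python sketch below shows: score the untouched statistical forecast and the overridden forecast against the same actuals, then take the difference. The numbers are invented for illustration; a real report would aggregate across many items and periods.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, skipping zero-demand periods."""
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return 100 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)

actuals     = [100, 120, 95, 130]   # what actually happened
statistical = [105, 110, 100, 118]  # untouched model forecast
overridden  = [125, 140, 105, 150]  # after a planner's upward adjustment

value_add = mape(actuals, statistical) - mape(actuals, overridden)
print(f"Override value add: {value_add:+.1f} MAPE points")
# Positive: the overrides improved accuracy. Negative: they made it worse.
```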

Multiple-level forecasts

Another common situation involves multiple-level forecasting, where there are multiple items being forecast as a group or there may even be multiple groups, with each group containing multiple items. We will generally call this type of forecasting Multilevel Forecasting. The prime example is product line forecasting, where each item is a member of a family of items, and the total of all the items in the family is a meaningful quantity.

For example, as in the following figure, you might have a line of tractors and want forecasts of sales for each type of tractor and for the entire tractor line.

Illustrating multiple-level product forecasts

Smart Demand Planner provides Roll Up/Roll Down Forecasting, which is crucial for obtaining comprehensive forecasts of all product items and their group total. The feature offers two options for obtaining these forecasts:

Roll Up (Bottom-Up): This option initially forecasts each item individually and then aggregates the item-level forecasts to generate a family-level forecast.

Roll Down (Top-Down): Alternatively, the roll-down option starts by forming the historical total at the family level, forecasts it, and then proportionally allocates the total down to the item level.
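
The mechanics of the two options reduce to an aggregate-or-allocate step, sketched below in Python with invented numbers. The family-level forecast of 250 and the historical shares are hypothetical stand-ins for the output of whatever forecasting model is applied at each level.

```python
# Roll Up (bottom-up): forecast each item, then sum to the family level.
item_forecasts = {"tractor_a": 120.0, "tractor_b": 75.0, "tractor_c": 40.0}
family_bottom_up = sum(item_forecasts.values())          # 235.0

# Roll Down (top-down): forecast the family total from summed history,
# then allocate it to items by their historical share of family demand.
family_top_down = 250.0                                  # from a family-level model
shares = {"tractor_a": 0.50, "tractor_b": 0.32, "tractor_c": 0.18}
allocated = {item: family_top_down * s for item, s in shares.items()}
# {"tractor_a": 125.0, "tractor_b": 80.0, "tractor_c": 45.0}
```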

When utilizing Roll Down/Roll Up, you have access to the full array of forecast methods provided by Smart Demand Planner at both the item and family levels. This ensures flexibility and accuracy in forecasting, catering to the specific needs of your business across different hierarchical levels.

Forecasting research has not established clear conditions favoring either the top-down or bottom-up approach. However, the bottom-up approach seems preferable when item histories are stable and the emphasis is on the trends and seasonal patterns of individual items. Top-down is normally a better choice when some items have very noisy histories or the emphasis is on forecasting at the group level. Since Smart Demand Planner makes it fast and easy to try both a bottom-up and a top-down approach, you should try both methods and compare the results. You can use Smart Demand Planner's “Hold back on Current” feature in the “Forecast vs. Actual” report to test both approaches on your own data and see which yields a more accurate forecast for your business.
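
A do-it-yourself version of that hold-back test might look like the following sketch, where fit_and_forecast is a placeholder for any model function (such as the triple exponential smoothing sketch earlier), not a Smart Demand Planner API.

```python
def compare_rollup_approaches(item_histories, fit_and_forecast, holdout=4):
    """Score bottom-up vs. top-down forecasts on held-back family history.

    Assumes all item histories cover the same periods (equal length).
    """
    family = [sum(vals) for vals in zip(*item_histories.values())]
    actual = family[-holdout:]                 # the periods we held back

    # Bottom-up: forecast each item on its truncated history, then sum.
    bottom_up = [0.0] * holdout
    for hist in item_histories.values():
        preds = fit_and_forecast(hist[:-holdout], holdout)
        bottom_up = [b + p for b, p in zip(bottom_up, preds)]

    # Top-down: forecast the truncated family total directly.
    top_down = fit_and_forecast(family[:-holdout], holdout)

    def mae(preds):
        return sum(abs(a - p) for a, p in zip(actual, preds)) / holdout

    return {"bottom_up_mae": mae(bottom_up), "top_down_mae": mae(top_down)}
```

Whichever approach produces the lower holdout error on your own data is the better default for that family.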


Learning from Inventory Models

In this video blog, we explore the integral role that inventory models play in shaping the decision-making processes of professionals across various industries. These models, whether they are tangible computer simulations or intangible mental constructs, serve as critical tools in managing the complexities of modern business environments. The discussion begins with an overview of how these models are utilized to predict outcomes and streamline operations, emphasizing their relevance in a constantly evolving market landscape.

The discussion further explores how the two kinds of models influence strategic decision-making in distinct ways. The mental models professionals develop through experience often guide initial responses to operational challenges. These models are subjective, built from personal insight and past encounters with similar situations, and they allow quick, intuitive decision-making. Computer-based models, on the other hand, provide a more objective framework: they use historical data and algorithmic calculations to forecast future scenarios, offering a quantitative basis for decisions that must weigh multiple variables and potential outcomes. This section highlights specific examples, such as the impact of adjusting order quantities on inventory costs and ordering frequency, or the effect of fluctuating lead times on service levels and customer satisfaction.
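
The order-quantity example is easy to turn into a small computer model. The sketch below uses the classic economic order quantity (EOQ) cost trade-off, a standard textbook model rather than necessarily the one discussed in the video; all figures are hypothetical.

```python
def annual_cost(order_qty, annual_demand, order_cost, holding_cost):
    """Total yearly cost = ordering cost + average holding cost."""
    orders_per_year = annual_demand / order_qty
    avg_inventory = order_qty / 2
    return orders_per_year * order_cost + avg_inventory * holding_cost

# Hypothetical item: 1,200 units/yr, $50 per order, $4 to hold a unit a year.
for q in (50, 100, 173, 300, 600):
    print(q, round(annual_cost(q, 1200, 50, 4), 2))
# Cost falls, bottoms out near the EOQ (sqrt(2*1200*50/4) ≈ 173), then rises:
# bigger orders mean fewer orders placed but more stock sitting on the shelf.
```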

In conclusion, while mental models provide a framework based on experience and intuition, computer models offer a more detailed and numbers-driven perspective. Combining both types of models allows for a more robust decision-making process, balancing theoretical knowledge with practical experience. This approach enhances the understanding of inventory dynamics and equips professionals with the tools to adapt to changes effectively, ensuring sustainability and competitiveness in their respective fields.


Looking for Trouble in Your Inventory Data

In this video blog, the spotlight is on a critical aspect of inventory management: the analysis and interpretation of inventory data. The focus is specifically on a dataset from a public transit agency detailing spare parts for buses. With over 13,700 parts recorded, the data presents a prime opportunity to delve into the intricacies of inventory operations and identify areas for improvement.

Understanding and addressing anomalies within inventory data is important for several reasons. It not only ensures the efficient operation of inventory systems but also minimizes costs and enhances service quality. This video blog explores four fundamental rules of inventory management and demonstrates, through real-world data, how deviations from these rules can signal underlying issues. By examining aspects such as item cost, lead times, on-hand and on-order units, and the parameters guiding replenishment policies, the video provides a comprehensive overview of the potential challenges and inefficiencies lurking within inventory data. 
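
The four rules themselves are laid out in the video, but the general pattern of rule-based data screening looks like the hypothetical Python pass below; the field names and thresholds are invented for illustration.

```python
def find_anomalies(records):
    """Flag inventory records that break basic sanity rules."""
    problems = []
    for r in records:
        if r["unit_cost"] <= 0:
            problems.append((r["part_id"], "non-positive unit cost"))
        if not 0 < r["lead_time_days"] <= 365:
            problems.append((r["part_id"], "implausible lead time"))
        if r["on_hand"] < 0:
            problems.append((r["part_id"], "negative on-hand balance"))
        if r["order_up_to"] < r["reorder_point"]:
            problems.append((r["part_id"], "Max set below Min"))
    return problems

# Each (part_id, reason) pair is a record worth investigating before it
# causes a stockout or a pile of excess inventory.
```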

We highlight the importance of regular inventory data analysis and how such an analysis can serve as a powerful tool for inventory managers, allowing them to detect and rectify problems before they escalate. Relying on antiquated approaches can lead to inaccuracies, resulting in either excess inventory or unfulfilled customer expectations, which in turn could cause considerable financial repercussions and inefficiencies in operations.

Through a detailed examination of the public transit agency’s dataset, the video blog conveys a clear message: proactive inventory data review is essential for maintaining optimal inventory operations, ensuring that parts are available when needed, and avoiding unnecessary expenditures.

Leveraging advanced predictive analytics tools like Smart Inventory Planning and Optimization will help you keep your inventory data under control. Smart IP&O gives you clear demand and inventory insights into evolving spare-parts demand patterns at every moment, empowering your organization with the information needed for strategic decision-making.


Can Randomness be an Ally in the Forecasting Battle?

Feynman's perspective illuminates our journey: “In its efforts to learn as much as possible about nature, modern physics has found that certain things can never be ‘known’ with certainty. Much of our knowledge must always remain uncertain. The most we can know is in terms of probabilities.” ― Richard Feynman, The Feynman Lectures on Physics.

When we try to understand the complex world of logistics, randomness plays a pivotal role. This introduces an interesting paradox: In a reality where precision and certainty are prized, could the unpredictable nature of supply and demand actually serve as a strategic ally?

The quest for accurate forecasts is not just an academic exercise; it's a critical component of operational success across numerous industries. For demand planners who must anticipate product demand, the ramifications of getting it right—or wrong—are significant. Recognizing and harnessing the power of randomness is therefore not merely a theoretical nicety; it's a necessity for resilience and adaptability in an ever-changing environment.

Embracing Uncertainty: Dynamic, Stochastic, and Monte Carlo Methods

Dynamic Modeling: The quest for absolute precision in forecasts ignores the intrinsic unpredictability of the world. Traditional forecasting methods, with their rigid frameworks, fall short in accommodating the dynamism of real-world phenomena. By embracing uncertainty, we can pivot towards more agile and dynamic models that incorporate randomness as a fundamental component. Unlike their rigid predecessors, these models are designed to evolve in response to new data, ensuring resilience and adaptability. This paradigm shift from a deterministic to a probabilistic approach enables organizations to navigate uncertainty with greater confidence, making informed decisions even in volatile environments.

Stochastic Modeling: These models guide forecasters through the fog of unpredictability with the principles of probability. Far from attempting to eliminate randomness, stochastic models embrace it. They eschew the notion of a singular, predetermined future, presenting instead an array of possible outcomes, each with its estimated probability. This approach offers a more nuanced and realistic representation of the future, acknowledging the inherent variability of systems and processes. By mapping out a spectrum of potential futures, stochastic modeling equips decision-makers with a comprehensive understanding of uncertainty, enabling strategic planning that is both informed and flexible.

Monte Carlo Simulation: Named after the historical hub of chance and fortune, Monte Carlo simulations harness the power of randomness to explore the vast landscape of possible outcomes. The technique generates thousands, if not millions, of scenarios through random sampling, each scenario painting a different picture of the future based on the inherent uncertainties of the real world. Decision-makers, armed with insights from Monte Carlo simulations, can gauge the range of possible impacts of their decisions, making the method an invaluable tool for risk assessment and strategic planning in uncertain environments.
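
As a bare-bones illustration of the technique, the Python sketch below resamples a hypothetical daily demand history to build the distribution of total demand over a replenishment lead time, then reads off a percentile as a candidate reorder point. Tools like Smart IP&O model demand far more richly than this.

```python
import random

def lead_time_demand_scenarios(daily_demand, lead_time_days, n_scenarios=100_000):
    """Monte Carlo: resample observed daily demand to estimate the
    distribution of total demand over one replenishment lead time."""
    totals = [sum(random.choice(daily_demand) for _ in range(lead_time_days))
              for _ in range(n_scenarios)]
    totals.sort()
    return totals

# Hypothetical history: mostly zero-demand days with occasional spikes.
history = [0, 0, 3, 0, 7, 0, 0, 2, 0, 12, 0, 1, 0, 0, 5]
scenarios = lead_time_demand_scenarios(history, lead_time_days=10)

# A reorder point at the 95th percentile covers lead-time demand in
# 95% of the simulated futures.
rop_95 = scenarios[int(0.95 * len(scenarios)) - 1]
print(f"95th-percentile lead-time demand: {rop_95}")
```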

Real-World Successes: Harnessing Randomness

The strategy of integrating randomness into forecasting has proven invaluable across diverse sectors. For instance, major investment firms and banks constantly rely on stochastic models to cope with the volatile behavior of the stock market. A notable example is how hedge funds employ these models to predict price movements and manage risk, leading to more strategic investment choices.

Similarly, in supply chain management, many companies rely on Monte Carlo simulations to tackle the unpredictability of demand, especially during peak seasons like the holidays. By simulating various scenarios, they can prepare for a range of outcomes, ensuring that they have adequate stock levels without overcommitting resources. This approach minimizes the risk of both stockouts and excess inventory.

These real-world successes highlight the value of integrating randomness into forecasting endeavors. Far from being the adversary it’s often perceived to be, randomness emerges as an indispensable ally in the intricate ballet of forecasting. By adopting methods that honor the inherent uncertainty of the future—bolstered by advanced tools like Smart IP&O—organizations can navigate the unpredictable with confidence and agility. Thus, in the grand scheme of forecasting, it may be wise to embrace the notion that while we cannot control the roll of the dice, we can certainly strategize around it.