Smart Software Announces Next-Generation Patent

Belmont, MA, June 2023 – Smart Software, Inc., provider of industry-leading demand forecasting, planning, and inventory optimization solutions, today announced the award of US Patent 11,656,887, “SYSTEM AND METHOD TO SIMULATE DEMAND AND OPTIMIZE CONTROL PARAMETERS FOR A TECHNOLOGY PLATFORM.”

The patent describes "technical solutions for analyzing historical demand data of resources in a technology platform to facilitate management of an automated process in the platform." One important application is optimization of parts inventories.

Aspects of the invention include: an advanced bootstrap process that converts a single observed time series of item demand into an unlimited number of realistic demand scenarios; a performance prediction process that executes Monte Carlo simulations of a proposed inventory control policy to assess its performance; and a performance improvement process that uses the performance prediction process to automatically explore the space of alternative system designs to identify optimal control parameter values, selecting ones that minimize operating cost while guaranteeing a target level of item availability.
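The press release gives no implementation details, so the following Python sketch is only a toy illustration of the general ideas (bootstrapped demand scenarios plus Monte Carlo evaluation of a reorder-point policy); every function name, parameter, and number here is invented for illustration and is not Smart's patented method.

```python
import random

def bootstrap_scenarios(demand_history, horizon, n_scenarios, seed=0):
    """Resample observed demand to create many plausible future demand paths
    (a plain bootstrap; the patented process is more sophisticated)."""
    rng = random.Random(seed)
    return [[rng.choice(demand_history) for _ in range(horizon)]
            for _ in range(n_scenarios)]

def simulate_policy(scenario, reorder_point, order_qty, start_on_hand, lead_time=2):
    """Monte Carlo replay of a reorder-point policy over one demand scenario.
    Returns (fraction of periods with no stockout, average on-hand stock)."""
    on_hand, pipeline = start_on_hand, []          # pipeline holds (arrival_period, qty)
    ok_periods = total_stock = 0
    for t, demand in enumerate(scenario):
        on_hand += sum(q for arrive, q in pipeline if arrive == t)   # receive orders
        pipeline = [(a, q) for a, q in pipeline if a > t]
        if on_hand >= demand:
            ok_periods += 1
        on_hand = max(on_hand - demand, 0)
        if on_hand + sum(q for _, q in pipeline) <= reorder_point:   # reorder trigger
            pipeline.append((t + lead_time, order_qty))
        total_stock += on_hand
    return ok_periods / len(scenario), total_stock / len(scenario)

# Score one candidate policy against many bootstrapped scenarios.
history = [3, 0, 5, 2, 0, 7, 1, 4, 0, 2, 6, 3]
scenarios = bootstrap_scenarios(history, horizon=26, n_scenarios=1000)
results = [simulate_policy(s, reorder_point=10, order_qty=15, start_on_hand=12)
           for s in scenarios]
service = sum(r[0] for r in results) / len(results)
stock = sum(r[1] for r in results) / len(results)
print(f"predicted service level {service:.1%}, average on-hand {stock:.1f} units")
```

A performance improvement step along the lines described in the patent would then repeat this evaluation over a grid of candidate control parameters and keep the lowest-cost combination that still meets the availability target.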

The new analytic technology described in the patent will form the basis for the upcoming release of the next generation (“Gen2”) of Smart Demand Planner™ and Smart IP&O™. Current customers and resellers can preview Gen2 by contacting their Smart Software sales representative.

Research underlying the patent was self-funded by Smart, supplemented by competitive Small Business Innovation Research grants from the US National Science Foundation.

 

About Smart Software, Inc.
Founded in 1981, Smart Software, Inc. is a leader in providing businesses with enterprise-wide demand forecasting, planning, and inventory optimization solutions.  Smart Software’s demand forecasting and inventory optimization solutions have helped thousands of users worldwide, including customers such as Disney, Arizona Public Service, Ameren, and The American Red Cross.  Smart’s Inventory Planning & Optimization Platform, Smart IP&O, gives demand planners the tools to handle sales seasonality, promotions, new and aging products, multi-dimensional hierarchies, and intermittently demanded service parts and capital goods items.  It also provides inventory managers with accurate estimates of the optimal inventory and safety stock required to meet future orders and achieve desired service levels.  Smart Software is headquartered in Belmont, Massachusetts, and our website is www.smartcorp.com.


Correlation vs Causation: Is This Relevant to Your Job?

Outside of work, you may have heard the famous dictum “Correlation is not causation.” It may sound like a piece of theoretical fluff that, though involved in a recent Nobel Prize in economics, isn’t relevant to your work as a demand planner. If so, you may be only partially correct.

Extrapolative vs Causal Models

Most demand forecasting uses extrapolative models. Also called time-series models, these forecast demand using only the past values of an item’s demand. Plots of past values reveal trend, seasonality, and volatility, so there is a lot these models are good for. But there is another type of model, the causal model, that can potentially improve forecast accuracy beyond what you can get from extrapolative models.

Causal models bring more input data to the forecasting task: information on presumed forecast “drivers” external to the demand history of an item. Examples of potentially useful causal factors include macroeconomic variables like the inflation rate, the rate of GDP growth, and raw material prices. Examples not tied to the national economy include industry-specific growth rates and your own and competitors’ ad spending.  These variables are usually used as inputs to regression models, which are equations with demand as an output and causal variables as inputs.
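For illustration only (the post does not prescribe any particular tooling), here is a minimal sketch of such a regression in Python; the variable names and numbers are invented.

```python
import numpy as np

# Hypothetical monthly data: demand (units) and two presumed causal drivers.
demand     = np.array([120, 135, 150, 160, 148, 170, 182, 176])
gdp_growth = np.array([1.8, 2.0, 2.3, 2.5, 2.1, 2.6, 2.9, 2.7])   # percent
ad_spend   = np.array([10, 12, 14, 15, 13, 16, 18, 17])           # $ thousands

# Causal (regression) model: demand = b0 + b1*gdp_growth + b2*ad_spend + error
X = np.column_stack([np.ones_like(gdp_growth), gdp_growth, ad_spend])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
b0, b1, b2 = coef
print(f"demand ~ {b0:.1f} + {b1:.1f}*gdp_growth + {b2:.1f}*ad_spend")

# To forecast, you must supply future values of the drivers,
# which is its own forecasting problem (discussed below).
forecast = b0 + b1 * 2.4 + b2 * 16
print(f"forecast for a month with 2.4% GDP growth and $16k ad spend: {forecast:.0f} units")
```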

Forecasting using Causal Models

Many firms have an S&OP process that involves a monthly review of statistical (extrapolative) forecasts in which management adjusts forecasts based on their judgement. Often this is an indirect and subjective way to work causal models into the process without doing the regression modeling.

To actually make a causal regression model, first you have to nominate a list of potentially-useful causal predictor variables. These may come from your subject matter expertise. For example, suppose you manufacture window glass. Much of your glass may end up in new homes and new office buildings. So, the number of new homes and offices being built are plausible predictor variables in a regression equation.

There is a complication here: if you are using the equation to predict something, you must first predict the predictors. For example, sales of glass next quarter may be strongly related to numbers of new homes and new office buildings next quarter. But how many new homes will there be next quarter? That’s its own forecasting problem. So, you have a potentially powerful forecasting model, but you have extra work to do to make it usable.

There is one way to simplify things: use “lagged” versions of the predictor variables. For example, the number of new building permits issued six months ago may be a good predictor of glass sales next month. You don’t have to predict the building permit data – you just have to look it up.
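A minimal sketch of the lagged-predictor idea, with invented numbers: this month’s glass sales are regressed on building permits issued six months earlier, so the predictor is already known at forecast time.

```python
import numpy as np

# Hypothetical monthly series, aligned by month.
permits = np.array([200, 220, 210, 240, 260, 250, 270, 300, 290, 310, 330, 320])
sales   = np.array([ 90, 100,  95, 105, 110, 108, 112, 118, 116, 122, 128, 125])

lag = 6
X = np.column_stack([np.ones(len(sales) - lag), permits[:-lag]])  # permits 6 months earlier
y = sales[lag:]
(b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast next month's sales from permits already issued 6 months ago --
# no need to forecast the predictor itself, just look it up.
next_month_sales = b0 + b1 * permits[-lag]
print(f"sales ~ {b0:.1f} + {b1:.2f} * permits_lag6; next month ~ {next_month_sales:.0f}")
```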

Is it a causal relationship or just a spurious correlation?

Causal models are the real deal: there is an actual mechanism that relates the predictor variable to the predicted variable. Predicting glass sales from building permits is one such case.

A correlation relationship is more iffy. There is a statistical association that may or may not provide a solid basis for forecasting. For example, suppose you sell a product that happens to appeal most strongly to Dutch people, but you don’t realize this. The Dutch are, on average, the tallest people in Europe. If your sales are increasing and the average height of Europeans is increasing, you might use that relationship to good effect. But suppose the proportion of Dutch in the Euro zone is decreasing while the average height is increasing because the mix of men versus women is shifting toward men. What can go wrong? You will expect sales to increase because average height is increasing. But your sales really go mostly to the Dutch, and their share of the population is shrinking, so your sales are actually going to decrease instead. In this case the association between sales and customer height is a spurious correlation.

How can you tell the difference between true and spurious relationships? The gold standard is a rigorous scientific experiment, but you are not likely to be in a position to run one. Instead, you have to rely on your personal “mental model” of how your market works. If your hunches are right, your candidate causal variables will correlate with demand, and causal modeling will pay off for you, either supplementing extrapolative models or replacing them.


The Role of Trust in the Demand Forecasting Process Part 2: What Do You Trust?

“Regardless of how much effort is poured into training forecasters and developing elaborate forecast support systems, decision-makers will either modify or discard the predictions if they do not trust them.”  — Dilek Onkal, International Journal of Forecasting 38:3 (July-September 2022), p.802.

The words quoted above grabbed my attention and prompted this post. Those of a geekly persuasion, like your blogger, are inclined to think of forecasting as a statistical problem. While that is obviously true, those of a certain age, like your blogger, understand that forecasting is also a social activity and therefore has a large human component.

What Do You Trust?

There is a related dimension of trust: not who do you trust but what do you trust? By this, I mean both data and software.

Trust in Data

Trust in data underpins trust in the forecaster using the data. Most of our customers have their data in an ERP system. This data must be understood as a key corporate asset. For the data to be trustworthy, it must have the “three C’s”, i.e., it must be correct, complete, and current.

Correctness is obviously fundamental. We once had a customer who was implementing a new, strong forecasting process, but found the results completely at odds with their sense of what was happening in the business. It turned out that several of their data streams were incorrect by a factor of two, which is a huge error. Of course, this set back the implementation process until they could identify and correct all the gross errors in their demand data.

There is a less obvious point to be made about correctness: demand data are random, so what you see now is not likely to be what you see next. Planning production on the assumption that next week’s demand will be exactly the same as this week’s is clearly foolish, yet classical formula-based forecasting models such as exponential smoothing project the same number throughout the forecast horizon. This is where scenario-based planning is essential for coping with the inevitable fluctuations in key variables such as customers’ demands and suppliers’ replenishment lead times.
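As a toy illustration of that point (invented numbers, not our production method): simple exponential smoothing projects one flat number across the horizon, while resampled demand scenarios preserve period-to-period fluctuation.

```python
import random

demand = [12, 9, 15, 11, 0, 14, 10, 13, 8, 16]   # hypothetical weekly demand
alpha = 0.3

# Simple exponential smoothing: S(t) = alpha*X(t) + (1 - alpha)*S(t-1)
s = demand[0]
for x in demand[1:]:
    s = alpha * x + (1 - alpha) * s
flat_forecast = [round(s, 1)] * 4                 # the same number every future week

# Scenario-based view: resample history to keep realistic week-to-week variation
rng = random.Random(1)
one_scenario = [rng.choice(demand) for _ in range(4)]

print("smoothed point forecast:", flat_forecast)
print("one demand scenario    :", one_scenario)
```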

Completeness is the second requirement for data to be trusted. Our software ultimately gets much of its value from exposing the links between operational decisions (e.g., selecting the reorder points governing replenishment of stock) and business-related metrics like inventory costs. Yet implementation of forecasting software is often delayed because item demand information is available someplace, but holding, ordering, and/or shortage costs are not.  Or, to cite another recent example, a customer was able to properly size only half their inventory of reparable spare parts because nobody had been tracking when the other half was breaking down; without that history there was no information on mean time before failure (MTBF), and without MTBF there was no way to model the breakdown behavior of that half of the fleet.

Finally, the currency of data matters. As the speed of business increases and company planning cycles drop from a quarterly or monthly tempo to a weekly or daily tempo, it becomes desirable to exploit the agility provided by overnight uploads of daily transactional data into the cloud. This allows high-frequency adjustments of forecasts and/or inventory control parameters for items that experience high volatility and sudden shifts in demand. The fresher the data, the more trustworthy the analysis.

Trust in Demand Forecasting Software

Even with high-quality data, forecasters must still trust the analytical software that processes the data. This trust must extend to both the software itself and to the computational environment in which it functions.

If forecasters use on-premises software, they must rely on their own IT departments to safeguard the data and keep it available for use. If they wish instead to exploit the power of cloud-based analytics, they must entrust their confidential information to their software vendors. Professional-level software, such as ours, justifies customers’ trust through SOC 2 certification. Developed by the American Institute of CPAs, SOC 2 defines criteria for managing customer data based on five “trust service principles”: security, availability, processing integrity, confidentiality, and privacy.

What about the software itself? What is needed to make it trustworthy? The main criteria here are the correctness of algorithms and functional reliability. If the vendor has a professional program development process, there will be little chance that the software ends up computing the wrong numbers because of a programming error. And if the vendor has a rigorous quality assurance process, there will be little chance that the software will crash just when the forecaster is on deadline or must deal with a pop-up analysis for a special situation.

Summary

To be useful, forecasters and their forecasts must be trusted by decision-makers. That trust depends on characteristics of forecasters and their processes and communication. It also depends on the quality of the data and software used in creating the forecasts.

 

Read the 1st part of this Blog “Who do you Trust” here: https://smartcorp.com/forecasting/the-role-of-trust-in-the-demand-forecasting-process-part-1-who/


Service Level Driven Planning for Service Parts Businesses in the Dynamics 365 space

Service-Level-Driven Service Parts Planning for Microsoft Dynamics BC or F&SC is a four-step process that extends beyond simplified forecasting and rule-of-thumb safety stocks. It provides service parts planners with data-driven, risk-adjusted decision support.

 

The math to determine this level of planning simply does not exist in standard D365 functionality.  It requires analytics that run thousands of calculations for each part at each stocking location.  Math and AI like this are unique to Smart.  To understand more, please read on.

 

Step 1. Ensure that all stakeholders agree on the metrics that matter. 

All participants in the service parts inventory planning process must agree on the definitions and what metrics matter most to the organization. Service Levels detail the percentage of time you can completely satisfy required usage without stocking out. Fill Rates detail the percentage of the requested usage that is immediately filled from stock. (To learn more about the differences between service levels and fill rate, watch this 4-minute lesson here.) Availability details the percentage of active spare parts with an on-hand inventory of at least one unit. Holding costs are the annualized costs of holding stock accounting for obsolescence, taxes, interest, warehousing, and other expenses. Shortage costs are the cost of running out of stock, including vehicle/equipment downtime, expedites, lost sales, and more. Ordering costs are the costs associated with placing and receiving replenishment orders.
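A tiny worked example of the difference between the first two metrics (hypothetical numbers): service level counts periods served without a stockout, while fill rate counts units shipped from stock.

```python
# Hypothetical month of daily demand vs. units actually filled from stock.
demand = [5, 8, 0, 12, 6, 9, 4, 15, 7, 10]
filled = [5, 8, 0, 10, 6, 9, 4, 11, 7, 10]   # two days had partial shortages

periods_with_demand = [d for d in demand if d > 0]
no_stockout = sum(1 for d, f in zip(demand, filled) if d > 0 and f >= d)

service_level = no_stockout / len(periods_with_demand)   # % of periods fully served
fill_rate = sum(filled) / sum(demand)                    # % of units served from stock

print(f"service level: {service_level:.0%}, fill rate: {fill_rate:.0%}")
```

In this invented example the service level is about 78% while the fill rate is about 92%, which is why stakeholders must agree up front on which metric they are targeting.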

 

Step 2. Benchmark historical and predicted current service level performance.

All participants in the service parts inventory planning process must hold a common understanding of predicted future service levels, fill rates, and costs and their implications for your service parts operations. It is critical to measure both historical Key Performance Indicators (KPIs) and their predictive equivalents, Key Performance Predictions (KPPs).  With modern software, you can benchmark past performance and use probabilistic forecasting methods to simulate future performance.  Virtually every Demand Planning solution stops here.  Smart goes further by stress-testing your current inventory stocking policies against all plausible future demand scenarios.  It is these thousands of calculations that build our KPPs.  That accuracy improves your ability, within D365, to balance the costs of holding too much with the costs of not having enough: you will know ahead of time how current and proposed stocking policies are likely to perform.
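One way to picture the KPI/KPP distinction is with a toy sketch (not Smart’s actual algorithm; all numbers are invented): the KPI is measured from last year’s order log, while the KPP comes from replaying the current reorder point against thousands of simulated lead-time demand scenarios.

```python
import random

rng = random.Random(42)
daily_demand_history = [0, 2, 1, 0, 3, 0, 1, 4, 0, 2, 1, 0, 0, 5, 2]
lead_time_days = 7
reorder_point = 12

# KPI: historical service level, measured from a (hypothetical) order log.
orders_filled_on_time, total_orders = 46, 50
kpi_service = orders_filled_on_time / total_orders

# KPP: predicted service level from thousands of simulated lead-time demand scenarios.
def lead_time_demand():
    return sum(rng.choice(daily_demand_history) for _ in range(lead_time_days))

scenarios = [lead_time_demand() for _ in range(10_000)]
kpp_service = sum(d <= reorder_point for d in scenarios) / len(scenarios)

print(f"historical KPI: {kpi_service:.0%}   predicted KPP at ROP={reorder_point}: {kpp_service:.0%}")
```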

 

Step 3. Agree on targeted service levels for each spare part and take proactive corrective action when targets are predicted to be missed.

Parts planners, supply chain leadership, and the mechanical/maintenance teams should agree on the desired service level targets with a full understanding of the tradeoffs between stockout risk and inventory cost.  A call out here: our D365 customers are almost always stunned by the difference in stocking levels between 100% and 99.5% availability.  Across nearly 10,000 simulated scenarios, that last half-percent outage is almost never hit, so you achieve effectively full stocking performance at much lower cost.  You also find the parts that are understocked and correct those.  Striking that balance often yields a 7-12% reduction in inventory costs.
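The scale of that jump is easy to see with a toy calculation (a hypothetical demand distribution, not customer data): the stock needed to cover the worst simulated scenarios grows steeply as the availability target approaches 100%.

```python
import random

rng = random.Random(7)
# Simulate lead-time demand for one slow-moving spare part (hypothetical distribution).
scenarios = sorted(sum(rng.choice([0, 0, 0, 1, 2, 4]) for _ in range(14))
                   for _ in range(10_000))

def stock_needed(target):
    """Smallest stock level that covers `target` fraction of the simulated scenarios."""
    return scenarios[min(int(target * len(scenarios)), len(scenarios) - 1)]

for target in (0.95, 0.99, 0.995, 1.0):
    print(f"{target:.1%} availability target -> stock {stock_needed(target)} units")
```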

This leveraging of what-if scenarios in our parts planning software gives management and buyers the ability to easily compare alternative stocking policies and identify those that best meet business objectives.  For some parts, a small stockout is acceptable; for others, we need that 99.5% parts availability.  Once these limits are agreed upon, we use the power of D365 to execute, using the core ERP as it is meant to be used.  The plan is automatically uploaded to Dynamics as modified reorder points, safety stock levels, and/or Min/Max parameters.  This keeps the ERP as the single enterprise system of record, so people are not using multiple systems for their daily parts management and purchasing.

 

Step 4. Make it so and keep it so. 

Empower the planning team with the knowledge and tools it needs to strike the agreed-upon balance between service levels and costs.  This is critical.  Using Dynamics F&SC or BC to execute your ERP transactions is also important.  These two Dynamics ERPs have the highest level of new ERP growth on the planet, and using them as they are intended to be used makes sense.  Filling the white space with the math and AI calculations needed for maintenance and parts management also makes sense, and that requires a more targeted solution.  Smart Software Inventory Optimization for EAM and Dynamics ERPs fills that gap.

Remember: Recalibration of your service parts inventory policy is preventive maintenance against both stockouts and excess stock.  It lowers costs, frees capital for other uses, and supports best practices for your team.

 

Extend Microsoft 365 F&SC and AX with Smart IP&O

To see a recording of the Microsoft Dynamics Communities Webinar showcasing Smart IP&O, register here:

https://smartcorp.com/inventory-planning-with-microsoft-365-fsc-and-ax/


The Role of Trust in the Demand Forecasting Process Part 1: Who Do You Trust?

 

“Regardless of how much effort is poured into training forecasters and developing elaborate forecast support systems, decision-makers will either modify or discard the predictions if they do not trust them.”  — Dilek Onkal, International Journal of Forecasting 38:3 (July-September 2022), p.802.

The words quoted above grabbed my attention and prompted this post. Those of a geekly persuasion, like your blogger, are inclined to think of forecasting as a statistical problem. While that is obviously true, those of a certain age, like your blogger, understand that forecasting is also a social activity and therefore has a large human component.

Who Do You Trust?

Trust is always a two-way street, but let’s stay on the demand forecaster’s side. What characteristics of and actions by forecasters and demand planners build trust in their work? The above quoted Professor Onkal reviewed academic research on this topic going back to 2006. She summarized results from practitioner surveys that identified key trust factors related to forecaster characteristics, forecasting process, and forecasting communication.

Forecaster characteristics

Key to building trust among the users of forecasts are perceptions of forecaster and demand planner competence and objectivity. Competence has a mathematical component, but many managers confuse computer skills with analytic skills, so users of forecasting software can usually clear this hurdle. However, since the two are not the same, it pays dividends to absorb your vendor’s training and learn not just the math but the lingo of your forecasting software. In my observation, trust can also be increased by showing knowledge of the company’s business.

Objectivity is also a key to trustworthiness. It may be uncomfortable for the forecaster to be put in the middle of occasional departmental squabbles, but those will come up and must be handled with tact. Squabbles? Well, silos exist and tilt in different directions. Sales departments favor higher demand forecasts that drive production increases, so that they never have to say “Sorry, we are fresh out of that.” Inventory managers are wary of high demand forecasts, because “excess enthusiasm” can leave them holding the bag, sitting on bloated inventory.

Sometimes the forecaster becomes a de facto referee, and in this role must display overt signs of objectivity. That can mean first recognizing that every management decision involves tradeoffs of good things against other good things, e.g., product availability versus lean operations, and then helping the parties strike a painful but tolerable balance by surfacing the links between operational decisions and the key performance metrics that matter to folks like Chief Financial Officers.

The Forecasting process

The forecasting process can be thought of as having three phases: data inputs, calculations, and outputs. Actions can be taken to increase trust in each phase.

 

Regarding inputs:

Trust can be increased if obviously relevant inputs are at least acknowledged if not directly used in calculations. Thus, factors like social media sentiment and regional sales managers’ gut instincts can be legitimate parts of a forecast consensus process. However, objectivity requires that these putative predictors of profit be tested objectively. For instance, a professional-grade forecasting process may well include subjective adjustment to statistical forecasts but must then also assess whether the adjustments actually end up improving accuracy, not just making some people feel listened to.
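A toy example of the kind of objective check described above (all numbers invented): measure forecast error with and without the subjective adjustments.

```python
# Hypothetical check of whether judgmental overrides helped: compare the error of the
# statistical forecast vs. the adjusted forecast against what actually happened.
actual      = [100, 120, 95, 110, 130, 105]
statistical = [105, 115, 100, 108, 125, 110]
adjusted    = [115, 130, 105, 118, 140, 118]   # after the consensus meeting

def mape(forecast, actual):
    """Mean absolute percentage error."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

print(f"statistical MAPE: {mape(statistical, actual):.1%}")
print(f"adjusted    MAPE: {mape(adjusted, actual):.1%}")
```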

Regarding the second phase, calculations:

The forecaster will be trusted to the extent that they are able to deploy more than one way to calculate forecasts and then articulate a good reason why they chose the method eventually used. In addition, the forecaster should be able to explain in accessible language how even complicated techniques do their job. It is difficult to put trust in a “black box” method that is so opaque as to be inscrutable. The importance of explainability is amplified by the fact of life that the forecaster’s superior must themselves in turn be able to justify the choice of technique to their supervisor.

For instance, exponential smoothing uses this equation: S(t) = αX(t)+(1-α)S(t-1). Many forecasters are familiar with this equation, but many forecast users are not. There is a story that explains the equation in terms of averaging irrelevant “noise” in an item’s demand history and the need to strike a balance between smoothing out noise and being able to react to sudden shifts in the level of demand. The forecaster who can tell that story will be more credible. (My own version of that story uses phrases from sports, i.e., “head fakes” and “jukes”. Finding folksy analogs appropriate to your specific audience always pays dividends.)
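To make the smoothing-versus-responsiveness story concrete, here is a small illustrative sketch (numbers invented): a small α smooths through a one-week “head fake,” while a large α chases it.

```python
def exp_smooth(series, alpha):
    """S(t) = alpha*X(t) + (1 - alpha)*S(t-1), returning the smoothed series."""
    s = [series[0]]
    for x in series[1:]:
        s.append(alpha * x + (1 - alpha) * s[-1])
    return [round(v, 1) for v in s]

demand = [100, 102, 98, 101, 140, 99, 103, 100]   # week 5 is a one-off spike ("head fake")

print("alpha=0.1:", exp_smooth(demand, 0.1))   # smooths through the noise, reacts slowly
print("alpha=0.7:", exp_smooth(demand, 0.7))   # reacts quickly, but chases the head fake
```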

A final point: best practice demands that any forecast be accompanied by an honest assessment of its uncertainty. A forecaster who tries to build trust by being overly specific (“Sales next quarter will be 12,184 units”) will always fail. A forecaster who says “Sales next quarter will have a 90% chance of falling between 12,000 and 12,300 units” will be both correct more often and more helpful to decision makers. After all, forecasting is essentially a job of risk management, so the decision maker is best served by knowing the risks.
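One hypothetical way to produce such a ranged statement (a sketch, not a prescribed method): resample many quarterly demand scenarios and report a percentile range instead of a single point.

```python
import random

rng = random.Random(3)
weekly_demand_history = [880, 910, 950, 900, 940, 970, 930, 920, 960, 890, 945, 915]

# Build many quarterly (13-week) scenarios by resampling weekly history, then
# report the 5th-95th percentile range rather than a single point forecast.
quarters = sorted(sum(rng.choice(weekly_demand_history) for _ in range(13))
                  for _ in range(10_000))
low, high = quarters[500], quarters[9500]
print(f"Sales next quarter have a 90% chance of falling between {low} and {high} units")
```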

Forecasting communication:

Finally, consider the third phase, communication of forecast results. Research suggests that continual communication with forecast users builds trust. It avoids those horrible, deflating moments when a nicely formatted report is shot down because of some fatal flaw that could have been foreseen: “This is no good because you didn’t take account of X, Y or Z” or “We really wanted you to present results rolled up to the top of the product hierarchies (or by sales region or by product line or…)”.

Even when everybody is aligned as to what is expected, trust is enhanced by presenting results using well-crafted graphics, with massive numerical tables provided for backup but not as the main way of communicating results. My experience has been that, just as a meeting-control device, a graph is usually much better than a large numerical table. With a graph, everybody’s attention is focused on the same thing and many aspects of the analysis are immediately (and literally) visible. With a table of results, the table of participants often splinters into side conversations in which each voice is focused on different pieces of the table.

Onkal summarizes the research this way: “Take-aways for those who make forecasts and those who use them converge around clarity of communication as well as perceptions of competence and integrity.”

What Do You Trust?

There is a related dimension of trust: not who do you trust but what do you trust? By this I mean both data and software….  Read the 2nd part of this Blog “What do you Trust” here  https://smartcorp.com/forecasting/the-role-of-trust-in-the-demand-forecasting-process-part-2-what/