The top 3 reasons why your spreadsheet won’t work for optimizing reorder points on spare parts

We often encounter Excel-based reorder point planning methods.  In this post, we’ve detailed an approach that a customer used prior to proceeding with Smart.  We describe how their spreadsheet worked, the statistical approaches it relied on, the steps planners went through each planning cycle, and their stated motivations for using (and really liking) this internally developed spreadsheet.

Their monthly process consisted of adding a new month of actuals to the “reorder point sheet.”  An embedded formula then recomputed the Reorder Point (ROP) and order-up-to (Max) level.  It worked like this:

  • ROP = LT Demand + Safety Stock
  • LT Demand = average daily demand x lead time days (assumed constant to keep things simple)
  • Safety Stock for long lead time parts = Standard deviation x 2.0
  • Safety Stock for short lead time parts = Standard deviation x 1.2
  • Max = ROP + supplier-dictated Minimum Order Quantity

Historical averages and standard deviations used 52 weeks of rolling history (i.e., the newest week replaced the oldest week each period).  The standard deviation of demand was computed using Excel’s STDEVP function.
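For readers who want to see the mechanics, here is a minimal Python sketch of the spreadsheet logic described above. The 2.0 and 1.2 multipliers and the 52-week window come straight from the description; the lead-time cutoff separating “short” from “long” parts is an illustrative assumption, since the original doesn’t state it.

```python
import statistics

def spreadsheet_rop(weekly_demand_52, lead_time_days, moq, long_lead_cutoff_days=30):
    """Sketch of the customer's spreadsheet logic (not Smart's method).

    weekly_demand_52     : the most recent 52 weeks of demand (rolling window)
    lead_time_days       : supplier lead time, assumed constant
    moq                  : supplier-dictated minimum order quantity
    long_lead_cutoff_days: assumed threshold separating "short" and "long"
                           lead time parts (the post does not give the cutoff)
    """
    avg_daily_demand = sum(weekly_demand_52) / (52 * 7)
    # Excel's STDEVP is the population standard deviation (pstdev in Python).
    # The multiplier is applied to the std. dev. of the weekly buckets,
    # exactly as the spreadsheet did.
    sigma = statistics.pstdev(weekly_demand_52)

    lt_demand = avg_daily_demand * lead_time_days
    multiplier = 2.0 if lead_time_days >= long_lead_cutoff_days else 1.2
    safety_stock = sigma * multiplier

    rop = lt_demand + safety_stock
    max_level = rop + moq
    return rop, max_level
```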

Every month, a new ROP was recomputed. Both the average demand and the standard deviation shifted with the newest period’s demand, which in turn updated the ROP.

The default ROP was always based on the above logic. However, planners would make changes under certain conditions:

1. Planners would increase the Min for inexpensive parts to reduce the risk of taking an on-time delivery (OTD) hit on a low-cost part.

2. The Excel sheet identified any part with a newly calculated ROP that was ± 20% different from the current ROP.

3. Planners reviewed parts that exceeded the exception threshold, proposed changes, and got a manager to approve them.

4. Planners reviewed items with OTD hits and increased the ROP based on their intuition. Planners continued to monitor those parts for several periods and lowered the ROP when they felt it was safe.

5. Once the ROP and Max quantity were determined, the file of revised results was sent to IT, who uploaded it into the ERP system.

6. The ERP system then managed daily replenishment and order management.

Objectively, this was perhaps an above-average approach to inventory management. For instance, some companies are unaware of the link between demand variability and safety stock requirements and rely exclusively on rule-of-thumb methods or intuition.  However, there were problems with their approach:

1. Manual data updates
The spreadsheets required manual updating. Recomputing involved multiple steps, each with its own dependencies. First, a data dump needed to be run from the ERP system.  Second, a planner had to open the spreadsheet and review it to make sure the data imported properly.  Third, they needed to review the output to make sure it calculated as expected.  Fourth, manual steps were required to push the results back to the ERP system.

2. One Size Fits All Safety Stock
Or in this case, “one of two sizes fits all.” The choice of 2.0 and 1.2 standard deviations for long and short lead time items respectively equates to service levels of 97.7% and 88.4%.  This is a big problem, since not every part in each group requires the same service level.  Some parts will have higher stockout pain than others and vice versa. Service levels should therefore be specified accordingly and be commensurate with the importance of the item.  We discovered that they were experiencing OTD hits on roughly 20% of their critical spare parts, which necessitated manual overrides of the ROP.  The root cause was that all short lead time items were being planned to an 88.4% service level target. So, the best they could have gotten was to stock out roughly 12% of the time even if “on plan.”  It would have been better to set service level targets according to the importance of each part.
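The 97.7% and 88.4% figures fall straight out of the Normal assumption: a safety-stock multiplier (z-score) implies a cycle service level equal to the Normal CDF at that z. A quick check, using SciPy only for the Normal CDF and its inverse:

```python
from scipy.stats import norm

# A safety-stock multiplier of z standard deviations implies, under a Normal
# demand assumption, a service level of norm.cdf(z).
for z in (2.0, 1.2):
    print(f"z = {z:.1f}  ->  implied service level = {norm.cdf(z):.2%}")
# z = 2.0  ->  implied service level = 97.72%
# z = 1.2  ->  implied service level = 88.49%

# Going the other way: the multiplier needed for a chosen target service level.
for target in (0.99, 0.95, 0.90):
    print(f"target {target:.0%}  ->  z = {norm.ppf(target):.2f}")
```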

3. Safety stock is inaccurate
The items being planned for this company are spare parts that support diagnostic equipment.  The demand on most of these parts is very intermittent and sporadic.  So, the choice of an average to compute lead time demand wasn’t unreasonable if you accept the need to ignore variability in lead times.  However, the reliance on a Normal distribution to determine the safety stock was a big mistake that resulted in inaccurate safety stocks.  The company stated that its service levels for long lead time items ran in the 90% range compared to their target of 97.7%, and that they made up the difference with expedites.  Achieved service levels for shorter lead time items were about 80%, despite being targeted at 88.4%.  They computed safety stock incorrectly because their demand isn’t “bell shaped,” yet they picked safety stocks assuming it was.  This simplification results in missed service level targets, forcing the manual review of many items that then need to be manually “monitored for several periods” by a planner.  Wouldn’t it be better to make sure the reorder point met the exact service level you wanted from the start?  This would ensure you hit your service levels while minimizing unneeded manual intervention.
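To make the point concrete, here is a small, hedged illustration (not Smart’s patented method): build the lead-time demand distribution empirically by resampling the intermittent history, set the reorder point at the quantile matching the target service level, and compare with the Normal-based shortcut. The demand series, lead time, and z-value below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative intermittent weekly demand: mostly zeros with occasional spikes.
history = np.array([0, 0, 3, 0, 0, 0, 7, 0, 0, 2, 0, 0, 0,
                    0, 5, 0, 0, 0, 0, 1, 0, 0, 0, 6, 0, 0])
lead_time_weeks = 4
target_service = 0.95

# Empirical approach (a simple bootstrap used only for illustration):
# simulate many lead-time periods by resampling weekly demand, then take
# the quantile corresponding to the desired service level.
scenarios = rng.choice(history, size=(10_000, lead_time_weeks)).sum(axis=1)
rop_empirical = np.quantile(scenarios, target_service)

# Normal-based shortcut, in the spirit of the spreadsheet.
mu, sigma = history.mean(), history.std()
z = 1.645  # ~95% service under a Normal assumption
rop_normal = mu * lead_time_weeks + z * sigma * np.sqrt(lead_time_weeks)

print(f"Empirical 95% ROP   : {rop_empirical:.1f}")
print(f"Normal-based 95% ROP: {rop_normal:.1f}")
# With skewed, intermittent demand the two can differ materially, which is
# why the bell-shaped shortcut misses its service level targets.
```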

There is a fourth issue that didn’t make the list but is worth mentioning.  The spreadsheet was unable to track trend or seasonal patterns.  Historical averages ignore trend and seasonality, so the cumulative demand over lead time used in the ROP will be substantially less accurate for trending or seasonal parts. The planning team acknowledged this but didn’t feel it was a legitimate issue, reasoning that most of the demand was intermittent and didn’t have seasonality.  It is important for the model to pick up on trend and seasonality on intermittent data if it exists, but we didn’t find their data exhibited these patterns.  So, we agreed that this wasn’t an issue for them.  But as planning tempo increases to the point that demand is bucketed daily, even intermittent demand very often turns out to have day-of-week and sometimes week-of-month seasonality. If you don’t run at a higher frequency now, be aware that you may be forced to do so soon to keep up with more agile competition. At that point, spreadsheet-based processing will just not be able to keep up.

In conclusion, don’t use spreadsheets. They are not conducive to meaningful what-if analyses, they are too labor-intensive, and the underlying logic must be dumbed down to process quickly enough to be useful.  In short, go with purpose-built solutions. And make sure they run in the cloud.

 

Spare Parts Planning Software solutions

Smart IP&O’s service parts forecasting software uses a unique empirical probabilistic forecasting approach that is engineered for intermittent demand. For consumable spare parts, our patented and APICS award winning method rapidly generates tens of thousands of demand scenarios without relying on the assumptions about the nature of demand distributions implicit in traditional forecasting methods. The result is highly accurate estimates of safety stock, reorder points, and service levels, which leads to higher service levels and lower inventory costs. For repairable spare parts, Smart’s Repair and Return Module accurately simulates the processes of part breakdown and repair. It predicts downtime, service levels, and inventory costs associated with the current rotating spare parts pool. Planners will know how many spares to stock to achieve short- and long-term service level requirements and, in operational settings, whether to wait for repairs to be completed and returned to service or to purchase additional service spares from suppliers, avoiding unnecessary buying and equipment downtime.

Contact us to learn more about how this functionality has helped our customers in the MRO, Field Service, Utility, Mining, and Public Transportation sectors to optimize their inventory. You can also download the Whitepaper here.

 

 

White Paper: What you Need to know about Forecasting and Planning Service Parts

 

This paper describes Smart Software’s patented methodology for forecasting demand, safety stocks, and reorder points on items such as service parts and components with intermittent demand, and provides several examples of customer success.

 

    How to interpret and manipulate forecast results with different forecast methods

    Smart IP&O is powered by the SmartForecasts® forecasting engine that automatically selects the most appropriate method for each item.  Smart Forecast methods are listed below:

    • Simple Moving Average and Single Exponential Smoothing for flat, noisy data
    • Linear Moving Average and Double Exponential Smoothing for trending data
    • Winters Additive and Winters Multiplicative for seasonal and seasonal & trending data.

    This blog explains how each model works using time plots of historical and forecast data.  It outlines how to go about choosing which model to use.   The examples below show the same history, in red, forecasted with each method, in dark green, compared to the Smart-chosen winning method, in light green.

     

    Seasonality
    If you want to force (or prevent) seasonality from showing in the forecast, then restrict the chosen methods to (or remove) the Winters models.  Both methods require two full years of history.

    Winters multiplicative will determine the size of the peaks or valleys of seasonal effects based on a percentage difference from a trending average volume.  It is not a good fit for very low volume items due to division by zero when determining that percentage. Note in the image below that the large percentage drop in seasonal demand in the history is being projected to continue over the forecast horizon, making it look like there isn’t any seasonal demand despite using a seasonal method.

     

    Statistical forecast produced with the Winters multiplicative method.

     

    Winters additive will determine the size of the peaks or valleys of seasonal effects based on a unit difference from the average volume.  It is not a good fit if there’s significant trend to the data.  Note in the image below that seasonality is now being forecasted based on the average unit change in seasonality. So, the forecast still clearly reflects the seasonal pattern despite the down trend in both the level and the seasonal peaks/valleys.

    Statistical forecast produced with the Winters additive method.
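The essential difference between the two seasonal models can be boiled down to a few lines. This is the generic seasonal-index idea, not SmartForecasts’ exact implementation, and the data are invented for illustration:

```python
import numpy as np

# Two years of monthly history with a December peak (invented data).
history = np.array([10, 10, 11, 10, 10, 11, 10, 10, 10, 11, 12, 30,
                     9, 10, 10,  9, 10, 10, 10,  9, 10, 10, 11, 28])
level = history.mean()                           # overall average volume
seasons = history.reshape(2, 12).mean(axis=0)    # average of the two Januaries, etc.

mult_index = seasons / level    # multiplicative: each month as a % of the level
add_index = seasons - level     # additive: each month as units above/below the level

print("December multiplicative index:", round(mult_index[11], 2))  # ~2.5x the level
print("December additive index     :", round(add_index[11], 1))    # ~+17 units
# Multiplicative indices involve dividing by the level, which is why that
# method breaks down on very low-volume items; additive indices do not.
```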

     

    Trend

    If you want to force (or prevent) trend up or down to show in the forecast, then restrict the chosen methods to (or remove the methods of) Linear Moving Average and Double Exponential Smoothing.

     Double exponential smoothing will pick up on a long-term trend.  It is not a good fit if there are few historical data points.

    Statistical forecast produced with Double Exponential Smoothing
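For reference, the standard double (Holt) exponential smoothing recursion is only a few lines. SmartForecasts tunes its own smoothing weights, so treat the alpha and beta values here as placeholders:

```python
def double_exponential_smoothing(history, alpha=0.3, beta=0.1, horizon=6):
    """Standard Holt recursion: a smoothed level plus a smoothed trend.

    alpha, beta: smoothing weights for level and trend (illustrative values).
    Returns a list of forecasts for `horizon` future periods.
    """
    level, trend = history[0], history[1] - history[0]
    for actual in history[1:]:
        previous_level = level
        level = alpha * actual + (1 - alpha) * (level + trend)
        trend = beta * (level - previous_level) + (1 - beta) * trend
    return [level + (k + 1) * trend for k in range(horizon)]

# A gently trending series produces an upward-sloping forecast.
print(double_exponential_smoothing([100, 104, 109, 113, 118, 124, 129]))
```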

     

    Linear moving average will pick up on nearer-term trends.  It is not a good fit for highly volatile data.

    Statistical forecast produced with Linear Moving Average
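“Linear moving average” can be constructed in more than one way; one common reading, shown purely as an illustration and not necessarily SmartForecasts’ formulation, fits a straight line to the most recent few periods and extends it forward:

```python
import numpy as np

def linear_moving_average(history, window=6, horizon=4):
    """Fit a least-squares line to the last `window` observations and extend it.

    This is one common interpretation of "linear moving average" (an
    illustrative assumption), which reacts to near-term trend only.
    """
    recent = np.asarray(history[-window:], dtype=float)
    x = np.arange(window)
    slope, intercept = np.polyfit(x, recent, deg=1)
    return [intercept + slope * (window - 1 + k) for k in range(1, horizon + 1)]

print(linear_moving_average([50, 52, 51, 55, 58, 57, 61, 63]))
```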

     

    Non-Trending and Non-Seasonal Data
    If you want to force (or prevent) an average from showing in the forecast, then restrict the chosen methods to (or remove the methods of) Simple Moving Average and Single Exponential Smoothing.

    Single exponential smoothing will weigh the most recent data more heavily and produce a flat-line forecast.  It is not a good fit for trending or seasonal data.

    Statistical forecast using Single Exponential Smoothing

    Simple moving average will find an average for each period, sometimes appearing to wiggle, and is better for longer-term averaging.  It is not a good fit for trending or seasonal data.

    Statistical forecast using Simple Moving Average
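Both level-only methods produce essentially flat forecasts; they differ only in how they weight the history. A minimal sketch with illustrative parameter choices:

```python
def single_exponential_smoothing(history, alpha=0.3):
    """Weight recent data more heavily; the forecast is the final smoothed level."""
    level = history[0]
    for actual in history[1:]:
        level = alpha * actual + (1 - alpha) * level
    return level  # projected flat over the forecast horizon

def simple_moving_average(history, window=12):
    """Equal weight to the last `window` periods; also projected flat."""
    recent = history[-window:]
    return sum(recent) / len(recent)

noisy_flat = [98, 105, 101, 96, 110, 99, 103, 97, 104, 100, 102, 98, 106]
print(single_exponential_smoothing(noisy_flat))  # leans toward recent values
print(simple_moving_average(noisy_flat))         # average of the last 12 periods
```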

     

     

     

    Why Spare Parts Tradeoff Curves are Mission-Critical for Parts Planning

    I’ll bet your maintenance and repair teams would be OK with incurring higher stockout risks on some spare parts if they knew that the inventory reduction savings would be used to spread the inventory investment more effectively across other parts and boost overall service levels.

    I’ll double down that your Finance team, despite always being challenged with lowering costs, would support a healthy inventory increase if they could clearly see that the revenue benefits from increased uptime, fewer expedites, and service level improvements clearly outweighed the additional inventory costs and risk.

    A spare parts tradeoff curve will enable service parts planning teams to properly communicate the risks and costs of each inventory decision.  It is mission critical for parts planning and the only way to adjust stocking parameters proactively and accurately for each part.  Without it, planners, for all intents and purposes, are “planning” with blinders on because they won’t be able to communicate the true tradeoffs associated with stocking decisions.

    For example, if a proposed increase to the min/max levels of an important commodity group of service parts is recommended, how do you know whether the increase is too high, too low, or just right?  How can you fine-tune the change for thousands of spares?  You won’t and you can’t.  Your inventory decision-making will rely on reactive, gut-feel, broad-brush decisions, causing service levels to suffer and inventory costs to balloon.

    So, what exactly is a spare parts tradeoff curve anyway?

    It’s a fact-based, numerically driven prediction that details how changes in stocking levels will influence inventory value, holding costs, and service levels.  For each unit change in inventory level there is a cost and a benefit.  The spare parts tradeoff curve identifies these costs and benefits across different stocking levels. It lets planners discover the stock level that best balances the costs and benefits for each individual item.

    Here are two simplified examples. In Figure 1, the spare parts tradeoff curve shows how the service level (the probability of not stocking out) changes depending on the reorder level.  The higher the reorder level, the lower the stockout risk.  It is critical to know how much service you are gaining for a given inventory investment.  Here you may be able to justify that an increase in the reorder point from 35 to 45 is well worth the investment of 10 additional units of stock, because the service level jumps from just under 70% to 90%, cutting your stockout risk for the spare part from roughly 30% to 10%!

     

    Figure 1: Cost versus Service Level

     

    Figure 2: Service Level versus Size of Inventory

    In this example (Figure 2), the tradeoff curve exposes a common problem with spare parts inventory.  Often stock levels are so high that they generate negative returns.  After a certain stocking quantity, each additional unit of stock does not buy more benefit in the form of a higher service level.  Inventory decreases can be justified when it is clear the stock level is well past the point of diminishing returns. An accurate tradeoff curve will expose the point where it is no longer advantageous to add stock.
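Curves like those in Figures 1 and 2 can be generated by stepping through candidate reorder points, estimating the service level for each from simulated lead-time demand, and tallying the corresponding inventory investment. The sketch below does exactly that; the Poisson demand model and the cost figures are placeholders for illustration, not Smart’s method:

```python
import numpy as np

rng = np.random.default_rng(7)

unit_cost = 120.0         # illustrative cost per unit
lead_time_days = 21
daily_demand_rate = 0.4   # sparse, spare-parts-like demand (assumed)

# Simulate many lead-time demand scenarios. Poisson is only a stand-in for
# whatever demand model is actually appropriate for the part.
scenarios = rng.poisson(daily_demand_rate * lead_time_days, size=20_000)

print(" ROP | service level | inventory $")
for rop in range(0, 21, 2):
    service_level = (scenarios <= rop).mean()   # P(no stockout during lead time)
    investment = rop * unit_cost                # rough proxy for stock carried
    print(f"{rop:4d} | {service_level:13.1%} | {investment:10,.0f}")
# Reading down the table traces the tradeoff curve: the first units of stock
# buy large service-level gains, while units past the knee buy almost nothing.
```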

    By leveraging probabilistic forecasting to drive parts planning, you can communicate these tradeoffs accurately, do so at scale across hundreds of thousands of parts, avoid bad inventory decisions, and balance service levels and costs.  At Smart Software, we specialize in helping spare parts planners, Directors of Materials Management, and financial executives managing MRO, spare parts, and aftermarket parts to understand and exploit these relationships.

     


     

      What to do when a statistical forecast doesn’t make sense

      Sometimes a statistical forecast just doesn’t make sense.  Every forecaster has been there.  They may double-check that the data was input correctly or review the model settings but are still left scratching their head over why the forecast looks very unlike the demand history.   When the occasional forecast doesn’t make sense, it can erode confidence in the entire statistical forecasting process.

      This blog will help a layman understand what the Smart statistical models are and how they are chosen automatically.  It will address how that choice sometimes fails, how you can know if it did, and what you can do to ensure that the forecasts can always be justified.  It’s important to know what to expect, and how to catch the exceptions, so you can rely on your forecasting system.

       

      How methods are chosen automatically

      The criterion for automatically choosing one statistical method out of a set is which method came closest to correctly predicting held-out history.  Earlier history is passed to each method and the result is compared to actuals to find the one that came closest overall.  That automatically chosen method is then fed all the history to produce the forecast. Check out this blog to learn more about model selection: https://smartcorp.com/uncategorized/statistical-forecasting-how-automatic-method-selection-works/
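The selection step can be sketched generically. The two candidate methods and the absolute-error scoring below are simplified stand-ins for SmartForecasts’ actual candidates and metric:

```python
def forecast_average(history, horizon):
    return [sum(history) / len(history)] * horizon

def forecast_naive_trend(history, horizon):
    step = history[-1] - history[-2]
    return [history[-1] + step * (k + 1) for k in range(horizon)]

def pick_method(history, holdout=6, methods=(forecast_average, forecast_naive_trend)):
    """Choose the method that best predicts a held-out slice of recent history."""
    train, test = history[:-holdout], history[-holdout:]

    def holdout_error(method):
        predictions = method(train, holdout)
        return sum(abs(p - a) for p, a in zip(predictions, test))

    winner = min(methods, key=holdout_error)
    # Refit the winner on the full history to produce the working forecast.
    return winner.__name__, winner(history, horizon=12)

history = [20, 22, 21, 24, 23, 26, 25, 27, 28, 30, 31, 30, 33, 32, 35, 36]
print(pick_method(history))   # the trending series favors the trend method here
```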

      For most time series, this process can capture trends, seasonality, and average volume accurately. But sometimes a chosen method comes mathematically closest to predicting the held-out history yet doesn’t project it forward in a way that makes sense.  That means the system-selected method isn’t always best, particularly for certain “hard to forecast” items.

       

      Hard to forecast items

      Hard to forecast items may have large, unpredictable spikes in demand, or typically no demand but random irregular blips, or unusual recent activity.  Noise in the data sometimes randomly wanders up or down, and the automated best-pick method might forecast a runaway trend or a grind down to zero.  It will do worse than common sense on a small percentage of any reasonably varied group of items.  So, you will need to identify these cases and respond by overriding the forecast or changing the forecast inputs.

       

      How to find the exceptions

      Best practice is to filter or sort the forecasted items to identify those where the sum of the forecast over the next year is significantly different than the corresponding history last year.  The forecast sum may be much lower than the history or vice versa.  Use supplied metrics to identify these items; then you can choose to apply overrides to the forecast or modify the forecast settings.
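A small sketch of such an exception filter: compare each item’s next-12-months forecast total against its trailing-12-months actuals and flag large gaps. The 50% threshold is an illustrative choice, not a Smart default:

```python
def flag_forecast_exceptions(items, threshold=0.5):
    """Flag items whose 12-month forecast total differs from last year's actual
    total by more than `threshold` (0.5 = 50%). `items` holds tuples of
    (name, last 12 months of actuals, next 12 months of forecast)."""
    exceptions = []
    for name, actuals, forecast in items:
        actual_total = sum(actuals)
        forecast_total = sum(forecast)
        if actual_total == 0:
            continue   # brand-new or dead items need a different rule
        change = (forecast_total - actual_total) / actual_total
        if abs(change) > threshold:
            exceptions.append((name, round(change, 2)))
    return exceptions

items = [
    ("PART-A", [5, 0, 3, 0, 0, 7, 0, 2, 0, 0, 4, 0], [0.1] * 12),  # forecast collapsed
    ("PART-B", [10] * 12, [11] * 12),                              # looks reasonable
]
print(flag_forecast_exceptions(items))   # -> [('PART-A', -0.94)]
```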

       

      How to fix the exceptions

      Often when the forecast seems odd, an averaging method, like Single Exponential Smoothing or even a simple average using Freestyle, will produce a more reasonable forecast.  If trend is possibly valid, you can remove only seasonal methods to avoid a falsely seasonal result.  Or do the opposite and use only seasonal methods if seasonality is expected but wasn’t projected in the default forecast.  You can use the what-if features to create any number of forecasts, evaluate & compare, and continue to fine tune the settings until you are comfortable with the forecast.

      Cleaning the history, with or without changing the automatic method selection, is also effective at producing reasonable forecasts. You can embed forecast parameters to reduce the amount of history used to forecast those items or the number of periods passed into the algorithm so earlier, outdated history is no longer considered.  You can edit spikes or drops in the demand history that are known anomalies so they don’t influence the outcome.  You can also work with the Smart team to implement automatic outlier detection and removal so that data prior to being forecasted is already cleansed of these anomalies.
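Cleaning known anomalies before forecasting can be as simple as capping extreme spikes. The median-based cap below is only a minimal illustration of the idea, not Smart’s automatic outlier detection:

```python
import numpy as np

def cap_spikes(history, k=5.0):
    """Cap values more than k median-absolute-deviations above the median so a
    one-off spike doesn't dominate the forecast (illustrative rule only)."""
    data = np.asarray(history, dtype=float)
    median = np.median(data)
    mad = np.median(np.abs(data - median))
    cap = median + k * max(mad, 1.0)   # guard against a MAD of zero
    return np.minimum(data, cap)

history = [4, 6, 5, 7, 5, 120, 6, 4, 5, 7, 6, 5]   # 120 was a known one-off order
print(cap_spikes(history))   # the 120 is pulled down to the cap; the rest is untouched
```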

      If the demand is truly intermittent, it is going to be nearly impossible to forecast “accurately” per period. If a level-loading average is not acceptable, handling the item by setting inventory policy with a lead time forecast can be effective.  Alternatively, you may choose to use “same as last year” models which, while not especially accurate, will generally be accepted by the business given the alternative forecasts.

      Finally, if the item was introduced so recently that the algorithms do not have enough input to accurately forecast, a simple average or manual forecast may be best.  You can identify new items by filtering on the number of historical periods.

       

      Manual selection of methods

      Once you have identified rows where the forecast doesn’t make sense to the human eye, you can choose a smaller subset of all methods to allow into the forecast run and compare to history.  Smart will allow you to use a restricted set of methods just for one forecast run or embed the restricted set to use for all forecast runs going forward. Different methods will project the history into the future in different ways.  Having a sense of how each works will help you choose which to allow.

       

      Rely on your forecasting tool

      The more you use Smart period over period to embed your decisions about how to forecast and what historical data to consider, the less often you will face exceptions as described in this blog.  Entering forecast parameters is a manageable task when starting with critical or high impact items.  Even if you don’t embed any manual decisions on forecast methods, the forecast re-runs every period with new data. So, an item with an odd result today can become easily forecastable in time.

       

       

      Spare Parts Planning Isn’t as Hard as You Think

      When managing service parts, you don’t know what will break and when because part failures are random and sudden. As a result, demand patterns are most often extremely intermittent and lack significant trend or seasonal structure. The number of part-by-location combinations is often in the hundreds of thousands, so it’s not feasible to manually review demand for individual parts. Nevertheless, it is much more straightforward to implement a planning and forecasting system to support spare parts planning than you might think.

      This conclusion is informed by hundreds of software implementations we’ve directed over the years. Customers managing spare parts and service parts (the latter for internal consumption/MRO), and to a lesser degree aftermarket parts (for resale to installed bases), have consistently implemented our parts planning software faster than their peers in manufacturing and distribution.

      The primary reason is the role in manufacturing and distribution of business knowledge about what might happen in the future. In a traditional B2B manufacturing and distribution environment, there are customers and sales and marketing teams selling to those customers. There are sales goals, revenue expectations, and budgets. This means there is a lot of business knowledge about what will be purchased, what will be promoted, whose opinions need to be accounted for. A complex planning loop is required. In contrast, when managing spare parts, you have a maintenance team that fixes equipment when it breaks. Though there are often maintenance schedules for guidance, what is needed beyond a standard list of consumable parts is often unknown until a maintenance person is on-site. In other words, there just isn’t the same sort of business knowledge available to parts planners when making stocking decisions.

      Yes, that is a disadvantage, but it also has an upside: there is no need to produce a period-by-period consensus demand forecast with all the work that requires. When planning spare parts, you can usually skip many steps required for a typical manufacturer, distributor, or retailer. These skippable steps include:  

      1. Building forecasts at different levels of the business, such as product family or region.
      2. Sharing the demand forecast with sales, marketing, and customers.
      3. Reviewing forecast overrides from sales, marketing, and customers.
      4. Agreeing on a consensus forecast that combines statistics and business knowledge.
      5. Measuring “forecast value add” to determine if overrides make the forecast more accurate.
      6. Adjusting the demand forecast for known future promotions.
      7. Accounting for cannibalization (i.e., if I sell more of product A, I’ll sell less of product B).

      Freed from a consensus-building process, spare parts planners and inventory managers can rely directly on their software to predict usage and the required stocking policies. If they have access to a field-proven solution that addresses intermittent demand, they can quickly “go live” with more accurate demand forecasts and estimates of reorder points, safety stocks, and order suggestions.  Their attention can be focused on getting accurate usage and supplier lead time data. The “political” part of the job can be limited to obtaining organization consensus on service level targets and inventory budgets.


       

        The Role of Trust in the Demand Forecasting Process Part 2: What do you Trust?

        “Regardless of how much effort is poured into training forecasters and developing elaborate forecast support systems, decision-makers will either modify or discard the predictions if they do not trust them.”  — Dilek Onkal, International Journal of Forecasting 38:3 (July-September 2022), p.802.

        The words quoted above grabbed my attention and prompted this post. Those of a geekly persuasion, like your blogger, are inclined to think of forecasting as a statistical problem. While that is obviously true, those of a certain age, like your blogger, understand that forecasting is also a social activity and therefore has a large human component.

        What Do You Trust?

        There is a related dimension of trust: not who do you trust but what do you trust? By this, I mean both data and software.

        Trust in Data

        Trust in data underpins trust in the forecaster using the data. Most of our customers have their data in an ERP system. This data must be understood as a key corporate asset. For the data to be trustworthy, it must have the “three C’s”, i.e., it must be correct, complete, and current.

        Correctness is obviously fundamental. We once had a customer who was implementing a new, strong forecasting process, but found the results completely at odds with their sense of what was happening in the business. It turned out that several of their data streams were incorrect by a factor of two, which is a huge error. Of course, this set back the implementation process until they could identify and correct all the gross errors in their demand data.

        There is a less obvious point to be made about correctness. That is, data are random, so what you see now is not likely to be what you see next. Planning production based on the assumption that next week’s demand will be exactly the same as this week’s demand is clearly foolish, but classical formula-based forecasting models like the exponential smoothing mentioned above will project the same number throughout the forecast horizon. This is where scenario-based planning is essential for coping with the inevitable fluctuations in key variables such as customers’ demands and suppliers’ replenishment lead times.

        Completeness is the second requirement for data to be trusted. Our software ultimately gets much of its value from exposing the links between operational decisions (e.g., selecting the reorder points governing replenishment of stock) and business-related metrics like inventory costs. Yet often implementation of forecasting software is delayed because item demand information is available someplace, but holding, ordering and/or shortage costs are not.  Or, to cite another recent example, a customer was able to properly size only half their inventory of spares for reparable parts because nobody had been tracking when the other half was breaking down, meaning there was no information on mean time before failure (MTBF), meaning it was not possible to model the breakdown behavior of half the fleet of reparable spares.

        Finally, the currency of data matters. As the speed of business increases and company planning cycles drop from a quarterly or monthly tempo to a weekly or daily tempo, it becomes desirable to exploit the agility provided by overnight uploads of daily transactional data into the cloud. This allows high-frequency adjustments of forecasts and/or inventory control parameters for items that experience high volatility and sudden shifts in demand. The fresher the data, the more trustworthy the analysis.

        Trust in Demand Forecasting Software

        Even with high-quality data, forecasters must still trust the analytical software that processes the data. This trust must extend to both the software itself and to the computational environment in which it functions.

        If forecasters use on-premises software, they must rely on their own IT departments to safeguard the data and keep it available for use. If they wish instead to exploit the power of cloud-based analytics, customers must trust their confidential information to their software vendors. Professional-level software, such as ours, justifies customers’ trust through SOC 2 certification. SOC 2 certification was developed by the American Institute of CPAs and defines criteria for managing customer data based on five “trust service principles”: security, availability, processing integrity, confidentiality, and privacy.

        What about the software itself? What is needed to make it trustworthy? The main criteria here are the correctness of algorithms and functional reliability. If the vendor has a professional program development process, there will be little chance that the software ends up computing the wrong numbers because of a programming error. And if the vendor has a rigorous quality assurance process, there will be little chance that the software will crash just when the forecaster is on deadline or must deal with a pop-up analysis for a special situation.

        Summary

        To be useful, forecasters and their forecasts must be trusted by decision-makers. That trust depends on characteristics of forecasters and their processes and communication. It also depends on the quality of the data and software used in creating the forecasts.

         

        Read the 1st part of this Blog “Who do you Trust” here: https://smartcorp.com/forecasting/the-role-of-trust-in-the-demand-forecasting-process-part-1-who/