The top 3 reasons why your spreadsheet won’t work for optimizing reorder points on spare parts

We often encounter Excel-based reorder point planning methods.  In this post, we’ve detailed an approach that a customer used prior to proceeding with Smart.  We describe how their spreadsheet worked, the statistical approaches it relied on, the steps planners went through each planning cycle, and their stated motivations for using (and really liking) this internally developed spreadsheet.

Their monthly process consisted of updating a new month of actuals into the “reorder point sheet.”  An embedded formula recomputed the Reorder Point (ROP) and order-up-to (Max) level.  It worked like this:

  • ROP = LT Demand + Safety Stock
  • LT Demand = average daily demand x lead time days (assumed constant to keep things simple)
  • Safety Stock for long lead time parts = Standard deviation x 2.0
  • Safety Stock for short lead time parts = Standard deviation x 1.2
  • Max = ROP + supplier-dictated Minimum Order Quantity

Historical averages and standard deviations used a 52-week rolling history (i.e., the newest week replaced the oldest week each period).  The standard deviation of demand was computed using Excel's STDEVP function (the population standard deviation).
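To make the spreadsheet's logic concrete, here is a minimal sketch in Python. The demand history, lead time, MOQ, and the cutoff between "long" and "short" lead times are hypothetical; only the formulas mirror the description above.

```python
import numpy as np

# Hypothetical 52 weeks of demand history (the sheet's rolling window)
rng = np.random.default_rng(42)
weekly_demand = rng.poisson(3, size=52)

lead_time_days = 45  # assumed constant, per the spreadsheet
moq = 25             # supplier-dictated minimum order quantity

avg_daily_demand = weekly_demand.mean() / 7
lt_demand = avg_daily_demand * lead_time_days

# Excel's STDEVP is the population standard deviation (ddof=0)
sigma = weekly_demand.std(ddof=0)

# 2.0x sigma for long lead time parts, 1.2x for short
# (the 30-day cutoff is an assumption for illustration)
multiplier = 2.0 if lead_time_days > 30 else 1.2
safety_stock = multiplier * sigma

rop = lt_demand + safety_stock  # ROP = LT Demand + Safety Stock
max_level = rop + moq           # Max = ROP + MOQ
print(f"ROP = {rop:.1f}, Max = {max_level:.1f}")
```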

Every month the ROP was recomputed: the newest weeks of demand shifted both the average and the standard deviation, which in turn updated the ROP.

The default ROP is always based on the above logic. However, planners would make changes under certain conditions:

1. Planners would increase the Min (i.e., the ROP) on inexpensive parts to reduce the risk of taking an on-time delivery (OTD) hit on a part that costs little to stock.

2. The Excel sheet identified any part with a newly calculated ROP that was ± 20% different from the current ROP.

3. Planners reviewed parts that exceeded the exception threshold, proposed changes, and got a manager to approve.

4. Planners reviewed items with OTD hits and increased the ROP based on their intuition. Planners continued to monitor those parts for several periods and lowered the ROP when they felt it was safe.

5. Once the ROP and Max quantity were determined, the file of revised results was sent to IT, who uploaded into their ERP.

6. The ERP system then managed daily replenishment and order management.

Objectively, this was perhaps an above-average approach to inventory management. For instance, some companies are unaware of the link between demand variability and safety stock requirements and rely exclusively on rules of thumb or intuition.  However, there were problems with their approach:

1. Manual data updates
The spreadsheets required manual updating. Recomputation involved multiple steps, each with its own dependencies. First, a data dump had to be run from the ERP system.  Second, a planner had to open the spreadsheet and review it to make sure the data imported properly.  Third, they had to review the output to make sure it calculated as expected.  Fourth, manual steps were required to push the results back to the ERP system.

2. One Size Fits All Safety Stock
Or in this case, “one of two sizes fits all.”  Using 2.0 and 1.2 standard deviations for long and short lead time items respectively equates to service level targets of 97.7% and 88.4%.  This is a big problem, since not every part in each group requires the same service level.  Some parts have higher stockout pain than others, so service levels should be specified accordingly, commensurate with the importance of the item.  We discovered that they were experiencing OTD hits on roughly 20% of their critical spare parts, which necessitated manual overrides of the ROP.  The root cause was that every short lead time item was planned to an 88.4% service level target, so the best they could do was stock out 12% of the time even when “on plan.”  It would have been better to set service level targets according to the importance of each part.
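The service levels implied by those multipliers are simply the Normal CDF evaluated at each z-value, which is easy to verify:

```python
from scipy.stats import norm

# Under a Normal demand assumption, a safety stock multiplier z
# implies a cycle service level of norm.cdf(z)
for z in (2.0, 1.2):
    print(f"z = {z:.1f} -> implied service level = {norm.cdf(z):.2%}")
# z = 2.0 -> implied service level = 97.72%
# z = 1.2 -> implied service level = 88.49%
```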

3. Safety stock is inaccurate.  The items being planned by this company are spare parts supporting diagnostic equipment.  Demand on most of these parts is very intermittent and sporadic, so using an average to compute lead time demand wasn't unreasonable if you accept ignoring variability in lead times.  However, relying on a Normal distribution to set safety stock was a big mistake that produced inaccurate safety stocks.  The company stated that its service levels on long lead time items ran in the 90% range against a 97.7% target, and that they made up the difference with expedites.  Achieved service levels on shorter lead time items were about 80%, despite an 88.4% target.  Safety stock was computed incorrectly because their demand isn't “bell shaped,” yet the multipliers assumed it was.  This simplification causes missed service level targets, forcing manual review of many items that then must be “monitored for several periods” by a planner.  Wouldn't it be better to set a reorder point that meets the exact service level you want from the start?  That would ensure you hit your service levels while minimizing unneeded manual intervention.
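To see why the Normal assumption misses on intermittent demand, consider the generic bootstrap sketch below (a simplified illustration, not Smart's patented method) with hypothetical data. It compares a Normal-theory reorder point with an empirical quantile of simulated lead-time demand, then measures the service level the Normal-theory ROP actually delivers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical intermittent demand: mostly zeros, occasional spikes
daily_demand = rng.binomial(1, 0.10, size=1000) * rng.poisson(8, size=1000)
lead_time_days = 10
target = 0.977  # desired service level

# Simulate many lead-time demand totals by resampling daily history
lt_totals = np.array([rng.choice(daily_demand, size=lead_time_days).sum()
                      for _ in range(10_000)])

rop_normal = lt_totals.mean() + 2.0 * lt_totals.std(ddof=0)
rop_empirical = np.quantile(lt_totals, target)

print(f"Normal-theory ROP:   {rop_normal:.1f}")
print(f"Empirical 97.7% ROP: {rop_empirical:.1f}")
print(f"Service achieved at Normal ROP: {(lt_totals <= rop_normal).mean():.1%}")
```

On skewed, intermittent demand like this, the Normal-theory ROP typically sits below the true 97.7th percentile, so the achieved service level falls short of target, which is the same direction of shortfall this customer reported.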

There is a fourth issue that didn't make the list but is worth mentioning: the spreadsheet was unable to track trend or seasonal patterns.  Historical averages ignore trend and seasonality, so the cumulative demand over the lead time used in the ROP will be substantially less accurate for trending or seasonal parts.  The planning team acknowledged this but didn't consider it a real issue, reasoning that most of their demand was intermittent and not seasonal.  A model should pick up trend and seasonality in intermittent data when they exist, but we didn't find that their data exhibited these patterns, so we agreed this wasn't an issue for them.  But as planning tempo increases to the point that demand is bucketed daily, even intermittent demand very often turns out to have day-of-week and sometimes week-of-month seasonality. If you don't run at a higher frequency now, be aware that you may be forced to do so soon to keep up with more agile competition. At that point, spreadsheet-based processing simply won't be able to keep up.

In conclusion, don’t use spreadsheets. They are not conducive to meaningful what-if analyses, they are too labor-intensive, and the underlying logic must be dumbed down to process quickly enough to be useful.  In short, go with purpose-built solutions. And make sure they run in the cloud.

 

Spare Parts Planning Software Solutions

Smart IP&O’s service parts forecasting software uses a unique empirical probabilistic forecasting approach that is engineered for intermittent demand. For consumable spare parts, our patented and APICS award-winning method rapidly generates tens of thousands of demand scenarios without relying on the assumptions about the nature of demand distributions implicit in traditional forecasting methods. The result is highly accurate estimates of safety stock, reorder points, and service levels, which leads to higher service levels and lower inventory costs. For repairable spare parts, Smart’s Repair and Return Module accurately simulates the processes of part breakdown and repair. It predicts downtime, service levels, and inventory costs associated with the current rotating spare parts pool. Planners will know how many spares to stock to achieve short- and long-term service level requirements and, in operational settings, whether to wait for repairs to be completed and returned to service or to purchase additional service spares from suppliers, avoiding unnecessary buying and equipment downtime.

Contact us to learn more about how this functionality has helped our customers in the MRO, Field Service, Utility, Mining, and Public Transportation sectors to optimize their inventory. You can also download the Whitepaper here.

 

 

White Paper: What You Need to Know About Forecasting and Planning Service Parts

 

This paper describes Smart Software’s patented methodology for forecasting demand, safety stocks, and reorder points on items such as service parts and components with intermittent demand, and provides several examples of customer success.

 

    Why Spare Parts Tradeoff Curves are Mission-Critical for Parts Planning

    I’ll bet your maintenance and repair teams would be OK with incurring higher stockout risks on some spare parts if they knew that the inventory reduction savings would be used to spread the inventory investment more effectively across other parts and boost overall service levels.

    I’ll double down that your Finance team, despite always being challenged with lowering costs, would support a healthy inventory increase if they could clearly see that the revenue benefits from increased uptime, fewer expedites, and service level improvements clearly outweighed the additional inventory costs and risk.

    A spare parts tradeoff curve will enable service parts planning teams to properly communicate the risks and costs of each inventory decision.  It is mission critical for parts planning and the only way to adjust stocking parameters proactively and accurately for each part.  Without it, planners, for all intents and purposes, are “planning” with blinders on because they won’t be able to communicate the true tradeoffs associated with stocking decisions.

    For example, if an increase to the min/max levels of an important commodity group of service parts is proposed, how do you know whether the increase is too high, too low, or just right?  How can you fine-tune the change for thousands of spares?  You won’t and you can’t.  Your inventory decision-making will rely on reactive, gut-feel, broad-brush judgments, causing service levels to suffer and inventory costs to balloon.

    So, what exactly is a spare parts tradeoff curve anyway?

    It’s a fact-based, numerically driven prediction that details how changes in stocking levels will influence inventory value, holding costs, and service levels.  For each unit change in inventory level there is a cost and a benefit.  The spare parts tradeoff curve identifies these costs and benefits across different stocking levels. It lets planners discover the stock level that best balances the costs and benefits for each individual item.

    Here are two simplified examples. In Figure 1, the spare parts tradeoff curve shows how the service level (the probability of not stocking out) changes depending on the reorder level.  The higher the reorder level, the lower the stockout risk.  It is critical to know how much service you are gaining for the inventory investment.  Here you may be able to justify that an increase in the reorder point from 35 to 45 is well worth the investment of 10 additional units of stock, because the service level jumps from just under 70% to 90%, cutting your stockout risk for the spare part from 30% to 10%!

     


    Figure 1: Cost versus Service Level

     


    Figure 2: Service Level versus Size of Inventory

    In this example (Figure 2), the tradeoff curve exposes a common problem with spare parts inventory.  Often stock levels are so high that they generate negative returns.  After a certain stocking quantity, each additional unit of stock does not buy more benefit in the form of a higher service level.  Inventory decreases can be justified when it is clear the stock level is well past the point of diminishing returns. An accurate tradeoff curve will expose the point where it is no longer advantageous to add stock.
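    A tradeoff curve like the ones in these figures can be generated directly from demand scenarios. Below is a small sketch with hypothetical demand, lead time, and unit cost: for each candidate reorder point it estimates the service level and the inventory investment, making the point of diminishing returns visible.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical intermittent daily demand and a 14-day lead time
    daily_demand = rng.binomial(1, 0.15, size=730) * rng.poisson(5, size=730)
    lt_totals = np.array([rng.choice(daily_demand, size=14).sum()
                          for _ in range(20_000)])

    unit_cost = 120.0
    for rop in range(0, 55, 5):
        service = (lt_totals <= rop).mean()
        print(f"ROP {rop:3d}   investment ~${rop * unit_cost:8,.0f}   "
              f"service {service:6.1%}")
    ```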

    By leveraging probabilistic forecasting to drive parts planning, you can communicate these tradeoffs accurately, do so at scale across hundreds of thousands of parts, avoid bad inventory decisions, and balance service levels and costs.  At Smart Software, we specialize in helping spare parts planners, Directors of Materials Management, and financial executives managing MRO, spare parts, and aftermarket parts to understand and exploit these relationships.

     


      Spare Parts Planning Isn’t as Hard as You Think

      When managing service parts, you don’t know what will break and when because part failures are random and sudden. As a result, demand patterns are most often extremely intermittent and lack significant trend or seasonal structure. The number of part-by-location combinations is often in the hundreds of thousands, so it’s not feasible to manually review demand for individual parts. Nevertheless, it is much more straightforward to implement a planning and forecasting system to support spare parts planning than you might think.

      This conclusion is informed by hundreds of software implementations we’ve directed over the years. Customers managing spare parts and service parts (the latter for internal consumption/MRO), and to a lesser degree aftermarket parts (for resale to installed bases), have consistently implemented our parts planning software faster than their peers in manufacturing and distribution.

      The primary reason is the role that business knowledge about the future plays in manufacturing and distribution. In a traditional B2B manufacturing and distribution environment, there are customers and there are sales and marketing teams selling to those customers. There are sales goals, revenue expectations, and budgets. This means there is a lot of business knowledge about what will be purchased, what will be promoted, and whose opinions need to be accounted for, so a complex planning loop is required. In contrast, when managing spare parts, you have a maintenance team that fixes equipment when it breaks. Though there are often maintenance schedules for guidance, what is needed beyond a standard list of consumable parts is often unknown until a maintenance person is on-site. In other words, there just isn’t the same sort of business knowledge available to parts planners when making stocking decisions.

      Yes, that is a disadvantage, but it also has an upside: there is no need to produce a period-by-period consensus demand forecast with all the work that requires. When planning spare parts, you can usually skip many steps required for a typical manufacturer, distributor, or retailer. These skippable steps include:  

      1. Building forecasts at different levels of the business, such as product family or region.
      2. Sharing the demand forecast with sales, marketing, and customers.
      3. Reviewing forecast overrides from sales, marketing, and customers.
      4. Agreeing on a consensus forecast that combines statistics and business knowledge.
      5. Measuring “forecast value add” to determine if overrides make the forecast more accurate.
      6. Adjusting the demand forecast for known future promotions.
      7. Accounting for cannibalization (i.e., if I sell more of product A, I’ll sell less of product B).

      Freed from a consensus-building process, spare parts planners and inventory managers can rely directly on their software to predict usage and the required stocking policies. If they have access to a field-proven solution that addresses intermittent demand, they can quickly “go live” with more accurate demand forecasts and estimates of reorder points, safety stocks, and order suggestions.  Their attention can be focused on getting accurate usage and supplier lead time data. The “political” part of the job can be limited to obtaining organization consensus on service level targets and inventory budgets.


       

        Service-Level-Driven Planning for Service Parts Businesses

        Service-Level-Driven Service Parts Planning is a four-step process that extends beyond simplified forecasting and rule-of-thumb safety stocks. It provides service parts planners with data-driven, risk-adjusted decision support.

        Step 1. Ensure that all stakeholders agree on the metrics that matter. All participants in the service parts inventory planning process must agree on the definitions and on which metrics matter most to the organization:

        • Service Levels detail the percentage of time you can completely satisfy required usage without stocking out.
        • Fill Rates detail the percentage of the requested usage that is immediately filled from stock. (To learn more about the differences between service levels and fill rates, watch this 4-minute lesson here.)
        • Availability details the percentage of active spare parts that have an on-hand inventory of at least one unit.
        • Holding costs are the annualized costs of holding stock, accounting for obsolescence, taxes, interest, warehousing, and other expenses.
        • Shortage costs are the costs of running out of stock, including vehicle/equipment downtime, expedites, lost sales, and more.
        • Ordering costs are the costs associated with placing and receiving replenishment orders.

        Step 2. Benchmark historical and predicted current service level performance. All participants in the service parts inventory planning process must hold a common understanding of predicted future service levels, fill rates, and costs and their implications for your service parts operations. It is critical to measure both historical Key Performance Indicators (KPIs) and their predictive equivalents, Key Performance Predictions (KPPs). With modern software, you can benchmark past performance and use probabilistic forecasting methods to simulate future performance. By stress-testing your current inventory stocking policies against all plausible scenarios of future demand, you will know ahead of time how current and proposed stocking policies are likely to perform.
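        As an illustration of a Key Performance Prediction, the sketch below stress-tests a given reorder point against a set of lead-time demand scenarios. It is deliberately simplified (scenario generation, lead time variability, and replenishment timing are out of scope), and all names and numbers are hypothetical.

        ```python
        import numpy as np

        def predicted_service_level(rop: int, lt_demand_scenarios: np.ndarray) -> float:
            """Share of lead-time demand scenarios fully covered by the ROP."""
            return float((lt_demand_scenarios <= rop).mean())

        rng = np.random.default_rng(7)
        scenarios = rng.poisson(12, size=50_000)  # stand-in for real demand scenarios

        current_rop = 15
        print(f"Predicted service level at ROP {current_rop}: "
              f"{predicted_service_level(current_rop, scenarios):.1%}")
        ```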

        Step 3. Agree on targeted service levels for each spare part and take proactive corrective action when targets are predicted to be missed. Parts planners, supply chain leadership, and the mechanical/maintenance teams should agree on desired service level targets with a full understanding of the tradeoffs between stockout risk and inventory cost. By leveraging what-if scenarios in modern parts planning software, it is possible to compare alternative stocking policies and identify those that best meet business objectives. Agree on what degree of stockout risk is acceptable for each part or class of parts. Likewise, determine inventory budgets and other cost constraints. Once these limits are agreed upon, take immediate action to avoid stockouts and excess inventory before they occur. Use your software to automatically upload modified reorder points, safety stock levels, and/or Min/Max parameters to your Enterprise Resource Planning (ERP) or Enterprise Asset Management (EAM) system to adjust daily parts purchasing.

        Step 4. Make it so and keep it so. Empower the planning team with the knowledge and tools it needs to strike the agreed-upon balance between service levels and costs by driving your ordering process with optimized inputs (forecasts, reorder points, order quantities, safety stocks). Track your KPIs and use your software to identify and address exceptions. Don’t let reorder points grow stale and outdated.  Recalibrate the stocking policies each planning cycle (at least once monthly) using up-to-date usage history, supplier lead times, and costs. Remember: recalibration of your service parts inventory policy is preventive maintenance against both stockouts and excess stock.

        Implementing Demand Planning and Inventory Optimization Software with the Right Data

        Data verification and validation are essential to the successful implementation of software that performs statistical analysis of data, like Smart IP&O.  This article describes the issue and serves as a practical guide to doing the job right, especially for the user of the new application.

        The less experience your organization has in validating historical transactions or item master attributes, the more likely it is that there are problems or mistakes with data entry into the ERP that have so far gone unnoticed.  The garbage-in, garbage-out rule means you need to prioritize this step of the software onboarding process or risk delay and possible failure to generate ROI.

        Ultimately the best person to confirm that data in your ERP is entered correctly is the person who knows the business and can assert, for example, “this part doesn’t belong to that product group.”  That’s usually the same person who will open and use Smart.  A database administrator or IT support can also play a key role by being able to say, “This part was assigned to that product group last December by Jane Smith.”  Ensuring data is correct may not be a regular part of your day job, but it can be broken down into manageable small tasks that a good project manager will allocate the time and resources to complete.

        The demand planning software vendor receiving the data also has a role.  They will confirm that the raw data was ingested without issue. The vendor can also identify abnormalities in the raw data files that point to the need for validation.  But relying on the software vendor to reassure you the data looks fine is not enough.  You don’t want to discover, after go-live, that you can’t trust the output because some of the data “doesn’t make sense.”

        Each step in the data flow needs verification and validation.  Verification means the data at one step is still the same after flowing to the next step.  Validation means the data is correct and usable for analysis.

        The most common data flow looks like this:

        (Figure: the data flow, from ERP master data, to interfacing files, to a mirrored database in the cloud, to the application.)

        Less commonly, the first step, between ERP master data and the interfacing files, can be bypassed when files are not used as an interface.  Instead, an API built by IT or the inventory optimization software vendor writes data directly from the ERP to the mirrored database in the cloud.  The vendor works with IT to confirm the API is working as expected.  Even in that case, the first validation step can still be performed: after ingesting the data, the vendor can make the mirrored data available in files for DBA/IT verification and business validation.

        Confirming that the mirrored data in the cloud completes the flow into the application is the responsibility of the software-as-a-service vendor.  SaaS vendors continually test that the software works correctly between the front-end application their subscribers see and the back-end data in the cloud database.  If subscribers still think the data doesn’t make sense in the application, even after validating the interfacing files before going live, that is an issue to raise with the vendor’s customer support.

        However the interfacing files are obtained, the largest part of verification and validation falls to the project manager and their team.  They must allocate resources to test the interfacing files to confirm that:

        1. They match the data in the ERP, and that all of (and only) the ERP data necessary for the application was extracted.
        2. Nothing “jumps out” to the business as incorrect for any of the types of information in the data.
        3. They are formatted as expected.

         

        DBA/IT Verification Tasks

        1. Test the extract:

        IT’s verification step can be done with various tools: comparing files, or importing files back into the database as temporary tables and joining them with the original data to confirm a match.  IT can depend on a query to pull the requested data into a file, but that file can still fail to match.  Delimiters or line returns within the data values can make a file differ from its original database table, because the file relies heavily on delimiters and line returns to identify fields and records, while the table doesn’t rely on those characters to define its structure.
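        One way to script the round trip is sketched below with pandas and a hypothetical SQLite connection (table, file, and column names are placeholders): re-import the extract file and compare it to the source table, which will surface rows broken by embedded delimiters or line returns.

        ```python
        import sqlite3
        import pandas as pd

        connection = sqlite3.connect("erp_mirror.db")  # hypothetical database

        source = pd.read_sql("SELECT * FROM item_master ORDER BY item_id", connection)
        extract = pd.read_csv("item_master_extract.csv")

        if source.shape != extract.shape:
            print(f"Shape mismatch: table {source.shape} vs file {extract.shape}")
        else:
            # Align column order and row order before comparing
            extract = extract[source.columns].sort_values("item_id").reset_index(drop=True)
            source = source.sort_values("item_id").reset_index(drop=True)
            diffs = source.compare(extract)
            print("File matches the table." if diffs.empty else diffs.head(20))
        ```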

        2. No bad characters:

        Free-form data entry fields in the ERP, such as product descriptions, can sometimes themselves contain line returns, tabs, commas, and/or double quotes that affect the structure of the output file.  Line returns should not be allowed in values that will be extracted to a file.  Characters equal to the delimiter should be stripped during extract, or else a different delimiter should be used.

        Tip: if commas are the file delimiter, numbers greater than 999 can’t be extracted with embedded commas. Use “1000” rather than “1,000”.
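        Where the extract tooling allows it, standard CSV quoting is an alternative to stripping characters: values containing commas or double quotes survive intact. A sketch with made-up data (embedded line returns are still best removed at the source):

        ```python
        import csv

        rows = [("PROD-001", 'Valve, 1/2" brass', 1250)]  # description contains a comma and a quote

        with open("products.csv", "w", newline="") as f:
            writer = csv.writer(f, quoting=csv.QUOTE_MINIMAL)
            writer.writerow(["Product", "Description", "Quantity"])
            writer.writerows(rows)
        # Quantities are written as plain integers ("1250", never "1,250"),
        # consistent with the tip above.
        ```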

        3. Confirm the filters:

        The other way that query extracts can return unexpected results is if conditions on the query are entered incorrectly.  The simplest way to avoid mistaken “where clauses” is to not use them.  Extract all data and allow the vendor to filter out some records according to rules supplied by the business.  If this will produce extract files so large that too much computing time is spent on the data exchange, the DBA/IT team should meet with the business to confirm exactly what filters on the data can be applied to avoid exchanging records that are meaningless to the application.

        Tip: Bear in mind that Active/Inactive or item lifecycle information should not be used to filter out records.  This information should be sent to the application so it knows when an item becomes inactive.

        4. Be consistent:

        The extract process must produce files of consistent format every time it is executed.  File names, field names and positions, delimiters, the Excel sheet name if Excel is used, numeric and date formats, and the use of quotes around values should never differ from one execution of the extract to the next.  A hands-off report or stored procedure should be prepared and used for every execution of the extract.

         

        Business Validation Background

        Below is a breakdown of each validation step into its considerations, specifically for the case where the vendor has provided a template format for the interfacing files in which each type of information is provided in its own file.  Files sent from your ERP to Smart are formatted for easy export from the ERP.  That sort of format makes the comparison back to the ERP a relatively simple job for IT, but it can be harder for the business to interpret.  Best practice is to manipulate the ERP data, for example with pivot tables in a spreadsheet.  IT may assist by providing reformatted data files for review by the business.

        To delve into the interfacing files, you’ll need to understand them.  The vendor will supply a precise template, but generally interfacing files fall into three types: catalog data, item attributes, and transactional data.

        • Catalog data contains identifiers and their attributes. Identifiers are typically for products, locations (which could be plants or warehouses), your customers, and your suppliers.
        • Item attributes contain information about products at locations that is needed for analysis of each product/location combination, such as:
          • Current replenishment policy in the form of a Min and Max, Reorder Point, or Review Period and Order Up To value, or Safety Stock
          • Primary supplier assignment and nominal lead time and cost per unit from that supplier
          • Order quantity requirements such as minimum order quantity, manufacturing lot size, or order multiples
          • Active/Inactive status of the product/location combination or flags that identify its state in its lifecycle, such as pre-obsolete
          • Attributes for grouping or filtering, such as assigned buyer/planner or product category
          • Current inventory information like on hand, on order, and in transit quantities.
        • Transactional data contains references to identifiers along with dates and quantities: the quantity sold on a sales order of a product, at a location, for a customer, on a date; the quantity placed on a purchase order of a product, into a location, from a supplier, on a date; or the quantity used in a work order of a component product, at a location, on a date.

         

        Validating Catalog Data

        Considering catalog data first, you may have catalog files similar to these examples:

        (Figure: example Product file showing product identifiers with Status and Group attributes.)

        Location Identifier | Description     | Region | Source Location | etc…
        Location1           | First location  | North  |                 |
        Location2           | Second location | South  | Location1       |
        Location3           | Third location  | South  | Location1       |
        …etc…

         

        Customer Identifier | Description     | SalesPerson | Ship From Location | etc…
        Customer1           | First customer  | Jane        | Location1          |
        Customer2           | Second customer | Jane        | Location3          |
        Customer3           | Third customer  | Joe         | Location2          |
        …etc…

         

        Supplier Identifier | Description     | Status | Typical Lead Time Days | etc…
        Supplier1           | First supplier  | Active | 18                     |
        Supplier2           | Second supplier | Active | 60                     |
        Supplier3           | Third supplier  | Active | 5                      |
        …etc…

         

        1: Check for a reasonable count of catalog records

        For each file of catalog data, open it in a spreadsheet tool like Google Sheets or MS Excel. Answer these questions:

        1. Is the record count in the ballpark? If you have about 50K products, there should not be only 10K rows in its file.
        2. If it’s a short file, maybe the Location file, you can confirm exactly that all expected identifiers are in it.
        3. Filter by each attribute value and confirm again that the count of records with that attribute value makes sense.

        2: Check the correctness of values in each attribute field

        Someone who knows what the products are and what the groups mean needs to take the time to confirm it is actually right, for all the attributes of all the catalog data.

        So, if your Product file contains the attributes as in the example above, you would filter for Status of Active, and check that all resulting products are actually active.  Then filter for Status of Inactive and check that all resulting products are actually inactive.  Then filter for the first Group value and confirm all resulting products are in that group.  Repeat for Group2 and Group3, etc.  Then repeat for every attribute in every file.

        It can help to do this validation with a comparison to an already existing and trusted report.  If you have another spreadsheet that shows products by Group for any reason, you can compare the interfacing files to it.  You may need to familiarize yourself with the VLOOKUP function that helps with spreadsheet comparison.
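        For those comfortable scripting, the same comparison can be done with a pandas merge, which plays the role of VLOOKUP; the file and column names here are hypothetical.

        ```python
        import pandas as pd

        products = pd.read_csv("product_catalog_extract.csv")
        trusted = pd.read_excel("products_by_group_report.xlsx")  # existing, trusted report

        # Record counts per attribute value (the filter-and-count checks above)
        print(products["Group"].value_counts(dropna=False))

        # A left merge lines up each product's Group in both sources
        merged = trusted.merge(products, on="Product Identifier",
                               how="left", suffixes=("_report", "_extract"))
        mismatches = merged[merged["Group_report"] != merged["Group_extract"]]
        print(f"{len(mismatches)} products with mismatched Group assignments")
        ```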

        Validating Item Attribute Data

        1: Check for a reasonable count of item records

        The item attribute data confirmation is similar to the catalog data.  Confirm the product/location combination count makes sense in total and for each of the unique item attributes, one by one. This is an example item data file:

        (Figure: example item attribute file.)

        2: Find and explain weird numbers in item file

        There tend to be many numerical values in the item attributes, so “weird” numbers merit review.  To validate data for a numerical attribute in any file, search for where the number is:

        • Missing entirely
        • Equal to zero
        • Less than zero
        • More than most others, or less than most others (sort by that column)
        • Not a number at all, when it should be
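        That checklist is straightforward to automate for each numeric column; a sketch with hypothetical file and column names:

        ```python
        import pandas as pd

        items = pd.read_csv("item_attributes_extract.csv")
        raw = items["Lead Time Days"]
        nums = pd.to_numeric(raw, errors="coerce")  # non-numbers become NaN

        print("Missing entirely:", raw.isna().sum())
        print("Not a number:    ", (nums.isna() & raw.notna()).sum())
        print("Equal to zero:   ", (nums == 0).sum())
        print("Less than zero:  ", (nums < 0).sum())
        print("Largest values:\n", nums.nlargest(5))
        print("Smallest values:\n", nums.nsmallest(5))
        ```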

        A special consideration for files that are not catalog files is that they may show only identifiers, not the descriptions of the products and locations, and identifiers alone can be meaningless to you.  You can insert columns to hold the product and location descriptors you are used to seeing and fill them into the spreadsheet to assist your work.  The VLOOKUP function works for this as well.  Whether or not you have another report to compare the Items file to, you have the catalog files for Products and Locations, which show both the identifier and the description for each row.

        3: Spot check

        If there are too many attribute values to check manually in a reasonable amount of time, spot checking is a solution, and it can be done in a manner likely to pick up on any problems.  For each attribute, get the list of unique values in its column: copy the column into a new sheet, then use the Remove Duplicates function to see the list of possible values.  With it:

        1. Confirm that no attribute values are present that shouldn’t be.
        2. It can be harder to remember which attribute values are missing that should be there, so it can help to look at another source to remind you. For example, if Group1 through Group12 are present, you might check another source to remember if these are all the Groups possible.  Even if it is not required for the interfacing files for the application, it may be easy for IT to extract a list of all the possible Groups that are in your ERP which you can use for the validation exercise.  If you find extra or missing values that you don’t expect, bring an example of each to IT to investigate.
        3. Sort alphabetically and scan down to see if any two values are similar but slightly different, maybe only in punctuation, which could mean one record had the attribute data entered incorrectly.
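        The unique-value listing can also be scripted; printing with repr() exposes the stray-space and punctuation differences that step 3 warns about (hypothetical file and column names):

        ```python
        import pandas as pd

        products = pd.read_csv("product_catalog_extract.csv")
        for value in sorted(products["Group"].dropna().unique()):
            print(repr(value))  # e.g. 'Group1' vs 'Group1 ' vs 'group1'
        ```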

        For each type of item, maybe one from each product group and/or location, check that all its attributes in every file are correct or at least pass a sanity check.  The more you can spot check from a broad range of items, the less likely you will have issues post go-live.

         

        Validating Transactional Data

        Transactional files may all have a format similar to this:

        (Figure: example transactional file.)

         

        1: Find and explain weird numbers in each transactional file

        These should be checked for “weird” numbers in the Quantity field.  Then you can proceed to:

        1. Filter for dates outside the range you expect or missing expected dates entirely.
        2. Find where Transaction identifiers and line numbers are missing. They shouldn’t be.
        3. If there is more than one record for a given Transaction ID and Transaction Line Number combination, is that a mistake? Put another way, should duplicate records have their quantities summed together or is that double counting?

        2: Sanity check summed quantities

        Do a sanity check by filtering to a particular product you’re familiar with and a relatable date range, such as last month or last year, and summing the quantities.  Is the total what you expected for that product in that time frame?  If you have information on total usage out of a location, you can slice the data that way to sum the quantities and compare to what you expect.  Pivot tables come in handy for verification of transactional data.  With them, you can view the data like:

        Product | Year | Quantity Total
        Prod1   | 2022 | 9,034
        Prod1   | 2021 | 8,837
        …etc…
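        The same view is a one-line pivot in pandas, and switching the aggregation from a sum to a count gives the PO-count check described below (hypothetical file and column names):

        ```python
        import pandas as pd

        tx = pd.read_csv("sales_transactions_extract.csv", parse_dates=["Date"])
        tx["Year"] = tx["Date"].dt.year

        totals = tx.pivot_table(index=["Product", "Year"],
                                values="Quantity", aggfunc="sum")
        counts = tx.pivot_table(index=["Product", "Year"],
                                values="Quantity", aggfunc="count")
        print(totals.head())
        print(counts.head())
        ```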

         

        The products’ yearly totals may be simple to sanity check if you know the products well.  Or you can use VLOOKUP to add attributes, such as product group, and pivot on that to see a higher, more familiar level:

        Product Group | Year | Quantity Total
        Group1        | 2022 | 22,091
        Group2        | 2021 | 17,494
        …etc…

         

        3: Sanity check count of records

        It may help to display a count of transactions rather than a sum of the quantities, especially for purchase order data.  Such as:

        Product | Year | Number of POs
        Prod1   | 2022 | 4
        Prod1   | 2021 | 1
        …etc…

         

        And/or the same summarization at a higher level, like:

        Product Group | Year | Number of POs
        Group1        | 2022 | 609
        Group2        | 2021 | 40
        …etc…

         

        4: Spot checking

        Spot checking the correctness of a single transaction, for each type of item and each type of transaction, completes due diligence.  Pay special attention to which date is tied to the transaction and whether it is right for the analysis.  Dates may be a creation date, like the date a customer placed an order with you; a promise date, like the date you expected to deliver on the customer’s order at the time of creating it; or a fulfillment date, when you actually delivered on the order.  Sometimes a promise date gets modified days after creating the order if it can’t be met.  Make sure the date in use most closely reflects the customer’s actual demand for the product.

        What to do about bad data 

        If the mis-entries are few or one-off, you can edit the ERP records by hand as they are found, cleaning up your catalog attributes, even after go-live with the application.  But if large swathes of attributes or transaction quantities are off, this can spur an internal project to re-enter data correctly and possibly to change or start to document the process that needs to be followed when new records are entered into your ERP.

        Care must be taken to avoid too long a delay in implementation of the SaaS application while waiting on clean attributes.  Break the work into chunks and use the application to analyze the clean data first so the data cleansing project occurs in parallel with getting value out of the new application.