How to interpret and manipulate forecast results with different forecast methods

Smart IP&O is powered by the SmartForecasts® forecasting engine, which automatically selects the most appropriate method for each item.  The available methods are listed below:

  • Simple Moving Average and Single Exponential Smoothing for flat, noisy data
  • Linear Moving Average and Double Exponential Smoothing for trending data
  • Winters Additive and Winters Multiplicative for seasonal and seasonal & trending data.

This blog explains how each model works using time plots of historical and forecast data, and outlines how to choose which model to use.  The examples below show the same history (in red) forecasted with each method (in dark green), compared to the Smart-chosen winning method (in light green).

 

Seasonality
If you want to force (or prevent) seasonality to show in the forecast, then restrict the chosen methods to (or remove) the Winters models.  Both methods require at least two full years of history.

Winters multiplicative will determine the size of the peaks or valleys of seasonal effects based on a percentage difference from a trending average volume.  It is not a good fit for very low volume items because determining that percentage involves dividing by values at or near zero.  Note in the image below that the large percentage drop in seasonal demand in the history is projected to continue over the forecast horizon, making it look like there isn’t any seasonal demand despite using a seasonal method.

 

Statistical forecast produced with the Winters multiplicative method.

 

Winters additive will determine the size of the peaks or valleys of seasonal effects based on a unit difference from the average volume.  It is not a good fit if there’s significant trend in the data.  Note in the image below that seasonality is now being forecasted based on the average unit change, so the forecast still clearly reflects the seasonal pattern despite the downward trend in both the level and the seasonal peaks/valleys.

Statistical forecast produced with the Winters additive method.
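To make the additive/multiplicative distinction concrete, here is a minimal sketch using Python’s statsmodels Holt-Winters implementation on a made-up monthly series with a downward trend.  It only illustrates the two seasonal treatments; it is not Smart’s own engine, and the data and parameter choices are assumptions.

```python
# Illustrative only: Holt-Winters additive vs. multiplicative seasonality via
# statsmodels, on synthetic monthly demand with a downward trend.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

idx = pd.date_range("2021-01-01", periods=36, freq="MS")   # 3 years of monthly history
t = np.arange(36)
history = pd.Series(100 - 1.5 * t + 30 * np.sin(2 * np.pi * t / 12), index=idx)

additive = ExponentialSmoothing(
    history, trend="add", seasonal="add", seasonal_periods=12).fit()
multiplicative = ExponentialSmoothing(
    history, trend="add", seasonal="mul", seasonal_periods=12).fit()

# Additive carries the seasonal swings forward as fixed unit offsets;
# multiplicative scales them with the (declining) level, so its peaks shrink.
print(additive.forecast(12).round(1))
print(multiplicative.forecast(12).round(1))
```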

 

Trend

If you want to force (or prevent) trend up or down to show in the forecast, then restrict the chosen methods to (or remove the methods of) Linear Moving Average and Double Exponential Smoothing.

 Double exponential smoothing will pick up on a long-term trend.  It is not a good fit if there are few historical data points.

Statistical forecast produced with Double Exponential Smoothing.

 

Linear moving average will pick up on nearer-term trends.  It is not a good fit for highly volatile data.

Statistical forecast produced with Linear Moving Average.
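As an illustration of fitting a near-term trend, here is a small sketch that fits a straight line to the most recent N periods and extrapolates it.  This is one common interpretation of a linear moving average, not Smart’s exact implementation; the window size and data are assumptions.

```python
# Illustrative linear moving average: fit a least-squares line to the last
# `window` periods and project it over the horizon.
import numpy as np

def linear_moving_average(history, window=6, horizon=4):
    recent = np.asarray(history[-window:], dtype=float)
    x = np.arange(window)
    slope, intercept = np.polyfit(x, recent, deg=1)     # recent-trend line
    future_x = np.arange(window, window + horizon)
    return intercept + slope * future_x

print(linear_moving_average([50, 52, 55, 54, 58, 60, 63, 61, 66]))
```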

 

Non-Trending and Non-Seasonal Data
If you want to force (or prevent) an average from showing in the forecast, then restrict the chosen methods to (or remove the methods of) Simple Moving Average and Single Exponential Smoothing.

Single exponential smoothing will weigh the most recent data more heavily and produce a flat-line forecast.  It is not a good fit for trending or seasonal data.

Statistical forecast using Single Exponential Smoothing.

Simple moving average will find an average for each period, sometimes appearing to wiggle, and is better for longer-term averaging.  It is not a good fit for trending or seasonal data.

Statistical forecast using Simple Moving Average.
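For the two flat methods, the sketch below contrasts the weighting schemes: exponential smoothing (here via statsmodels) leans on recent data, while the moving average weights the last N periods equally and, when projected recursively, can wiggle before settling.  The data and window size are assumptions for illustration, not Smart’s exact implementation.

```python
# Illustrative flat-line methods on a noisy, non-trending series.
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

history = [102, 98, 110, 95, 105, 99, 103, 97, 108, 101, 96, 104]

# Single Exponential Smoothing: recent periods weighted more heavily, flat forecast
ses = SimpleExpSmoothing(history).fit()
print(ses.forecast(6))

# Simple Moving Average projected recursively: each future period averages the
# previous `window` values (history plus earlier forecasts), hence the "wiggle".
def sma_forecast(history, window=6, horizon=6):
    values = list(history)
    out = []
    for _ in range(horizon):
        nxt = sum(values[-window:]) / window
        out.append(nxt)
        values.append(nxt)
    return out

print(sma_forecast(history))
```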

 

 

 

What to do when a statistical forecast doesn’t make sense

Sometimes a statistical forecast just doesn’t make sense.  Every forecaster has been there.  They may double-check that the data was input correctly or review the model settings but are still left scratching their head over why the forecast looks very unlike the demand history.   When the occasional forecast doesn’t make sense, it can erode confidence in the entire statistical forecasting process.

This blog will help a layman understand what the Smart statistical models are and how they are chosen automatically.  It will address how that choice sometimes fails, how you can know if it did, and what you can do to ensure that the forecasts can always be justified.  It’s important to know what to expect, and how to catch the exceptions, so you can rely on your forecasting system.

 

How methods are chosen automatically

The criterion for automatically choosing one statistical method out of a set is which method came closest to correctly predicting held-out history.  Earlier history is passed to each method and the result is compared to actuals to find the one that came closest overall.  That automatically chosen method is then fed all the history to produce the forecast.  Check out this blog to learn more about the model selection: https://smartcorp.com/uncategorized/statistical-forecasting-how-automatic-method-selection-works/

For most time series, this process can capture trends, seasonality, and average volume accurately.  But sometimes a chosen method comes mathematically closest to predicting the held-out history yet doesn’t project it forward in a way that makes sense.  That means the method the system selected isn’t best for some “hard to forecast” items.
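The sketch below illustrates the tournament idea with three candidate methods from statsmodels: hold out the last few periods, score each method against them, then refit the winner on the full history.  It is a simplified stand-in for Smart’s selection logic, with synthetic data and an arbitrary error metric (mean absolute error).

```python
# Illustrative automatic method selection by held-out history.
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import SimpleExpSmoothing, Holt, ExponentialSmoothing

# Hypothetical 3 years of monthly demand (synthetic, for illustration only)
idx = pd.date_range("2021-01-01", periods=36, freq="MS")
t = np.arange(36)
history = pd.Series(200 + 2 * t + 25 * np.sin(2 * np.pi * t / 12), index=idx)

HOLDOUT = 6
train, test = history[:-HOLDOUT], history[-HOLDOUT:]

candidates = {
    "single_exp_smoothing": lambda h: SimpleExpSmoothing(h).fit(),
    "double_exp_smoothing": lambda h: Holt(h).fit(),
    "winters_additive": lambda h: ExponentialSmoothing(
        h, trend="add", seasonal="add", seasonal_periods=12).fit(),
}

# Score each method on the held-out periods
errors = {name: np.mean(np.abs(fit(train).forecast(HOLDOUT).values - test.values))
          for name, fit in candidates.items()}
winner = min(errors, key=errors.get)

# Refit the winning method on ALL history to produce the working forecast
final_forecast = candidates[winner](history).forecast(12)
print(winner, errors)
```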

 

Hard to forecast items

Hard to forecast items may have large, unpredictable spikes in demand; typically no demand but random, irregular blips; or unusual recent activity.  Noise in the data sometimes randomly wanders up or down, and the automated best-pick method might forecast a runaway trend or a grind down to zero.  In a small percentage of any reasonably varied group of items, it will do worse than common sense.  So, you will need to identify these cases and respond by overriding the forecast or changing the forecast inputs.

 

How to find the exceptions

Best practice is to filter or sort the forecasted items to identify those where the sum of the forecast over the next year is significantly different than the corresponding history last year.  The forecast sum may be much lower than the history or vice versa.  Use supplied metrics to identify these items; then you can choose to apply overrides to the forecast or modify the forecast settings.
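As a concrete example of that filter, the sketch below flags items whose next-12-month forecast total differs from the prior year’s actuals by more than a chosen threshold.  The column names, the 30% threshold, and the tiny data set are assumptions for illustration, not Smart’s supplied metrics.

```python
# Illustrative exception filter: forecast total vs. last year's actual total.
import pandas as pd

df = pd.DataFrame({
    "item": ["A", "B", "C"],
    "forecast_next_12m": [1200, 80, 950],
    "actual_last_12m":   [1150, 400, 940],
})

df["pct_change"] = (df["forecast_next_12m"] - df["actual_last_12m"]) / df["actual_last_12m"]
exceptions = df[df["pct_change"].abs() > 0.30]   # review items off by more than 30%
print(exceptions)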

 

How to fix the exceptions

Often when the forecast seems odd, an averaging method, like Single Exponential Smoothing or even a simple average using Freestyle, will produce a more reasonable forecast.  If trend is possibly valid, you can remove only seasonal methods to avoid a falsely seasonal result.  Or do the opposite and use only seasonal methods if seasonality is expected but wasn’t projected in the default forecast.  You can use the what-if features to create any number of forecasts, evaluate & compare, and continue to fine tune the settings until you are comfortable with the forecast.

Cleaning the history, with or without changing the automatic method selection, is also effective at producing reasonable forecasts.  You can embed forecast parameters that reduce the amount of history used to forecast those items (the number of periods passed into the algorithm) so earlier, outdated history is no longer considered.  You can edit spikes or drops in the demand history that are known anomalies so they don’t influence the outcome.  You can also work with the Smart team to implement automatic outlier detection and removal so that the data is already cleansed of these anomalies before it is forecasted.
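One very simple cleansing rule, shown below only as an illustration (Smart’s own outlier detection is configured with their team), is to cap known demand spikes at a multiple of the item’s typical level before forecasting.

```python
# Illustrative spike-capping: clip demand at 3x the median before forecasting.
import pandas as pd

demand = pd.Series([12, 9, 11, 10, 240, 13, 8, 12])   # 240 is a known anomaly
cleaned = demand.clip(upper=3 * demand.median())
print(cleaned.tolist())
```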

If the demand is truly intermittent, it is going to be nearly impossible to forecast “accurately” per period. If a level-loading average is not acceptable, handling the item by setting inventory policy with a lead-time forecast can be effective.  Alternatively, you may choose to use “same as last year” models which, while not especially accurate, will generally be accepted by the business given the alternative forecasts.

Finally, if the item was introduced so recently that the algorithms do not have enough input to accurately forecast, a simple average or manual forecast may be best.  You can identify new items by filtering on the number of historical periods.

 

Manual selection of methods

Once you have identified rows where the forecast doesn’t make sense to the human eye, you can choose a smaller subset of all methods to allow into the forecast run and compare to history.  Smart will allow you to use a restricted set of methods just for one forecast run or embed the restricted set to use for all forecast runs going forward. Different methods will project the history into the future in different ways.  Having a sense of how each works will help you choose which to allow.

 

Rely on your forecasting tool

The more you use Smart period over period to embed your decisions about how to forecast and what historical data to consider, the less often you will face exceptions as described in this blog.  Entering forecast parameters is a manageable task when starting with critical or high impact items.  Even if you don’t embed any manual decisions on forecast methods, the forecast re-runs every period with new data. So, an item with an odd result today can become easily forecastable in time.

 

 

The Role of Trust in the Demand Forecasting Process Part 2: What do you Trust

“Regardless of how much effort is poured into training forecasters and developing elaborate forecast support systems, decision-makers will either modify or discard the predictions if they do not trust them.”  — Dilek Onkal, International Journal of Forecasting 38:3 (July-September 2022), p.802.

The words quoted above grabbed my attention and prompted this post. Those of a geekly persuasion, like your blogger, are inclined to think of forecasting as a statistical problem. While that is obviously true, those of a certain age, like your blogger, understand that forecasting is also a social activity and therefore has a large human component.

What Do You Trust?

There is a related dimension of trust: not who do you trust but what do you trust? By this, I mean both data and software.

Trust in Data

Trust in data underpins trust in the forecaster using the data. Most of our customers have their data in an ERP system. This data must be understood as a key corporate asset. For the data to be trustworthy, it must have the “three C’s”, i.e., it must be correct, complete, and current.

Correctness is obviously fundamental. We once had a customer who was implementing a new, strong forecasting process, but found the results completely at odds with their sense of what was happening in the business. It turned out that several of their data streams were incorrect by a factor of two, which is a huge error. Of course, this set back the implementation process until they could identify and correct all the gross errors in their demand data.

There is a less obvious point to be made about correctness. That is, data are random, so what you see now is not likely to be what you see next. Planning production based on the assumption that next week’s demand will be exactly the same as this week’s demand is clearly foolish, but classical formula-based forecasting models like the exponential smoothing mentioned above will project the same number throughout the forecast horizon. This is where scenario-based planning is essential for coping with the inevitable fluctuations in key variables such as customers’ demands and suppliers’ replenishment lead times.

Completeness is the second requirement for data to be trusted. Our software ultimately gets much of its value from exposing the links between operational decisions (e.g., selecting the reorder points governing replenishment of stock) and business-related metrics like inventory costs. Yet often implementation of forecasting software is delayed because item demand information is available someplace, but holding, ordering and/or shortage costs are not.  Or, to cite another recent example, a customer was able to properly size only half their inventory of spares for reparable parts because nobody had been tracking when the other half was breaking down; with no information on mean time before failure (MTBF), it was not possible to model the breakdown behavior of that half of the fleet.

Finally, the currency of data matters. As the speed of business increases and company planning cycles drop from a quarterly or monthly tempo to a weekly or daily tempo, it becomes desirable to exploit the agility provided by overnight uploads of daily transactional data into the cloud. This allows high-frequency adjustments of forecasts and/or inventory control parameters for items that experience high volatility and sudden shifts in demand. The fresher the data, the more trustworthy the analysis.

Trust in Demand Forecasting Software

Even with high-quality data, forecasters must still trust the analytical software that processes the data. This trust must extend to both the software itself and to the computational environment in which it functions.

If forecasters use on-premises software, they must rely on their own IT departments to safeguard the data and keep it available for use. If they wish instead to exploit the power of cloud-based analytics, customers must trust their confidential information to their software vendors. Professional-level software, such as ours, justifies customers’ trust through SOC 2 certification. SOC 2 certification was developed by the American Institute of CPAs and defines criteria for managing customer data based on five “trust service principles”—security, availability, processing integrity, confidentiality, and privacy.

What about the software itself? What is needed to make it trustworthy? The main criteria here are the correctness of algorithms and functional reliability. If the vendor has a professional program development process, there will be little chance that the software ends up computing the wrong numbers because of a programming error. And if the vendor has a rigorous quality assurance process, there will be little chance that the software will crash just when the forecaster is on deadline or must deal with a pop-up analysis for a special situation.

Summary

To be useful, forecasters and their forecasts must be trusted by decision-makers. That trust depends on characteristics of forecasters and their processes and communication. It also depends on the quality of the data and software used in creating the forecasts.

 

Read the 1st part of this Blog “Who do you Trust” here: https://smartcorp.com/forecasting/the-role-of-trust-in-the-demand-forecasting-process-part-1-who/

 

 

 

The Role of Trust in the Demand Forecasting Process Part 1: Who do you Trust

 

“Regardless of how much effort is poured into training forecasters and developing elaborate forecast support systems, decision-makers will either modify or discard the predictions if they do not trust them.”  — Dilek Onkal, International Journal of Forecasting 38:3 (July-September 2022), p.802.

The words quoted above grabbed my attention and prompted this post. Those of a geekly persuasion, like your blogger, are inclined to think of forecasting as a statistical problem. While that is obviously true, those of a certain age, like your blogger, understand that forecasting is also a social activity and therefore has a large human component.

Who Do You Trust?

Trust is always a two-way street, but let’s stay on the demand forecaster’s side. What characteristics of and actions by forecasters and demand planners build trust in their work? The above quoted Professor Onkal reviewed academic research on this topic going back to 2006. She summarized results from practitioner surveys that identified key trust factors related to forecaster characteristics, forecasting process, and forecasting communication.

Forecaster characteristics

Key to building trust among the users of forecasts are perceptions of forecaster and demand planner competence and objectivity. Competence has a mathematical component, but many managers confuse computer skills with analytic skills, so users of forecasting software can usually clear this hurdle. However, since the two are not the same, it pays dividends to absorb your vendor’s training and learn not just the math but the lingo of your forecasting software. In my observation, trust can also be increased by showing knowledge of the company’s business.

Objectivity is also a key to trustworthiness. It may be uncomfortable for the forecaster to be put in the middle of occasional departmental squabbles, but those will come up and must be handled with tact. Squabbles? Well, silos exist and tilt in different directions. Sales departments favor higher demand forecasts that drive production increases, so that they never have to say “Sorry, we are fresh out of that.” Inventory managers are wary of high demand forecasts, because “excess enthusiasm” can leave them holding the bag, sitting on bloated inventory.

Sometimes the forecaster becomes a de facto referee, and in this role must display overt signs of objectivity. That can mean first recognizing that every management decision involves tradeoffs of good things against other good things, e.g., product availability versus lean operations, and then helping the parties strike a painful but tolerable balance by surfacing the links between operational decisions and the key performance metrics that matter to folks like Chief Financial Officers.

The Forecasting process

The forecasting process can be thought of as having three phases: data inputs, calculations, and outputs. Actions can be taken to increase trust in each phase.

 

Regarding inputs:

Trust can be increased if obviously relevant inputs are at least acknowledged if not directly used in calculations. Thus, factors like social media sentiment and regional sales managers’ gut instincts can be legitimate parts of a forecast consensus process. However, objectivity requires that these putative predictors of profit be tested objectively. For instance, a professional-grade forecasting process may well include subjective adjustment to statistical forecasts but must then also assess whether the adjustments actually end up improving accuracy, not just making some people feel listened to.

Regarding the second phase, calculations:

The forecaster will be trusted to the extent that they are able to deploy more than one way to calculate forecasts and then articulate a good reason why they chose the method eventually used. In addition, the forecaster should be able to explain in accessible language how even complicated techniques do their job. It is difficult to put trust in a “black box” method that is so opaque as to be inscrutable. The importance of explainability is amplified by the fact of life that the forecaster’s superior must themselves in turn be able to justify the choice of technique to their supervisor.

For instance, exponential smoothing uses this equation: S(t) = αX(t)+(1-α)S(t-1). Many forecasters are familiar with this equation, but many forecast users are not. There is a story that explains the equation in terms of averaging irrelevant “noise” in an item’s demand history and the need to strike a balance between smoothing out noise and being able to react to sudden shifts in the level of demand. The forecaster who can tell that story will be more credible. (My own version of that story uses phrases from sports, i.e., “head fakes” and “jukes”. Finding folksy analogs appropriate to your specific audience always pays dividends.)
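A tiny worked example of that equation shows the trade-off: with a small α the smoothed level barely budges when demand jumps, while a larger α chases the new level quickly.  The demand numbers and α values below are made up for illustration.

```python
# S(t) = alpha*X(t) + (1 - alpha)*S(t-1), applied through a demand history
def exp_smooth(demand, alpha):
    s = demand[0]
    for x in demand[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

demand = [100, 103, 98, 101, 99, 150, 152, 149]   # level shifts up at period 6
print(exp_smooth(demand, alpha=0.1))   # ~114: slow to react, smooths the noise
print(exp_smooth(demand, alpha=0.5))   # ~144: reacts quickly to the shift
```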

A final point: best practice demands that any forecast be accompanied by an honest assessment of its uncertainty.  A forecaster who tries to build trust by being overly specific (“Sales next quarter will be 12,184 units”) will always fail.  A forecaster who says “Sales next quarter will have a 90% chance of falling between 12,000 and 12,300 units” will be both correct more often and more helpful to decision makers. After all, forecasting is essentially a job of risk management, so the decision maker is best served by knowing the risks.

Forecasting communication:

Finally, consider the third phase, communication of forecast results. Research suggests that continual communication with forecast users builds trust. It avoids those horrible, deflating moments when a nicely formatted report is shot down because of some fatal flaw that could have been foreseen: “This is no good because you didn’t take account of X, Y or Z” or “We really wanted you to present results rolled up to the top of the product hierarchies (or by sales region or by product line or…)”.

Even when everybody is aligned as to what is expected, trust is enhanced by presenting results using well-crafted graphics, with massive numerical tables provided for backup but not as the main way of communicating results. My experience has been that, just as a meeting-control device, a graph is usually much better than a large numerical table. With a graph, everybody’s attention is focused on the same thing and many aspects of the analysis are immediately (and literally) visible. With a table of results, the table of participants often splinters into side conversations in which each voice is focused on different pieces of the table.

Onkal summarizes the research this way: “Take-aways for those who make forecasts and those who use them converge around clarity of communication as well as perceptions of competence and integrity.”

What Do You Trust?

There is a related dimension of trust: not who do you trust but what do you trust? By this I mean both data and software….  Read the 2nd part of this Blog “What do you Trust” here  https://smartcorp.com/forecasting/the-role-of-trust-in-the-demand-forecasting-process-part-2-what/

 

 

 

 

Implementing Demand Planning and Inventory Optimization Software with the Right Data

Data verification and validation are essential to the success of the implementation of software that performs statistical analysis of data, like Smart IP&O.  This article describes the issue and serves as a practical guide to doing the job right, especially for the user of the new application.

The less experience your organization has in validating historical transactions or item master attributes, the more likely it is there were problems or mistakes with data entry into the ERP that have so far gone unnoticed.  The garbage in, garbage out rule means you need to prioritize this step of the software onboarding process or risk delay and possible failure to generate ROI.

Ultimately the best person to confirm data in your ERP is entered correctly is the person who knows the business and can assert, for example, “this part doesn’t belong to that product group.”  That’s usually the same person who will open and use Smart, though a database administrator or IT support can also play a key role by being able to say, “This part was assigned to that product group last December by Jane Smith.”  Ensuring data is correct may not be a regular part of your day job, but it can be broken down into manageable small tasks that a good project manager will allocate the time and resources to complete.

The demand planning software vendor receiving the data also has a role.  They will confirm that the raw data was ingested without issue. The vendor can also identify abnormalities in the raw data files that point to the need for validation.  But relying on the software vendor to reassure you the data looks fine is not enough.  You don’t want to discover, after go-live, that you can’t trust the output because some of the data “doesn’t make sense.”

Each step in the data flow needs verification and validation.  Verification means the data at one step is still the same after flowing to the next step.  Validation means the data is correct and usable for analysis.

The most common data flow looks like this:

ERP master data → interfacing files → mirrored database in the cloud → the application

Less commonly, the first step between ERP master data and the interfacing files can be bypassed, where files are not used as an interface.  Instead, an API built by IT or the inventory optimization software vendor writes data directly from the ERP to the mirrored database in the cloud.  The vendor would work with IT to confirm the API is working as expected.  But the first validation step, even in that case, can still be performed.  After ingesting the data, the vendor can make the mirrored data available in files for DBA/IT verification and business validation.

The confirmation that the mirrored data in the cloud completes the flow into the application is the responsibility of the vendor of software as a service.  SaaS vendors continually test that the software works correctly between the front-end application their subscribers see and the back-end data in the cloud database. If the subscribers still think the data doesn’t make sense in the application even after validating the interfacing files before going live, that is an issue to raise with the vendor’s customer support.

However the interfacing files are obtained, the largest part of verification and validation falls to the project manager and their team.  They must resource a test of the interfacing files to confirm:

  1. They match the data in the ERP, and that all (and only) the ERP data needed for use in the application was extracted.
  2. Nothing “jumps out” to the business as incorrect for each of the types of information in the data.
  3. They are formatted as expected.

 

DBA/IT Verification Tasks

  1. Test the extract:

IT’s verification step can be done with various tools, by comparing files or by importing files back to the database as temporary tables and joining them with the original data to confirm a match.  IT can depend on a query to pull the requested data into a file, but that file can fail to match.  The existence of delimiters or line returns within the data values can cause a file to differ from its original database table.  This is because the file relies heavily on delimiters and line returns to identify fields and records, while the table doesn’t rely on those characters to define its structure.
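For teams that prefer scripting to manual file comparison, a sketch like the one below (with hypothetical file names, and assuming the columns line up) reads the extract back and compares it row by row to a trusted re-pull of the source table.

```python
# Hypothetical files: "items_source.csv" is a fresh pull from the ERP table,
# "items_extract.csv" is the interfacing file destined for the vendor.
import pandas as pd

source = pd.read_csv("items_source.csv", dtype=str)
extract = pd.read_csv("items_extract.csv", dtype=str)

# Rows flagged as anything other than "both" are missing or altered on one side
diff = source.merge(extract, how="outer", indicator=True)
print(diff["_merge"].value_counts())
```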

  2. No bad characters:

Free form data entry fields in the ERP, such as product descriptions, can sometimes themselves contain line returns, tabs, commas, and/or double quotes that can affect the structure of the output file.  Line returns should not be allowed in values that will be extracted to a file.  Characters equal to the delimiter should be stripped during extract or else a different delimiter used.

Tip: if commas are the file delimiter, numbers greater than 999 can’t be extracted with an embedded thousands-separator comma.  Use “1000” rather than “1,000”.
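If the extract is scripted, quoting every field and stripping embedded line returns avoids most of these structural problems.  The sketch below uses Python’s csv module with hypothetical field names; note the quantity is written as 1000, not 1,000.

```python
# Quote every field so embedded commas/quotes don't break the file structure,
# and strip line returns from free-form text before writing.
import csv

rows = [("Prod1", 'Widget, 1/2" stainless\nsecond line', 1000)]
with open("products_extract.csv", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    writer.writerow(["item_id", "description", "order_qty"])
    for item_id, description, qty in rows:
        writer.writerow([item_id, description.replace("\n", " "), qty])
```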

  3. Confirm the filters:

The other way that query extracts can return unexpected results is if conditions on the query are entered incorrectly.  The simplest way to avoid mistaken “where clauses” is to not use them.  Extract all data and allow the vendor to filter out some records according to rules supplied by the business.  If this will produce extract files so large that too much computing time is spent on the data exchange, the DBA/IT team should meet with the business to confirm exactly what filters on the data can be applied to avoid exchanging records that are meaningless to the application.

Tip: Bear in mind that Active/Inactive or item lifecycle information should not be used to filter out records.  This information should be sent to the application so it knows when an item becomes inactive.

  4. Be consistent:

The extract process must produce files of consistent format every time it is executed.  File names; field names and positions; the delimiter; the Excel sheet name, if Excel is used; numeric and date formats; and the use of quotes around values should never differ from one execution of the extract to the next.  A hands-off report or stored procedure should be prepared and used for every execution of the extract.

 

Business Validation Background

Below, each validation step is broken down into considerations, specifically for the case where the vendor has provided a template format for the interfacing files in which each type of information is provided in its own file.  Files sent from your ERP to Smart are formatted for easy export from the ERP.  That sort of format makes the comparison back to the ERP a relatively simple job for IT, but it can be harder for the business to interpret.  Best practice is to reshape the ERP data into a more familiar view, for example by using pivot tables or similar tools in a spreadsheet.  IT may assist by providing re-formatted data files for review by the business.

To delve into the interfacing files, you’ll need to understand them.  The vendor will supply a precise template, but generally interfacing files fall into three types: catalog data, item attributes, and transactional data.

  • Catalog data contains identifiers and their attributes. Identifiers are typically for products, locations (which could be plants or warehouses), your customers, and your suppliers.
  • Item attributes contain information about products at locations that are needed for analysis on the product and location combination. Such as:
    • Current replenishment policy in the form of a Min and Max, Reorder Point, or Review Period and Order Up To value, or Safety Stock
    • Primary supplier assignment and nominal lead time and cost per unit from that supplier
    • Order quantity requirements such as minimum order quantity, manufacturing lot size, or order multiples
    • Active/Inactive status of the product/location combination or flags that identify its state in its lifecycle, such as pre-obsolete
    • Attributes for grouping or filtering, such as assigned buyer/planner or product category
    • Current inventory information like on hand, on order, and in transit quantities.
  • Transactional data contains references to identifiers along with dates and quantities. Such as quantity sold in a sales order of a product, at a location, for a customer, on a date.  Or quantity placed on purchase order of a product, into a location, from a supplier, on a date. Or quantity used in a work order of a component product at a location on a date.

 

Validating Catalog Data

Considering catalog data first, you may have catalog files similar to these examples:

(Example Product catalog file with Product Identifier, Description, Status, and Group attributes.)

Location Identifier | Description | Region | Source Location | etc…
Location1 | First location | North | |
Location2 | Second location | South | Location1 |
Location3 | Third location | South | Location1 |
…etc…

 

Customer Identifier | Description | SalesPerson | Ship From Location | etc…
Customer1 | First customer | Jane | Location1 |
Customer2 | Second customer | Jane | Location3 |
Customer3 | Third customer | Joe | Location2 |
…etc…

 

Supplier Identifier | Description | Status | Typical Lead Time Days | etc…
Supplier1 | First supplier | Active | 18 |
Supplier2 | Second supplier | Active | 60 |
Supplier3 | Third Supplier | Active | 5 |
…etc…

 

1: Check for a reasonable count of catalog records

For each file of catalog data, open it in a spreadsheet tool like Google Sheets or MS Excel. Answer these questions:

  1. Is the record count in the ballpark? If you have about 50K products, there should not be only 10K rows in its file.
  2. If it’s a short file, maybe the Location file, you can confirm exactly that all expected identifiers are in it.
  3. Filter by each attribute value and confirm again the count of records with that attribute value makes sense.

2: Check the correctness of values in each attribute field

Someone who knows what the products are and what the groups mean needs to take the time to confirm it is actually right, for all the attributes of all the catalog data.

So, if your Product file contains the attributes as in the example above, you would filter for Status of Active, and check that all resulting products are actually active.  Then filter for Status of Inactive and check that all resulting products are actually inactive.  Then filter for the first Group value and confirm all resulting products are in that group.  Repeat for Group2 and Group3, etc.  Then repeat for every attribute in every file.

It can help to do this validation with a comparison to an already existing and trusted report.  If you have another spreadsheet that shows products by Group for any reason, you can compare the interfacing files to it.  You may need to familiarize yourself with the VLOOKUP function that helps with spreadsheet comparison.
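If the files are large, the same count-and-compare checks can be scripted.  The sketch below uses pandas with hypothetical file and column names: it counts records per attribute value and does a VLOOKUP-style comparison of each product’s group against an existing trusted report.

```python
import pandas as pd

products = pd.read_csv("products_extract.csv", dtype=str)            # interfacing file
trusted = pd.read_excel("products_by_group_report.xlsx", dtype=str)  # existing report

print(len(products))                                   # record count in the ballpark?
print(products["Status"].value_counts(dropna=False))   # Active vs. Inactive counts
print(products["Group1"].value_counts(dropna=False))   # records per group value

# VLOOKUP-style comparison of the group assignment in both sources
check = products.merge(trusted, on="Product Identifier", how="left",
                       suffixes=("_extract", "_report"))
print(check[check["Group1_extract"] != check["Group1_report"]])
```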

Validating Item Attribute Data

1: Check for a reasonable count of item records

The item attribute data confirmation is similar to the catalog data.  Confirm the product/location combination count makes sense in total and for each of the unique item attributes, one by one. This is an example item data file:

(Example item attribute file: one row per product/location combination with its attributes.)

2: Find and explain weird numbers in item file

There tends to be many numerical values in the item attributes, so “weird” numbers merit review.  To validate data for a numerical attribute in any file, search for where the number is:

  • Missing entirely
  • Equal to zero
  • Less than zero
  • More than most others, or less than most others (sort by that column)
  • Not a number at all, when it should be

A special consideration of files that are not catalog files is they may not show the descriptions of the products and locations, just their identifiers, which can be meaningless to you.  You can insert columns to hold the product and location descriptors that you are used to seeing and fill them into the spreadsheet to assist in your work.  The VLOOKUP function works for this as well.  Whether or not you have another report to compare the Items file to, you have the catalog files for Products and Locations, which show both the identifier and the description for each row.
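These checks are also easy to script.  The sketch below (hypothetical file and column names) coerces one numeric attribute and surfaces the missing, zero, negative, and extreme values for review.

```python
import pandas as pd

items = pd.read_csv("items_extract.csv")
lead_time = pd.to_numeric(items["Lead Time Days"], errors="coerce")  # non-numbers -> NaN

print("missing or not a number:", lead_time.isna().sum())
print("equal to zero:", (lead_time == 0).sum())
print("less than zero:", (lead_time < 0).sum())
print(items.assign(_lt=lead_time).nlargest(10, "_lt"))   # much more than most others
print(items.assign(_lt=lead_time).nsmallest(10, "_lt"))  # much less than most others
```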

3: Spot check

If you are frustrated to find that there are too many attribute values to manually check in a reasonable amount of time, spot checking is a solution. It can be done in a manner likely to pick up on any problems.  For each attribute, get a list of the unique values in each column.  You can copy a column into a new sheet, then use the Remove Duplicates function to see the list of possible values.   With it:

  1. Confirm that no attribute values are present that shouldn’t be.
  2. It can be harder to remember which attribute values are missing that should be there, so it can help to look at another source to remind you. For example, if Group1 through Group12 are present, you might check another source to remember if these are all the Groups possible.  Even if it is not required for the interfacing files for the application, it may be easy for IT to extract a list of all the possible Groups that are in your ERP which you can use for the validation exercise.  If you find extra or missing values that you don’t expect, bring an example of each to IT to investigate.
  3. Sort alphabetically and scan down to see if any two values are similar but slightly different, maybe only in punctuation, which could mean one record had the attribute data entered incorrectly.

For each type of item, maybe one from each product group and/or location, check that all its attributes in every file are correct or at least pass a sanity check.  The more you can spot check across a broad range of items, the less likely you will have issues after go-live.

 

Validating Transactional Data

Transactional files may all have a format similar to this:

(Example transactional file: Transaction ID, Line Number, product, location, date, and quantity.)

 

1: Find and explain weird numbers in each transactional file

These should be checked for “weird” numbers in the Quantity field.  Then you can proceed to:

  1. Filter for dates outside the range you expect or missing expected dates entirely.
  2. Find where Transaction identifiers and line numbers are missing. They shouldn’t be.
  3. If there is more than one record for a given Transaction ID and Transaction Line Number combination, is that a mistake? Put another way, should duplicate records have their quantities summed together or is that double counting?
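The same three checks can be scripted against the transactional file; the sketch below assumes hypothetical column names.

```python
import pandas as pd

sales = pd.read_csv("sales_history.csv", parse_dates=["Transaction Date"])

# 1. Dates outside the expected range (adjust the bounds to your history window)
out_of_range = sales[(sales["Transaction Date"] < "2019-01-01") |
                     (sales["Transaction Date"] > "2024-12-31")]

# 2. Missing transaction identifiers or line numbers
missing_keys = sales[sales["Transaction ID"].isna() | sales["Line Number"].isna()]

# 3. Repeated Transaction ID + Line Number combinations: sum or double count?
dupes = sales[sales.duplicated(["Transaction ID", "Line Number"], keep=False)]

print(len(out_of_range), len(missing_keys), len(dupes))
```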

2: Sanity check summed quantities

Do a sanity check by filtering to a particular product you’re familiar with and to a relatable date range, such as last month or last year, and summing the quantities.  Is that total amount what you expected for that product in that time frame?  If you have information on total usage out of a location, you can slice the data that way to sum the quantities and compare to what you expect.  Pivot tables come in handy for verification of transactional data.  With them, you can view the data like:

Product | Year | Quantity Total
Prod1 | 2022 | 9,034
Prod1 | 2021 | 8,837
etc

 

The products’ yearly total may be simple to sanity check if you know the products well.  Or you can use VLOOKUP to add attributes, such as product group, and pivot on that to see a higher level that is more familiar:

Product Group | Year | Quantity Total
Group1 | 2022 | 22,091
Group2 | 2021 | 17,494
etc
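The same pivots can be produced with pandas instead of a spreadsheet; the sketch below assumes hypothetical file and column names, with a merge standing in for the VLOOKUP that adds the product group.

```python
import pandas as pd

sales = pd.read_csv("sales_history.csv", parse_dates=["Transaction Date"])
sales["Year"] = sales["Transaction Date"].dt.year

# Quantity total by product and year
print(pd.pivot_table(sales, index=["Product", "Year"], values="Quantity", aggfunc="sum"))

# Roll up to product group (VLOOKUP-style merge) for a more familiar view
groups = pd.read_csv("products_extract.csv")[["Product", "Product Group"]]
by_group = sales.merge(groups, on="Product", how="left")
print(pd.pivot_table(by_group, index=["Product Group", "Year"],
                     values="Quantity", aggfunc="sum"))
```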

 

3: Sanity check count of records

It may help to display a count of transactions rather than a sum of the quantities, especially for purchase order data.  Such as:

Product | Year | Number of POs
Prod1 | 2022 | 4
Prod1 | 2021 | 1
etc

 

And/or the same summarization at a higher level, like:

Product Group | Year | Number of POs
Group1 | 2022 | 609
Group2 | 2021 | 40
etc

 

4: Spot checking

Spot checking the correctness of a single transaction, for each type of item and each type of transaction, completes due diligence.  Pay special attention to which date is tied to the transaction, and whether it is right for the analysis.  Dates may be a creation date, like the date a customer placed an order with you; a promise date, like the date you expected to deliver on the customer’s order at the time of creating it; or a fulfilment date, when you actually delivered on the order.  Sometimes a promise date gets modified days after creating the order if it can’t be met.  Make sure the date in use most closely reflects actual demand by the customer for the product.

What to do about bad data 

If the mis-entries are few or one-off, you can edit the ERP records by hand as they are found, cleaning up your catalog attributes, even after go-live with the application.  But if large swathes of attributes or transaction quantities are off, this can spur an internal project to re-enter data correctly and possibly to change or start to document the process that needs to be followed when new records are entered into your ERP.

Care must be taken to avoid too long a delay in implementation of the SaaS application while waiting on clean attributes.  Break the work into chunks and use the application to analyze the clean data first so the data cleansing project occurs in parallel with getting value out of the new application.