For too many companies, a critical piece of data fact-finding, the measurement of demand uncertainty, is handled by simple but inaccurate rules of thumb.  For example, demand planners often compute safety stock as a user-defined multiple of the forecast or historical average.  Or they may configure their ERP system to reorder when on-hand inventory falls to 2x the average demand over the lead time for important items and 1.5x for less important ones.  This is a huge mistake with costly consequences.
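To make the rule concrete, here is a minimal sketch of that kind of reorder logic.  The numbers, the function name, and the multiples are hypothetical, chosen only to show how simple, and how arbitrary, the rule is:

    # A minimal sketch of the rule-of-thumb reorder logic described above.
    # All numbers and names are hypothetical, for illustration only.

    def rule_of_thumb_reorder_point(avg_daily_demand: float,
                                    lead_time_days: float,
                                    multiple: float) -> float:
        # Reorder when on-hand inventory falls to `multiple` times the
        # average demand expected over the lead time.
        return multiple * avg_daily_demand * lead_time_days

    # "Important" item: reorder at 2x average lead-time demand.
    print(rule_of_thumb_reorder_point(10.0, 7.0, 2.0))   # 140.0
    # "Less important" item: reorder at 1.5x.
    print(rule_of_thumb_reorder_point(10.0, 7.0, 1.5))   # 105.0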

The choice of multiple ends up being a guessing game, because no human being can compute exactly how much inventory to stock considering all the uncertainties.  Multiples of average lead-time demand are simple to use, but you can never know whether the multiple is too large or too small until it is too late.  And by then all the information has changed, so you must guess again and wait to see how the latest guess turns out.  With each new day come new demand, new details on lead times, and possibly new costs.  Yesterday’s guess, no matter how educated, is no longer relevant today.  Proper inventory planning should be free of inventory and forecast guesswork.  Decisions must be made with incomplete information, but guessing is not the way to go.
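To see why, consider a quick back-of-the-envelope calculation.  If lead-time demand is modeled as normally distributed (an assumption made here purely for illustration, with hypothetical numbers), the very same 2x multiple implies wildly different stockout risks depending on how volatile demand is:

    # Why a fixed multiple is a guess: under a normal model of lead-time
    # demand (an illustrative assumption), the same 2x reorder point implies
    # very different service levels depending on demand variability.
    from statistics import NormalDist

    mean_lt_demand = 140.0                 # hypothetical average lead-time demand
    reorder_point = 2.0 * mean_lt_demand   # the "2x" rule

    for cv in (0.3, 0.6, 1.2):             # coefficient of variation of demand
        sigma = cv * mean_lt_demand
        service = NormalDist(mean_lt_demand, sigma).cdf(reorder_point)
        print(f"CV = {cv:.1f}: implied cycle service level = {service:.1%}")

    # Prints roughly 100.0%, 95.2%, and 79.8%: the identical multiple buys
    # near-perfect service for stable items and frequent stockouts for
    # volatile ones, and the planner has no way to tell which is which.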

Knowing how much to buffer requires a fact-based statistical analysis that can accurately answer questions such as the following (a short sketch after the list shows how such answers are computed):

  • How much extra stock is needed to improve service levels by 5%?
  • How much will on-time delivery suffer if inventory is reduced by 5%?
  • What service level target is most profitable?
  • How will stockout risk be affected by the random lead times we face?
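Here is the kind of computation such an analysis performs, sketched with hypothetical numbers and the same illustrative normal model of lead-time demand used above:

    # A sketch of the fact-based alternative: given a probability model of
    # lead-time demand, the first two questions above have computable
    # answers.  The numbers and the normal model are assumptions only.
    from statistics import NormalDist

    demand = NormalDist(mu=140.0, sigma=84.0)   # hypothetical fitted model

    # Extra stock needed to raise the cycle service level from 90% to 95%:
    extra = demand.inv_cdf(0.95) - demand.inv_cdf(0.90)
    print(f"Extra stock for +5 points of service: {extra:.0f} units")

    # Hit to service if the 90%-service stocking level is cut by 5%:
    cut = 0.95 * demand.inv_cdf(0.90)
    print(f"Service level after a 5% inventory cut: {demand.cdf(cut):.1%}")

In practice, the distribution would be estimated from actual demand and lead-time history rather than assumed, but the point stands: these are answerable questions, not judgment calls.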

Intuition can’t answer these questions, doesn’t scale across thousands of parts, and is often wrong.  Data, probability math, and modern software are far more effective.  Winging it is not the path to sustained excellence.