Measures of Forecast Error in a Supply Chain

As mentioned earlier, every instance of demand has a random component. A good forecasting method should capture the systematic component of demand but not the random component. The random component manifests itself in the form of a forecast error. Forecast errors contain valuable information and must be analyzed carefully for two reasons:

  1. Managers use error analysis to determine whether the current forecasting method is predicting the systematic component of demand accurately. For example, if a forecasting method consistently produces a positive error, the forecasting method is overestimating the systematic component and should be corrected.
  2. All contingency plans must account for forecast error. Consider a mail-order company with two suppliers. The first is in the Far East and has a lead time of two months. The second is local and can fill orders with one week's notice. The local supplier is more expensive than the Far East supplier. The mail-order company wants to contract a certain amount of contingency capacity with the local supplier, to be used if demand exceeds the quantity the Far East supplier provides. The decision regarding the quantity of local capacity to contract is closely linked to the size of the forecast error with a two-month lead time.

As long as observed errors are within historical error estimates, firms can continue to use their current forecasting method. Finding an error that is well beyond historical estimates may indicate that the forecasting method in use is no longer appropriate or that demand has fundamentally changed. If all of a firm's forecasts tend to consistently over- or underestimate demand, this may be another signal that the firm should change its forecasting method.

As defined earlier, the forecast error for Period t is given by $E_t$, where the following holds:

$$E_t = F_t - D_t$$

That is, the error in Period t is the difference between the forecast for Period t and the actual demand in Period t. It is important that a manager estimate the error of a forecast made at least as far in advance as the lead time required for the manager to take whatever action the forecast is to be used for. For example, if a forecast will be used to determine an order size and the supplier’s lead time is six months, a manager should estimate the error for a forecast made six months before demand arises. In a situation with a six-month lead time, there is no point in estimating errors for a forecast made one month in advance.

One measure of forecast error is the mean squared error (MSE), where the following holds (the denominator in Equation 7.21 can also have n – 1 instead of n):

$$\mathrm{MSE}_n = \frac{1}{n}\sum_{t=1}^{n} E_t^2 \tag{7.21}$$

The MSE can be related to the variance of the forecast error. In effect, we estimate that the random component of demand has a mean of 0 and a variance of MSE. The MSE penalizes large errors much more significantly than small errors because all errors are squared. Thus, if we select forecasting methods by minimizing MSE, a method with a forecast error sequence of 10, 12, 9, and 9 will be preferred to a method with an error sequence of 1, 3, 2, and 20. It is therefore a good idea to use the MSE to compare forecasting methods if the cost of a large error is much larger than the gains from very accurate forecasts. Using the MSE as a measure of error is appropriate when forecast error has a distribution that is symmetric about zero.
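To make the comparison concrete, here is a minimal sketch in Python (NumPy assumed; the helper name `mse` is ours, not from the text) that computes the MSE of the two error sequences above:

```python
import numpy as np

def mse(errors):
    """Mean squared error: the average of the squared forecast errors (Eq. 7.21)."""
    errors = np.asarray(errors, dtype=float)
    return float(np.mean(errors ** 2))

method_a = [10, 12, 9, 9]   # consistently moderate errors
method_b = [1, 3, 2, 20]    # mostly accurate, with one large error

print(mse(method_a))  # 101.5 -> preferred under MSE
print(mse(method_b))  # 103.5 -> the single large error dominates
```

Because errors are squared, the one error of 20 outweighs the many small errors in the second sequence, which is exactly the behavior the text describes.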

Define the absolute deviation in Period t, $A_t$, to be the absolute value of the error in Period t; that is,

$$A_t = |E_t|$$

Define the mean absolute deviation (MAD) to be the average of the absolute deviation over all periods, as expressed by

$$\mathrm{MAD}_n = \frac{1}{n}\sum_{t=1}^{n} A_t$$

The MAD can be used to estimate the standard deviation of the random component, assuming that the random component is normally distributed. In this case, the standard deviation of the random component is

$$\sigma = 1.25\,\mathrm{MAD}$$

We then estimate that the mean of the random component is 0 and the standard deviation of the random component of demand is σ. MAD is a better measure of error than MSE if the forecast error does not have a symmetric distribution. Even when the error distribution is symmetric, MAD is an appropriate choice when selecting forecasting methods if the cost of a forecast error is proportional to the size of the error.
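As a small illustration (the error series here is hypothetical, not from the text), the σ = 1.25 MAD estimate can be computed as follows:

```python
import numpy as np

def mad(errors):
    """Mean absolute deviation of the forecast errors."""
    return float(np.mean(np.abs(np.asarray(errors, dtype=float))))

errors = [10, -12, 9, -9]        # hypothetical forecast errors
sigma = 1.25 * mad(errors)       # normality assumption from the text
print(f"MAD = {mad(errors):.2f}, estimated sigma = {sigma:.2f}")
# MAD = 10.00, estimated sigma = 12.50
```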

The mean absolute percentage error (MAPE) is the average absolute error as a percentage of demand and is given by

$$\mathrm{MAPE}_n = \frac{1}{n}\sum_{t=1}^{n} \left|\frac{E_t}{D_t}\right| \times 100$$

The MAPE is a good measure of forecast error when the underlying forecast has significant seasonality and demand varies considerably from one period to the next. Consider a scenario in which two methods are used to make quarterly forecasts for a product with seasonal demand that peaks in the third quarter. Method 1 returns forecast errors of 190, 200, 245, and 180; Method 2 returns forecast errors of 100, 120, 500, and 100 over four quarters. Method 1 has a lower MSE and MAD relative to Method 2 and would be preferred if either criterion were used. If demand is highly seasonal, however, and averages 1,000, 1,200, 4,800, and 1,100 in the four periods, Method 2 results in a MAPE of 9.9 percent, whereas Method 1 results in a much higher MAPE, 14.3 percent. In this instance, it can be argued that Method 2 should be preferred to Method 1.
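The example's numbers can be verified with a short sketch (Python with NumPy assumed; the helper names are ours):

```python
import numpy as np

def mse(e):
    return float(np.mean(np.square(e)))

def mad(e):
    return float(np.mean(np.abs(e)))

def mape(e, d):
    return float(np.mean(np.abs(e / d)) * 100)

demand   = np.array([1000, 1200, 4800, 1100], dtype=float)
method_1 = np.array([190, 200, 245, 180], dtype=float)
method_2 = np.array([100, 120, 500, 100], dtype=float)

for name, e in [("Method 1", method_1), ("Method 2", method_2)]:
    print(f"{name}: MSE={mse(e):,.1f}  MAD={mad(e):.1f}  "
          f"MAPE={mape(e, demand):.1f}%")
# Method 1: MSE=42,131.2  MAD=203.8  MAPE=14.3%
# Method 2: MSE=71,100.0  MAD=205.0  MAPE=9.9%
```

Scaling each error by the demand in its period is what reverses the ranking: Method 2's large error of 500 falls in the peak quarter, where demand is 4,800.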

When a forecast method stops reflecting the underlying demand pattern (for instance, if demand drops considerably, as it did for the automotive industry in 2008–2009), the forecast errors are unlikely to be randomly distributed around 0. In general, one needs a method to track and control the forecasting method. One approach is to use the sum of forecast errors to evaluate the bias, where the following holds:

$$\mathrm{bias}_n = \sum_{t=1}^{n} E_t$$

The bias will fluctuate around 0 if the error is truly random and not biased one way or the other. Ideally, if we plot all the errors, the slope of the best straight line passing through them should be 0.

The tracking signal (TS) is the ratio of the bias and the MAD and is given as

$$\mathrm{TS}_t = \frac{\mathrm{bias}_t}{\mathrm{MAD}_t}$$

If the TS at any period is outside the range ±6, this is a signal that the forecast is biased and is either underforecasting (TS < −6) or overforecasting (TS > +6). This may happen because the forecasting method is flawed or the underlying demand pattern has shifted. One instance in which a large negative TS will result occurs when demand has a growth trend and the manager is using a forecasting method such as the moving average. Because the trend is not included, the average of historical demand is always lower than future demand. The negative TS detects that the forecasting method consistently underestimates demand and alerts the manager.
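The following sketch illustrates this moving-average scenario with hypothetical demand and forecast series (the `tracking_signals` helper is an illustrative name, not from the source):

```python
import numpy as np

def tracking_signals(forecasts, demands):
    """Return the running tracking signal TS_t = bias_t / MAD_t."""
    errors = np.asarray(forecasts, dtype=float) - np.asarray(demands, dtype=float)
    bias = np.cumsum(errors)                               # running sum of errors
    mad = np.cumsum(np.abs(errors)) / np.arange(1, len(errors) + 1)
    return bias / mad

# Lagging forecasts against a demand growth trend: errors stay negative,
# so the tracking signal drifts steadily downward.
demand   = [100, 110, 120, 130, 140, 150, 160, 170]
forecast = [102, 100, 105, 115, 125, 135, 145, 155]

ts = tracking_signals(forecast, demand)
print(np.round(ts, 2))
# [ 1.   -1.33 -2.56 -3.62 -4.65 -5.67 -6.68 -7.69]
flagged = np.abs(ts) > 6
print("periods outside +/-6:", np.nonzero(flagged)[0] + 1)  # periods 7 and 8
```

Once the signal crosses −6 in period 7, the manager is alerted that the method is consistently underforecasting.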

The tracking signal may also get large when demand has suddenly dropped (as it did for many industries in 2009) or increased by a significant amount, making historical data less relevant. If demand has suddenly dropped, it makes sense to increase the weight on current data relative to older data when making forecasts. McClain (1981) recommends the "declining alpha" method for exponential smoothing, in which the smoothing constant starts large (to give greater weight to recent data) but then decreases over time. If we are aiming for a long-term smoothing constant of α = 1 − ρ, a declining alpha approach would be to start with α₀ = 1 and reset the smoothing constant as follows:

$$\alpha_t = \frac{\alpha_{t-1}}{\rho + \alpha_{t-1}}$$

In the long term, the smoothing constant converges to α = 1 − ρ, with the forecasts becoming more stable over time.
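A brief sketch of this recursion (assuming the update reconstructed above; note that its fixed point satisfies α = α/(ρ + α), i.e., α = 1 − ρ):

```python
def declining_alphas(rho, n_periods):
    """Yield alpha_1 .. alpha_n from the declining-alpha recursion."""
    alpha = 1.0                        # alpha_0 = 1: weight recent data fully at first
    for _ in range(n_periods):
        alpha = alpha / (rho + alpha)  # declining-alpha update
        yield alpha

# Aiming for a long-term smoothing constant of alpha = 1 - rho = 0.1:
alphas = list(declining_alphas(rho=0.9, n_periods=100))
print([round(a, 3) for a in alphas[:5]])  # [0.526, 0.369, 0.291, 0.244, 0.213]
print(round(alphas[-1], 3))               # 0.1, the long-term smoothing constant
```

With ρ = 1, the recursion gives α_t = 1/(t + 1), which reduces exponential smoothing to a simple average; with ρ < 1, the weights decline toward the long-term constant instead.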

Source: Chopra, Sunil, and Peter Meindl (2014), Supply Chain Management: Strategy, Planning, and Operation, 6th ed., Pearson.
