Financial assessment of process improvement activities is the cornerstone of project selection and benefit evaluation. Representatives from finance organizations charged with such a task may be confronted with assessing a large number of projects from many different areas within their company. For a detailed benefit assessment, the mechanics of the process in question and the needs of the customer often must be well understood. Finance may simply lack the time to make the necessary assumptions and estimations, or to coach the project team in doing so.
One possible way around this problem is to calculate the weighted risk of potential failures to estimate the cost of poor quality. This method extends the failure mode and effects analysis (FMEA) to financial assessments. For consistency checks, a complementary method would be helpful. In addition, in some cases an extended FMEA is not easily available. One evaluation method that can serve as an alternative to the extended FMEA is Taguchi’s Loss Function.
From a finance representative’s point of view, a financial evaluation method should be easy to communicate and to delegate to project teams. The method is acceptable if it delivers plausible estimations. Even relative statements are valuable for decision making and project assessment. For instance, a statement might be: “Whatever the cost of poor quality of the process, we have cut it by 75 percent.” Taguchi’s Loss Function meets these criteria.
Understanding Taguchi’s Loss Function
Genichi Taguchi established a loss function to measure the financial impact of a process deviation from target. On-target processes incur the least overall loss. Any deviation from this minimum increases the loss in a quadratic manner (at least for small deviations).
The underlying approach can also be used for other types of loss functions. Taguchi’s concept contrasts with the “traditional” understanding of cost of poor quality (Figure 1). The latter states that any value within the specification window incurs the same loss. As Thomas Pyzdek argues in The Six Sigma Handbook, Revised and Expanded Edition (McGraw-Hill, 2003), this way of thinking destroys the concept of continuous improvement.
Whatever the loss function, the total cost incurred is the product of the cost of a given deviation and the likelihood of such a deviation, this summed up over all possible deviations. In other words: the total cost is the area under the product of the probability density function times the loss function.
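For a discretized process, the statement above reduces to a probability-weighted sum. A minimal sketch in Python, using entirely hypothetical deviations, probabilities, and cost coefficient:

```python
# Total cost = sum over all deviations of P(deviation) * loss(deviation).
# All numbers here are hypothetical: k is a made-up cost coefficient,
# T is the process target.
k, T = 5.0, 0.0                          # loss(x) = k * (x - T)**2

deviations = [-2, -1, 0, 1, 2]           # possible process outcomes
probs      = [0.1, 0.2, 0.4, 0.2, 0.1]   # their probabilities (sum to 1)

total_cost = sum(p * k * (x - T) ** 2 for x, p in zip(deviations, probs))
print(total_cost)  # 6.0 for these made-up numbers
```

Note how the on-target outcome (x = 0) contributes nothing, while the probability-weighted tails drive the total cost.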
Taking a Closer Look
With that understanding, a quadratic loss function and a Gaussian probability density function (PDF) can be modeled in an Excel spreadsheet (Figure 2).
For such a situation, the loss also can be computed numerically.
Here, pdf(x) is the probability density function and t(x) is the Taguchi loss function; the total loss is the integral of pdf(x) · t(x) over all possible values of x. For the numerical integration, the variable y is substituted for x.
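The same integral can be checked outside of Excel. The sketch below uses hypothetical values for the process mean, spread, target, and cost coefficient; for a quadratic loss the integral also has the closed form k·(σ² + (μ − T)²), which the numerical result should match:

```python
import math

# Hypothetical parameters: process mean mu, spread sigma, target T,
# and cost coefficient k.
mu, sigma, T, k = 10.5, 1.0, 10.0, 2.0

def pdf(x):
    # Gaussian probability density
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def t(x):
    # Quadratic (Taguchi) loss
    return k * (x - T) ** 2

# Trapezoidal integration of pdf(x) * t(x) over mu +/- 8 sigma
n, lo, hi = 20000, mu - 8 * sigma, mu + 8 * sigma
h = (hi - lo) / n
ys = [pdf(lo + i * h) * t(lo + i * h) for i in range(n + 1)]
loss = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# For a quadratic loss, the closed form is k * (sigma**2 + (mu - T)**2)
print(loss, k * (sigma ** 2 + (mu - T) ** 2))  # both ~2.5
```

The agreement between the numerical and closed-form results is a useful consistency check before trusting the spreadsheet with non-quadratic loss functions, for which no closed form may exist.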
Case Application: Improving Delivery Time
A Lean Six Sigma project is focused on reducing the delivery time of parts. Voice of the customer (VOC) analysis showed that:
- Shipments lasting longer than six days were found unacceptable and would, if they occurred consistently, lead to loss of business.
- Shipments lasting four days were marginally acceptable but put high strain on the business.
- Shipments of two days were clear customer delighters; shorter shipment times could not be translated into an additional advantage to customers.
The team used the cumulative probability of the normal distribution to model the resulting Taguchi loss function. The following assumptions were made:
- 97.7 percent of the maximum cost is reached at a delivery time of six days
- 50 percent of the maximum cost is reached after four days
- 2.3 percent of the maximum cost is reached after two days
Therefore, the underlying normal distribution is centered at four days, with a spread (standard deviation) of one day.
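These three assumptions can be verified directly: expressing the loss as a fraction of maximum cost via the normal cumulative distribution centered at four days with a spread of one day reproduces the stated percentages at six, four, and two days. A sketch (not the team's actual spreadsheet):

```python
import math

def norm_cdf(z):
    # Cumulative distribution of the standard normal, via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def loss_fraction(days):
    # Loss as a fraction of maximum cost: normal CDF centered at 4 days, spread 1 day
    return norm_cdf((days - 4.0) / 1.0)

print(round(loss_fraction(6), 3),  # ~0.977
      loss_fraction(4),            # 0.5
      round(loss_fraction(2), 3))  # ~0.023
```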
The probability density functions for the delivery lead times before and after the project were determined from a sample of data and found to follow lognormal distributions, with location = 1.62 and scale = 0.15 before the project, and location = 1.08 and scale = 0.22 after the project (Figure 3).
From the Excel chart in which these calculations were made, practitioners found that the loss incurred after the project was 25 percent of the loss incurred before the project; this is the ratio of the two hatched areas in the two graphs in Figure 3. If the cost of poor quality of the process before the project was known (from a weighted risk analysis of potential failures, for example), the savings from the project could also be determined in absolute terms. Without this knowledge, the statement is: “Whatever the cost incurred by the process before, it has been reduced by 75 percent through the project.”
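Under the stated assumptions (lognormal lead-time densities with the given location and scale parameters, and the normal-CDF loss fraction centered at four days with a spread of one day), the before/after ratio can be reproduced numerically. This sketch integrates pdf × loss for both distributions:

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def lognorm_pdf(x, loc, scale):
    # loc/scale = mean/standard deviation of ln(x), as quoted for Figure 3
    return (math.exp(-((math.log(x) - loc) ** 2) / (2 * scale ** 2))
            / (x * scale * math.sqrt(2 * math.pi)))

def expected_loss(loc, scale):
    # Trapezoidal integration of pdf(x) * loss_fraction(x) over 0 < x <= 20 days,
    # with the loss fraction = normal CDF centered at 4 days, spread 1 day
    n, lo, hi = 40000, 0.01, 20.0
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * lognorm_pdf(x, loc, scale) * norm_cdf(x - 4.0)
    return total * h

before = expected_loss(1.62, 0.15)
after = expected_loss(1.08, 0.22)
print(after / before)  # roughly 0.25, matching the article's figure
```

The losses here are fractions of the (unknown) maximum cost, which is why only the ratio carries meaning, consistent with the relative statement quoted above.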
Are the charts in Fig. 3 labeled properly?
Well-written article. Good application of TLF.
I will add that the project cost and effort to reduce defects and reduce variation (increase process capability or process “sigma”) follow an inverse TLF: cost/effort is highest where capability is highest and lowest in the tails. The cost of poor quality drops as variation is reduced, while the effort or cost of reducing it increases quadratically.
The intersection of the loss function and the effort/cost curve gives the point of diminishing returns.
I have a challenge understanding how any accountant will put a $ value from a P&L improvement purely on variation reduction with no shift in mean or median results. If the mean or median result shifts, then I have reached agreement with financial controllers on improvement for that change.
If the reply is that variation causes problems further downstream in the process, then that downstream measure must be tracked as a secondary metric and must show a shift in mean/median.
Michael,
Is it possible to get a copy of your Excel spreadsheet?
Thanks,
Ric