Exploring Defect Containment Metrics in Agile

While Design for Six Sigma (DFSS) and Agile software development seem to have different orientations, there is more linkage than meets the eye.

On one hand, DFSS and Agile appear to be at odds:

  • DFSS talks about “stages” and tollgates, which Agile eschews in favor of Lean “single piece” or small batch flow.
  • DFSS talks about “understanding requirements up front,” while Agile says “learn to live with…welcome even, changing requirements.”
  • DFSS puts structure around tools, roles and support infrastructure, while Agile may talk about teams reflecting on how to become more effective and changing their behavior accordingly.

But on the other hand, they share important goals:

  • Placing a high priority on understanding customer value and delivering it efficiently.
  • Seeing and removing wasted time, effort and resources.
  • Using facts and data for feedback to control and improve product and process results.

So which is it? As usual, things are not as simple as they may appear. As these alignments and contrasts play out, a strong net positive emerges in the sharing of thinking and tools between these two bodies of experience. It is worthwhile, in particular, to focus on a part of this puzzle that is the subject of some current discussion – defect containment metrics. It is easy to look at measures like phase containment effectiveness (PCE), trigger on the word “phase,” and conclude that the whole notion has no meaning in Agile development. But looking more closely at the word “containment,” one can see there may be something worth retaining in any development model.

Efficiencies of Small Scale

When U.S. companies were using big batch “efficiencies of scale” (with attendant scrap and waste written off or ignored), companies like Toyota realized the only way they could compete was by playing a different game. Where GM saw inventory as an asset, Toyota saw it as a liability and worked to minimize and streamline the flow of work in process (WIP). In software, a welcome outcome of reduced and well-managed WIP should be a more focused and sane development staff. There is little disagreement that the more things anyone tries to work on at once, the more time is wasted switching between tasks, the more defects are created, and the fewer defects are removed from the work. Reduced WIP, matched to the productivity of teams in each activity, offers a virtuous cycle that can be exploited – better throughput with fewer defects.

Figure 1: Single Piece Flow – A Simplified View in Software Context


Single Piece Flow

An important part of the Lean view of a process is to trace a “single piece” (smallest meaningful unit of work) through the workflow. A waterfall process would, in the extreme case, batch all requirements work, then upon completion batch all design work, etc. Lean finds efficiencies in managing smaller work elements, letting the downstream pull (customer need and downstream activity readiness) dictate the flow. Managing a Lean system is less about tollgates for batch review and more about managing the contents of incremental work-product queues and the resources available to work on them.
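The pull dynamic described above can be sketched as a toy simulation: each downstream activity pulls one item per pass only when its own WIP is below a limit, so queue capacity, not a batch tollgate, dictates what moves. The stage names and WIP limit below are illustrative, not from the article.

```python
from collections import deque

WIP_LIMIT = 2  # illustrative per-activity work-in-process cap
stages = ["requirements", "design", "code", "test", "done"]
queues = {s: deque() for s in stages}
queues["requirements"].extend(f"story-{i}" for i in range(1, 6))

def pull_step():
    """One pass: each downstream stage pulls one item if it has capacity.

    Iterating downstream-first means an item advances at most one stage
    per pass, so flow is governed by downstream readiness, not by pushing.
    """
    for upstream, downstream in reversed(list(zip(stages, stages[1:]))):
        has_capacity = downstream == "done" or len(queues[downstream]) < WIP_LIMIT
        if queues[upstream] and has_capacity:
            queues[downstream].append(queues[upstream].popleft())

for _ in range(20):
    pull_step()
print(len(queues["done"]))  # all five stories eventually flow through
```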

Feedback Crucial at Any Scale

Agile does not mean hasty iteration on ad hoc prototypes, but rather a smart efficiency gained by doing focused, high-quality work producing lasting elements of the delivered system. Such a process makes appropriate use of feedback during each activity. A part of each activity is appropriate learning (the L-blocks in Figure 1), assessing things like the value, doneness, goodness and outstanding risk connected with the work-product at hand. That learning provides crucial feedback to that activity and others.

When mistakes, errors or defects are surfaced (Table 1), it is tempting to fix them and move on. Experience in any industry shows, though, that a little extra effort spent understanding these things about key defects can pay big returns in the successful control and improvement of the development and delivery process:

  • What was the nature or type of defect?
  • Where and how was it found (what kind of activity and review)?
  • Where (during what kind of activity) and why was it inserted?

Note that mistakes are okay – part of the natural learning that happens when work is being done. Errors are not all bad. They are the weakness in an activity’s work that a team can find and remove before handing a hidden problem downstream. Defects waste time and cost money. It is reasonable to want to know enough about where, when and why they are happening to be able to remove them and their point-of-origin causes.
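The three questions above suggest a minimal shape for an activity-tagged defect record; the field names below are illustrative, not from any particular tracking tool.

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One defect, tagged with the three facts worth capturing."""
    defect_type: str      # nature of the defect (e.g. "omission", "logic")
    found_activity: str   # where and how it was found (activity and review kind)
    origin_activity: str  # during what kind of activity it was inserted
    origin_cause: str     # why it was inserted (root cause note)

record = DefectRecord(
    defect_type="omission",
    found_activity="code review",
    origin_activity="design",
    origin_cause="interface behavior left unspecified",
)
```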

Back to the Word ‘Containment’

Thus one comes back to the original point – the word “containment.” Lean activities are not “phases,” but they characterize the nature of the work well enough to be useful tags for root cause analysis. If the notion of phase containment is thrown away completely, it is only slightly helpful to note that many of the defects in the third iteration originated in the first iteration. If the idea is retained as “activity containment,” one could know that 30 percent of the defects that show up and slow down coding work are omissions connected with design activity. That provides much more information to shed light on the cause (and to eliminate the problem).
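Once defects carry find-point and origin tags, this kind of “activity containment” insight falls out of a simple aggregation. The log below is hypothetical, sized so that design-origin defects make up 30 percent of those found during coding.

```python
from collections import Counter

# Hypothetical activity-tagged defect log: (found_activity, origin_activity).
defect_log = [
    ("coding", "design"), ("coding", "design"), ("coding", "design"),
    ("coding", "requirements"), ("coding", "requirements"),
    ("coding", "coding"), ("coding", "coding"), ("coding", "coding"),
    ("coding", "coding"), ("coding", "coding"),
]

# Of the defects that surface during coding, what share originated in design?
found_in_coding = [origin for found, origin in defect_log if found == "coding"]
design_share = Counter(found_in_coding)["design"] / len(found_in_coding)
print(f"{design_share:.0%} of defects found in coding originated in design")
```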

This discussion restates the old idea that a good process produces, in addition to its intended output, useful information about itself. Activity-tagged defect find-point and origin sourcing data carries diagnostic information that can be useful at any scale, in any kind of flow.

Table 1: Evolution of Common Defect-related Terms

Term | Definition | Notes
Mistake | Something missing, wrong, unclear or extra that could have become a defect – found during an activity, before the work product is declared completed | Not tracked
Error | A mistake left in a completed work product, but found before being handed off to another activity | Tracked: Important in computing activity containment metrics
Defect | A mistake left in a completed work product which was handed off to another activity | Tracked: Important in computing activity containment metrics
Released Defect | A defect released to internal or external customers | Tracked: Important in computing total containment effectiveness

Common Containment Metrics

The common containment metrics are defined as follows, where Errors are potential defects found during the phase (activity) that created them, and Defects are errors that escape to a subsequent development or delivery phase.

Total Containment Effectiveness (TCE):

TCE = [Defects(pre-release) + Errors] / [Defects(pre-release) + Defects(released) + Errors] × 100%

Phase Containment Effectiveness (PCE):

PCE(phase) = Errors(phase) / [Errors(phase) + Defects(phase)] × 100%

Defect Containment Effectiveness (DCE):

DCE(phase) = Defects(phase) / [Defects(phase) + Defects(downstream phases)] × 100%
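
The three formulas transcribe directly into code, using the article’s definitions (errors are caught in the activity that created them; defects escape to a later activity). The sample counts in the usage line are illustrative.

```python
def tce(errors, defects_pre_release, defects_released):
    """Total Containment Effectiveness, as a percentage."""
    caught = defects_pre_release + errors
    total = defects_pre_release + defects_released + errors
    return 100.0 * caught / total

def pce(errors_phase, defects_phase):
    """Phase (activity) Containment Effectiveness for one activity."""
    return 100.0 * errors_phase / (errors_phase + defects_phase)

def dce(defects_found_in_phase, defects_found_downstream):
    """Defect Containment Effectiveness for one activity."""
    return 100.0 * defects_found_in_phase / (
        defects_found_in_phase + defects_found_downstream)

# Illustrative counts, not from the article's figures:
print(pce(errors_phase=80, defects_phase=20))  # 80.0
```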

An Agile Activity-containment Business Case

So far, this exploration gives reason to advocate the spirit of phase containment, with the definitions and sense mapped to Agile activity flow as a useful feedback and improvement mechanism. The business impacts of improving “activity containment” can be played out a number of ways.

Figure 2 is an “industry reasonable” defect containment scorecard for a waterfall development process. It cascades defects inserted, according to their points of origin and find-points. Containment metrics and defect find and fix times are used to roll up a business view of the “defect repair cost” as a percent of project cost. The 40 percent number is not rare or inflated.

Figure 2: A Waterfall Process


Figure 3: Agile Prospective Improvements in Defect Insertion, Containment and Fix Rate


Figure 3 quantifies some impacts of prospective Agile development process improvements. Based on less WIP, improvements in clarity of thinking and communication could result in:

  • Inserted defects reduced by 10 percent (1,000 to 900).
  • Activities improving visibility enough to “contain” one in three of the defects now escaping.
  • Defect find and fix times reduced by about 20 percent (for pre-release defects).

The business impact is that the defect repair cost (a key cost of poor quality) would be reduced to something like 17 percent of project cost, with the absolute costs significantly reduced.
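As a rough cross-check, the three improvements can be multiplied out as a naive proportional estimate. All inputs below are assumptions; the article’s 17 percent figure comes from the fuller find-point and fix-cost cascade in Figure 3, not from this simple product, so the two should land near but not exactly on each other.

```python
# Naive proportional sketch of the improvement scenario (all inputs assumed).
baseline_repair_share = 0.40   # defect repair cost as a share of project cost

insertion_factor = 0.90        # inserted defects reduced by 10 percent
escape_factor = 2.0 / 3.0      # one in three escaping defects now contained
fix_time_factor = 0.80         # pre-release find-and-fix times down ~20 percent

new_share = (baseline_repair_share * insertion_factor
             * escape_factor * fix_time_factor)
print(f"estimated repair cost: {new_share:.0%} of project cost")  # 19%
```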

Conclusion: Value in Agile and DFSS

Agile thinking brings a lot of potential value – worth understanding and quantifying. DFSS, while colored with the language of waterfalls and phases, brings important value that should not be lost because of the labels. “Containment” as a cost reduction and causal-tagging idea has some merits. In the end, “knowing where you are” and driving improvement in “containment metrics” to track financial impacts may very well support the Agile business case.


Comments


Joe Smith

I am working on Agile metrics for my organization, and over time my thinking has become very close to yours. I have hypothesized that fix costs in Agile development are much lower than the fix costs in “big design up front” development, but I don’t have the data to back it up. Did you use real data to populate the Post-Release Loaded Defect Fix Costs in Figure 3?

This thinking also leads me to believe that while tracking a defect containment metric may be valuable for a team doing Agile development, it may not be as valuable as a metric that focuses on improvement of product and process results over time, such as David Anderson’s Initial Quality metric.

