One of the initial challenges of enhancing IT quality metrics is to manage the mountain of data produced at a typical organization. Those organizations that have not learned how to mine their existing information to find solutions tend to track their data via spreadsheets and then generate high-level reports for executive management. Often, major gaps exist in these reports because, typically, no one is asking the right questions. The purpose of analyzing data and metrics should be to identify weaknesses in a function with the intention of improving its process.

First, however, the desired outcomes and goals for the organization must be defined. Without clearly stated definitions, key performance indicators (KPIs) cannot be set, and no progress can be made.

In 2008, the ING Direct IT development team – led by Jeff Chapman, head of technology, and Todd DeCapua, Quality Assurance (QA) and Process Group leader – chose to enhance the organization’s IT quality metrics using the Six Sigma methodology. Since the effort was implemented, the team has experienced a 25 percent yield increase from continuous improvements in 2009; a 25 percent increase in production quality from 2008 to 2009; and a 300 percent increase in throughput (linear) from 2007 to 2009.

Defining the Lay of the Land

The ING Direct IT QA and Process Group was tasked with doubling throughput from the previous year while maintaining quality and existing staffing levels. Parallel to the kickoff of the quality metrics enhancement project, several other improvements were planned within the organization’s IT development framework, along with a maturing eight-month Agile Scrum transformation. At the time, the processes and systems were all in a state of flux, and data capture and data sources were in various stages of maturity. This environment underscored the need to focus on clear performance targets and business requirements.

From the project start, the QA team communicated its challenges to the business. QA required strong supporting data, which was not fully transparent or readily available. The method in place for tracking testing progress was to count completed test cases and defects. This was deemed insufficient for determining which factors needed adjustment to improve performance, or for identifying other potential pitfalls earlier in the process. Additionally, quality metrics for a product release did not exist; defects that impacted customers after a release were the only indicators. Lastly, the cost impact of low-quality deliveries and the associated defect trends was unclear.

The first step was to leverage existing industry research and engage the QA and Process Group leader to develop the vision for the long-term end use of these metrics. Once the vision of predictive analytic metrics was determined, the question of how to achieve it remained. Because throughput and outputs were already being tracked, the initiative goals focused on the implied and expected state of those outputs – i.e., their quality.

To begin the second step of the project, the Black Belt polled business leaders for their understanding of the definition and scope of “release quality.” By the end of the Define phase, the internal customers reached consensus. The top factors that affected their perception of release quality, the customers said, included:
• Code integrity
• Customer and operation impact of defects
• Date of delivery
• Quality of communication
• System’s ability to meet service levels

These customer benchmarks served as the initial set of KPIs and the voice of the customer (VOC).

It was clear to the Six Sigma team that this project required a multi-generational approach to correspond with the implementation of the IT development framework. The immediate project, known as Generation 1, focused on developing baseline metrics. The tools in the framework – Quality Center, VersionOne and AnthillPro – would provide the infrastructure for the collection and tracking of release quality throughout the development cycle. A strategic dashboard vision was developed and promoted into a separate IT project.

Various stakeholders were teamed together to rewrite the definition of “defect priority and severity” as well as the definition of “quality.” Previous definitions were no longer relevant and needed to be revised to prevent inconsistent interpretations and application.

A list of “not-so-quick” wins was compiled, including the setting of quality targets for product releases. At this point, it became apparent that teams and individual contributors were still evolving in their use of the tools, and that their grasp of the Agile methodology was still developing. Consequently, the team decided to delay setting quality targets for three months until the practice matured.

Alongside these efforts, the team clarified the Six Sigma equation Y = f(x), where:

Y = Release quality
x = Factors defined by customers and stakeholders

Release quality would be assigned a sigma value. Coding and environmental defects, as well as feature and function defects, were defined as x’s. Staffing and process issues that were found to cause coding defects during the course of the project phases were set aside as a separate metric.

The count of test cases for a product release was used to define opportunities in the yield calculation. The team also counted regression test cases, both manual and scripted, as well as production defects discovered after the release; cancelled defects were not counted. The choice of test cases as the opportunity measure required much deliberation because the team did not feel that lines of code, function points or story points were consistent gauges. Test cases aligned more closely with defects and mapped directly to the features and functions being delivered. For all intents and purposes, the test cases represented all the specific features and conditions of the code.
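
As a rough sketch of this counting rule, the Python snippet below treats executed functional and regression test cases as opportunities, excludes cancelled defects, and computes a simple first-pass yield for a release. The function names, field names and counts are illustrative assumptions, not the team's actual tooling or data.

    # Sketch of a release yield calculation: executed test cases (functional plus
    # regression) are the opportunities, non-cancelled defects are the failures.
    # Names and counts are hypothetical, not ING Direct's tooling.

    def release_yield(functional_cases, regression_cases, defects):
        """Return first-pass yield for a release.

        defects: iterable of defect records (dicts with a 'status' key).
        Cancelled defects are excluded, mirroring the counting rule above.
        """
        opportunities = functional_cases + regression_cases
        counted = sum(1 for d in defects if d["status"] != "Cancelled")
        return (opportunities - counted) / opportunities

    # Hypothetical release: 400 functional + 250 regression cases, 39 raised defects
    defects = [{"status": "Open"}] * 31 + [{"status": "Cancelled"}] * 8
    print(f"Release yield: {release_yield(400, 250, defects):.1%}")  # -> 95.2%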

Baselining the Past

During the Measure phase, the team focused on inconsistencies in the counting and tracking of test cases within Quality Center. The team recognized that quality ratings would depend on the completeness, comprehensiveness and consistency of the test cases executed. Orthogonal arrays were identified for further design of experiments work, and the team made a conscious decision to keep this out of scope.

Defect stage containment metrics were measured but were adjusted to the Agile Scrum environment. Only two phases were identified: 1) the state of the code when it was delivered to QA (QA release quality), and 2) the state of the code after it was delivered to production (triage or post-production release quality). In practice, the post-production measure of quality was viewed as the more significant metric for determining data trends.
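
As a simple illustration of this two-phase view, containment can be expressed as the share of all known release defects that QA caught before production. The sketch below uses hypothetical names and counts; it is not the team's actual report.

    # Sketch of a two-phase containment measure: defects caught in QA versus
    # defects that escaped to production (triage). Counts are hypothetical.

    def qa_containment(qa_defects, post_production_defects):
        """Fraction of all known release defects contained by QA before production."""
        total = qa_defects + post_production_defects
        return qa_defects / total if total else 1.0

    print(f"QA containment: {qa_containment(45, 5):.0%}")  # -> 90%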

By the time the team reached the Measure phase, they had identified inconsistencies in the defect fix time. However, the Scrum Masters believed that the four-week sprint cycle created a three-week maximum boundary for fix time. Additionally, the determination that 63 percent of coding defects were severe in nature supported the implementation of new definitions for degrees of severity.

Another interesting finding revealed that almost 20 percent of the defects were cancelled defects. The work dedicated to these items represented a significant time commitment for both the QA and development teams. Many of the cancelled defects were found to be duplicates, which spurred the QA team to apply additional controls to how features were tested and defects reported. This information was also used to identify which QA analysts required more support and mentoring to improve their understanding of system functionality.

The Root of It All

After the root cause analysis (RCA), the team discovered that post-production defects often were not associated with a release or project. User acceptance testing (UAT) was engaged very late in the sprint cycle, which caused delays and deferrals in the release of features. Furthermore, post-production defects were listed in five different systems and were not linked back to the original project or release.

Sigma metrics and test-case tracking methods were implemented by the sprint teams in the form of a scorecard. In addition to tracking their velocity and story status (not in the scope of this project), teams began tracking both team and release quality throughout the development and release cycle. The final sigma value for the release was set to the quality measure after triage (therefore, defects = total defects from development + post-production defects, relative to the total count of test cases).
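
Read as a formula, the parenthetical above amounts to defects per opportunity, which can then be converted to a sigma value. The sketch below uses Python's statistics.NormalDist and applies the conventional 1.5-sigma shift; the counts are hypothetical, and the team's own conversion may differ.

    # Sketch of the release sigma roll-up: development plus post-production
    # defects, divided by total test cases (opportunities), converted to a
    # short-term sigma value. Counts are hypothetical.
    from statistics import NormalDist

    def release_sigma(dev_defects, post_prod_defects, test_cases):
        """Sigma value after triage: all release defects over total test cases."""
        dpo = (dev_defects + post_prod_defects) / test_cases  # defects per opportunity
        return NormalDist().inv_cdf(1.0 - dpo) + 1.5          # conventional 1.5-sigma shift

    print(f"Release sigma: {release_sigma(28, 3, 650):.2f}")   # -> about 3.17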

The Six Sigma team implemented changes such as RCA for all post-production defects. RCA provided learning that was then fed back to all IT and business teams so they could continuously fine-tune their associated processes. To support this, the project’s executive sponsor approved plans to simplify tracking and oversight by consolidating the tracking of defects to a central location, while providing easily correlated trending. Incident, problem and release management were required to coordinate more closely during the post-production phase.

To determine the cost of a post-production defect, the team and their financial analyst put together a model that included IT costs, business operations costs, sales costs and customer impact costs. The intention was to sample costs for a variety of defects that occur in production and develop a cost index based on this sample (Figure 1).

Figure 1: Post-Production Defects Cost Benefit Model
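
The structure of such a model can be sketched as the sum of the four cost categories the team identified, normalized against a baseline defect to form an index. The category names follow the article, but the layout and all figures below are hypothetical placeholders, not ING Direct's actual model or data.

    # Illustrative sketch of a post-production defect cost model and cost index.
    # All monetary figures are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class DefectCost:
        it: float                    # IT remediation effort (fix, retest, redeploy)
        business_operations: float   # manual workarounds, operational handling
        sales: float                 # lost or delayed sales attributable to the defect
        customer_impact: float       # support contacts, service credits, goodwill

        def total(self):
            return self.it + self.business_operations + self.sales + self.customer_impact

    def cost_index(defect, baseline):
        """Express a sampled defect's total cost relative to a baseline defect."""
        return defect.total() / baseline.total()

    baseline = DefectCost(it=2000, business_operations=500, sales=0, customer_impact=250)
    severe = DefectCost(it=6000, business_operations=4000, sales=2500, customer_impact=5000)
    print(f"Cost index: {cost_index(severe, baseline):.1f}x")  # -> 6.4x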

UAT engagement tasks were specified and scheduled earlier in the sprint timeline (Figure 2). The team determined the cost associated with post-production defects, reinforcing the fact that remediation is less expensive in the earlier stages of development.

Figure 2: UAT Involvement Start

Fast-forward to Quality

Today, with the process successfully fine-tuned, continuous feedback on quality is provided by pushing quality metrics to all stakeholders every night. The metrics can also be pulled by any team member in real time. This feeds into the discussion at each Agile Scrum team's daily stand-up, as well as the leadership S2 and S3 stand-ups. These stand-ups hold team members accountable for their committed goals while increasing collaboration and enabling them to identify gaps and process improvements.

Under this system, team members and leadership get a broader view of quality throughout the entire development process. An informed decision process enables them to make daily and hourly decisions that minimize impediments for the team. Ultimately, the metrics allow the teams to deliver the highest quality release to both internal and external customers (Figure 3).

Figure 3: Release Quality Over Time

Release managers track sigma values and use historical data to predict future performance. Recent tracking efforts indicate that sigma values in development are a predictor for release quality sigma.
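
The predictive relationship described here can be pictured as a simple fit of historical development-phase sigma against final release sigma. The sketch below uses Python's statistics.linear_regression (available in Python 3.10+) with made-up historical values; it illustrates the idea of the prediction rather than the release managers' actual model.

    # Sketch: fit historical development sigma against final release sigma and
    # forecast the next release. All values are hypothetical.
    from statistics import linear_regression

    dev_sigma   = [2.8, 3.0, 3.2, 3.3, 3.5]   # development-phase sigma per past release
    final_sigma = [2.9, 3.1, 3.2, 3.4, 3.6]   # final release sigma for the same releases

    fit = linear_regression(dev_sigma, final_sigma)
    forecast = fit.slope * 3.4 + fit.intercept  # a release currently trending at 3.4 in development
    print(f"Predicted release sigma: {forecast:.2f}")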

As a result of the enhanced quality metrics from this project, the behavior of teams and individuals with regard to quality – and what they consider “done” – has shifted. Now, the entire development team, rather than just QA, drives quality. In addition, the project has provided both teams and leadership with a quality metrics dashboard that includes an easily recognizable indicator of progress against daily process targets (red if a target is not met, green if it has been met), as well as the overall release progress and quality. This information is also leveraged within the sprint teams’ daily standup meetings as they look to identify opportunities to remove each other’s impediments.
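
The red/green indicator amounts to a simple threshold check per daily target; a minimal sketch follows, with hypothetical metric values and directions.

    # Minimal sketch of the red/green daily-target indicator: green when the
    # target is met, red otherwise. Metrics and thresholds are hypothetical.

    def indicator(actual, target, higher_is_better=True):
        """Return 'green' when a daily target is met, 'red' otherwise."""
        met = actual >= target if higher_is_better else actual <= target
        return "green" if met else "red"

    print(indicator(actual=120, target=100))                      # test cases executed -> green
    print(indicator(actual=2, target=0, higher_is_better=False))  # open severity-1 defects -> red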

This quality metrics dashboard has resulted in some friendly competition across teams and has also enabled teams to reallocate themselves throughout the sprint. Lastly, the sigma calculation provides a measurement of quality throughout the sprint, as well as from one release to the next. Tracking quality over time enables the teams to show the return on investment (ROI) from the process, tools and people improvements implemented.

Following the Six Sigma process, every tier of IT development is now more keenly focused than before on optimization activities, such as team productivity, prioritization and resource management. Nearly all status reporting has been eliminated, as these data points are now pulled in real time from the integrated development framework. This change alone enables teams to focus better on the quality of the product and the speed of delivery of each new customer feature or function.

Significant work remains for the IT sprint teams. However, the opportunities identified via the enhanced quality metrics should lead to ongoing improvements. The Six Sigma process has improved the quality, throughput and speed with which the teams deliver features and functionality to the customer, and has embedded continuous improvement into how they work. As a bonus, the results of the Six Sigma process – a simple and visible dashboard for quality – remain long after the team departs.

About the Author