“As technology moves more and more into the very fabric of our existence, the real-time existence of consumers and businesses and the economy, the reliability of systems will need to reach the level of dial-tone. Consequently, the techniques used to develop systems, the quality of these systems, and the demands on the performance of these systems will all need to be higher than anything we’ve ever imagined. This, in turn, means that company stock prices will start to be impacted by project slippage. Project slippage is going to start taking out companies.”
That is the view of noted IT benchmarking specialist Dr. Howard Rubin, responding to questions about the major challenges facing the IT community during the next 10 years. Dr. Rubin continued:
“In short, we are entering a world of engineering that has increasingly greater potentials for massive financial loss due to poor delivery or lack of quality. Consequently, more of what we do in technology and software is going to have to be pre-assured in this new world. In the old world, when software was in its infancy, it simply wasn’t necessary. Innovation was enough. But in the future, delivery will be king.” His comments are from an interview in the IT Metrics and Productivity Journal (a special edition published in March 2006).
These insights, and the numbers supporting them, certainly obligate companies to better understand their entire IT service management system and tune its performance for maximum value. One vehicle to which many companies are turning to help accomplish that goal is the Information Technology Infrastructure Library, or ITIL.
If Dr. Rubin’s insights are accepted as correct, and it is further acknowledged that ITIL-based service management systems are being deployed to bolster IT performance, then a logical next question is how those systems can best be designed, implemented and optimized in each organization’s own context.
The Information Technology Infrastructure Library consists of eight books and an introduction volume published by the Office of Government Commerce, Norwich, United Kingdom. The titles of the eight books are:

- Service Support
- Service Delivery
- Planning to Implement Service Management
- ICT Infrastructure Management
- Application Management
- Security Management
- The Business Perspective
- Software Asset Management
Throughout the eight ITIL books describing the process infrastructure, authors consistently emphasize – quite correctly – that there is no “one size fits all” for best practice process detail. One example:
“Best practice is usually derived from a combination of systems thinking, subject experience and an approach to business excellence by staff. They are hardly ever found solely through product training or technical libraries. The reason for this is that best practices have to be developed in the business context and environment…” (Best Practice for ICT Infrastructure Management).
Correspondingly, the books are written at a necessarily high level detailing what to do, not how to do it (though limited examples of how are given from time to time). It is within this space – creating and/or optimizing the how in an organization’s specific business context and environment – that Lean Six Sigma offers tremendous value.
Understanding the potential synergy between ITIL’s what and Six Sigma’s ability to optimally determine how, a global Fortune 100 IT organization managing an annual budget greater than $1 billion recently decided to use the Design for Lean Six Sigma (DFLSS) methodology to design and implement its configuration management and change management processes. When collecting voice of the customer (VOC) data, senior engineers and technology managers were immediately surprised to discover aggressive employee requirements limiting the time spent interacting with the configuration and change management databases. Continuing with DFLSS tools in the form of Y-to-x flowdowns, the team was further surprised to find that the supporting tool and database configuration it had originally envisioned would fall well short of the needs of the process and its customers.
Because meeting these requirements also had been identified through VOC analysis as critical for both performance and adoption, the decision was made to enlist the tool’s developers – a third-party supplier – in a partnering redesign effort led by the DFLSS team. Other ITIL processes were treated as “customer segments” for the change and configuration management processes, and detailed input and output requirements for them already had been developed as a normal part of the DFLSS methodology. The team was able to roll these requirements into the tool’s redesign, building tightly integrated “hooks” into the other processes for later implementation. This also had the effect of priming the pump for continuous improvement activities and establishing a continuous improvement culture.
In the end, the process required significantly less time interfacing with the system than employees had expected, which made adoption easier. It also achieved a higher degree of tool-process integration than there would have been otherwise, which in turn made follow-on implementations of the ITIL-based inventory and asset management processes that much easier. In the words of one engineer, “We finally have a tool that supports the processes, not processes supporting a tool.” Other critical-to-quality measurements, such as the accuracy of the configuration management database, time to complete a change and percent of bad changes, performed similarly well during pilot and deployment. All are currently monitored via dashboards during normal managerial review cycles as part of the Control phase.
While detailing all the benefits from DFLSS identified by the senior IT engineers is not possible in this space, there was solid agreement that the DFLSS tools and techniques drove them to explore details, ask questions and make associations they normally would have glossed over. Notably, the team’s processes have been recognized by a leading ITIL consulting firm as being beyond anything they have seen in the industry, and there is a U.S. patent pending for the configuration management process.
As this brief illustration demonstrates, ITIL and Six Sigma should not be viewed as separate or disparate investments, but rather complementary systems that leverage one another toward the common goal of world-class IT performance. CEOs and CIOs should investigate using this powerful combination to bolster their competitiveness during the next decade.