Since the early 1970s, software and information technology (IT) have been significantly changing the way products, processes and services are designed, delivered and maintained. At an exponentially increasing rate, software and IT have become the driving force in the way people work, recreate and learn. Most experts estimate this global “industry” at about US $1 trillion in 2006, with data indicating that the cost of poor quality (COPQ) exceeds $250 billion. Some estimate it could be as high as $500 billion. That means that 25 to 50 percent of every company’s IT budget is at risk.
The software and IT COPQ includes the cost of canceled projects, over-budget projects, late projects, unplanned and excessive maintenance and support, as well as poor efficiency and lost productivity.
According to Dr. Howard Rubin, noted author, educator and researcher in computer science and technology: “In short, if these productivity problems don’t start turning around the cost of poor project management is going to start exceeding the entire output of countries…demand for quality and performance is really going to force this issue once and for all. And the engineering techniques and disciplines will follow.” (Computer Aid, Inc. (CAI) interview published in March 2006)
Six Sigma Is a Road to Results
The “engineering techniques” referred to by Dr. Rubin are succinctly encompassed in the Six Sigma for software and IT methodology being practiced by leading companies. For the first time in the software/IT domain, a fact- and data-based system to prioritize, problem solve and design new software-based systems is yielding breakthrough results.
CIOs, CTOs and CFOs are slowly realizing that the same methods (Lean Six Sigma) that transformed manufacturing and transactional processes into smooth, high-efficiency processes also will work in software and IT, if adapted to recognize the differences in these processes and the rapid rate of change associated with them.
In parallel with the explosive growth of technology, older industrial, manufacturing and service processes (as well as products) were being transformed by TQM, re-engineering and eventually Six Sigma, with the latter delivering amazing results in the processes and products to which it was applied. Many improvement efforts targeted the manual core of products and processes (human resources, procedures, work instructions, flow, specifications, etc.), often ignoring the technology that drives them, and in some cases working around technologies put there to help.
This likely occurred due to the lack of full integration and interoperability of these systems, processes and products. Organizational silos, departmental boundaries, personal preferences and incompatibility, all driven by the lack of a consistent methodology to improve and integrate processes and products (and the technologies that drive them), further contributed to the problem.
The rapid rate of change in technology also promotes the reactive behavior paralyzing most software and IT organizations. In the pursuit of staying on top of the technology curve and all of its desired benefits, organizations continuously jump to new solutions before the benefits of the initial solution are realized. Further, the advertised business benefits of these systems are rarely measured and audited to verify and validate results.
Causing Dissatisfaction and Inefficiencies
These actions often create large disconnects, frustrated employees, customer dissatisfaction and inefficiencies in the very processes and products they were intended to help.
IT examples of this dysfunction include companies that have spent millions on enterprise-wide system deployments only to run a shadow system of spreadsheets. Is this really what was intended or expected? Unfortunately, many organizations are doing this. A company can check whether this is happening by taking an inventory of the reports, dashboards and spreadsheets that are generated outside of the enterprise software environment to measure and make decisions on recurring processes. Estimate the impact, importance and cost of these “outboard” systems. Is this not a cost of poor quality? What is the cost? All fodder for Six Sigma projects.
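Such an inventory lends itself to a very simple cost estimate. A minimal sketch, with entirely hypothetical system names, hours and rates, might look like this:

```python
# Hypothetical inventory of "outboard" reporting systems maintained
# outside the enterprise software environment. All names, hours and
# rates below are illustrative assumptions, not real data.
shadow_systems = [
    # (system, hours per month maintaining it, fully loaded hourly rate in $)
    ("Weekly sales spreadsheet", 40, 75),
    ("Ops dashboard rebuilt in Excel", 25, 75),
    ("Manual inventory reconciliation", 60, 75),
]

# Annualize the recurring labor cost of keeping these systems alive.
annual_cost = sum(hours * rate * 12 for _, hours, rate in shadow_systems)
print(f"Estimated annual cost of shadow systems: ${annual_cost:,.0f}")
```

Even this crude tally turns a vague sense of waste into a dollar figure that can justify (and baseline) a Six Sigma project.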
In the product software arena, consider products like cell phones, in which six sigma hardware (built on a six sigma production line) runs on a three sigma network, is then delivered with two sigma service and contains software so complicated that even a software engineer cannot figure out how to use all of the functionality. In this case, what is the quality level of the total customer experience? Perhaps two sigma? Is the innovation of a razor-thin phone alone enough to drive sustainable market gains and financial success? While it once was, it no longer is.
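The intuition that the total experience sits near two sigma can be checked with a rolled throughput yield calculation. The sketch below assumes the conventional 1.5-sigma shift when converting a short-term sigma level to long-term yield; the stage sigma levels are the illustrative ones from the cell phone example:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def yield_from_sigma(sigma_level, shift=1.5):
    """Long-term yield for a short-term sigma level (1.5-sigma shift convention)."""
    return nd.cdf(sigma_level - shift)

def sigma_from_yield(y, shift=1.5):
    """Short-term sigma level equivalent to a long-term yield."""
    return nd.inv_cdf(y) + shift

# Illustrative sigma levels for each stage of the customer experience.
stages = {"hardware": 6.0, "network": 3.0, "service": 2.0}

# Rolled throughput yield: the customer only gets a defect-free
# experience if every stage performs defect-free.
rolled = 1.0
for level in stages.values():
    rolled *= yield_from_sigma(level)

print(f"Rolled throughput yield: {rolled:.1%}")
print(f"Equivalent overall sigma: {sigma_from_yield(rolled):.2f}")
```

Under these assumptions the combined experience yields roughly two-thirds defect-free encounters, an overall sigma level of about 1.9, which is why the weakest stage, not the six sigma hardware, dominates what the customer sees.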
The Six Sigma methodology can greatly improve this kind of performance by creating structure, prioritization and basic tools with which to more fully understand the problems that technology is supposed to solve, the requirements that customers have and the measurable business implications of various concept and solution trade-offs.
So it is clear that in many cases software and IT is the process that now controls a company’s business processes. This raises the questions “are we,” “can we” and “should we” be controlling it, and how? The quick answers are yes, yes and yes. But it is wise to explore the answers more fully and put them into proper context.
First, ‘Are We?’
When discussing information technology, a yes answer is usually justified by pointing to service level agreements (SLAs), IT dashboards, call center and help desk statistics, resolution cycle times and bug fixes. Many times, when looking at these metrics, the associated improvement rates are not evident, not consistent and often not even viewed in that context.
How would these items be characterized if given the following choice: reactive or preventative? Be honest, the answer is probably reactive.
Are these leading or lagging indicators? Mostly lagging.
Where is the preventative notion here? What drives a company to do cause-and-effect analysis? How can anyone sift through the plethora of variables to isolate a critical cause?
When will organizations realize that this process needs to be treated like all others? That means that critical variables need to be characterized, prioritized, understood, measured and controlled. Six Sigma provides the methods and tools to do just that.
In software development and engineering, the qualified yes references testing, defects per thousand lines of code (DpKLOC), inspections, bug fixes, etc.
Again, this is mostly a “detect and react” type of answer. How often are strong disciplines and tools used, such as concept engineering, language and context data collection, or tools to discover where defects were inserted versus found? Companies spend huge amounts of time and money on testing, but rarely do they focus on the point of insertion of defects to isolate cause. Further, companies rarely examine patterns of systemic defects, code and performance. Again, Six Sigma provides answers through the orderly and consistent use of proven tools.
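The shift from “detect and react” to cause isolation can start with nothing more than tagging each defect with where it was inserted, not just where it was found. A minimal sketch, using an entirely hypothetical defect log:

```python
from collections import Counter

# Hypothetical defect log: each defect is tagged with the phase it was
# inserted in and the phase it was found in. All entries are illustrative.
defects = [
    ("requirements", "system test"),
    ("requirements", "production"),
    ("design", "code review"),
    ("coding", "unit test"),
    ("coding", "system test"),
    ("coding", "unit test"),
]
kloc = 12  # thousand lines of code in the release (assumed)

# Classic lagging metric: defect density.
dpkloc = len(defects) / kloc
print(f"Defect density: {dpkloc:.2f} defects/KLOC")

# Leading view: counting by insertion phase points improvement
# effort at the cause, not just at the detection step.
inserted = Counter(phase for phase, _ in defects)
for phase, count in inserted.most_common():
    print(f"  inserted in {phase}: {count}")
```

In this toy data the density number alone says nothing actionable, while the insertion tally immediately flags coding and requirements as the phases to attack.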
Second, ‘Can We?’
Of course. And many early adopters have. What they have in common is leadership that is not afraid to confront traditional norms and conventional wisdom to solve business problems. They are willing to implement a strategy of change, invest in human capital, create a fact- and data-based environment and insist on performance and prevention – not simply complying with a standard. It is the only way to break the chain of reactive behavior. The human ability to change is governed by the desire to change. The desire often comes from a need, and a need from a company’s business situation.
Too often, companies are driven by an antiquated planning process that is subjective at its core, laced with intangibles and incapable of being measured. If this characterization sounds familiar, simple prioritization tools like failure mode and effects analysis (FMEA) and cause-and-effect analysis can go a long way toward aligning business needs, customer needs and IT budgets. A higher level of prioritization at the front end of planning can significantly reduce downstream failures.
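An FMEA-style prioritization of planning-stage risks is itself simple arithmetic: score each failure mode for severity, occurrence and detection, then rank by the product. A minimal sketch with hypothetical failure modes and scores:

```python
# FMEA-style prioritization sketch. Severity, occurrence and detection
# are each scored 1-10; the risk priority number (RPN) is their product.
# All failure modes and scores below are illustrative assumptions.
failure_modes = [
    # (failure mode, severity, occurrence, detection)
    ("Requirements misunderstood", 9, 6, 7),
    ("Budget estimated without data", 7, 8, 5),
    ("Benefits never measured post-deployment", 8, 9, 3),
]

# Rank failure modes by RPN, highest risk first.
ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for name, sev, occ, det in ranked:
    print(f"RPN {sev * occ * det:3d}  {name}")
```

The output ordering, not the absolute RPN values, is what matters: it tells the planning team which risk to address first before budgets are committed.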
Part of good planning is following up on business cases. In other words, planning needs to be a closed loop process, starting with needs and finishing with results. If the results are great, it reinforces the process and drives further change. If the results are poor, a company should learn why and do it better the next time.
But remember, if nothing is measured or followed up on, the process will not yield any improvement. The answer to the question, therefore, is “We can if we really desire to.”
Third, ‘Should We?’
Is anyone kidding? Just ask the question: Is it OK for a software or IT department to cruise along wasting 25 to 50 percent of its budget? And how acceptable is that if the company has beaten down costs (through quality improvements) in manufacturing, supply chains and transactional/service processes? Sooner or later, all of the other functions that have been “beat up” will start asking what software and IT is doing to contribute its fair share. Every company should quantify, report on and set aggressive targets to improve and eventually mitigate waste.
Sure, aspects of software and IT are different – but not that different. It is a lame excuse to say these processes cannot be measured. In a 2002 National Institute of Standards and Technology study on software and IT, more than 40 key metrics in six attribute categories were listed as being used by best-practice companies. Detailed data is available across multiple industry segments. It is out there. Companies need to learn from this and start using it. It is this data that helps to answer the question: “Should we?”
Some will argue the cost is too high or the resource commitment too great to change. This is a short-sighted and convenient argument, but it is not logical, especially for an organization caught in the reactive, fire-fighting loop of late projects, canceled projects and over-budget projects.
Having answered these three simple questions, companies can be on their way to changing their processes to become more preventative in nature and more oriented toward results. This will lead organizations down a path to control the process that controls their success.