Whether the business unit is called information systems, information technology, software development or some other name, the group that provides these services is often among the most misunderstood and least well-defined in a corporation. And it is no wonder: technology capabilities, and dependence on them, are rapidly expanding amid an ever-changing array of desktop and enterprise solutions, bridged by a plethora of hardware and software management tools.
While technology can be a wonderful thing when thoughtfully selected, skillfully implemented, and carefully monitored, it can equally be a burden if it is carelessly selected, rushed into service and never checked up on.
At the high end of the spectrum is the impact on U.S. taxpayers of the following example:
In February 2005, the FBI disclosed that it may need to scrap a $170 million project which would have given FBI agents and other homeland security professionals the ability to “connect the dots” for faster and more reliable information exchange related to homeland security issues, including terrorist attacks. One of the reasons cited for the failure of the project was that only 10 percent of the desired functionality was delivered in the final product (Source: CNN.com).
Ten percent! In some circles, that is a 90-percent failure rate. Worse yet, nearly 30,000 brand-new desktop computers and a state-of-the-art high-speed network are sitting idle, waiting for the system.
Poor Quality Has a Cost
How and why does this happen? Well, believe it or not, it happens all the time – in both the public and private sectors. The simple answer is analogous to why inferior manufactured products in the 1960s and 1970s cost the United States millions of jobs in the automotive, appliance, steel and electronics industries. Simply stated, it was “poor quality.” Obviously, poor quality has a cost.
The cost of poor quality (COPQ) in the FBI situation will be $170 million (the write-off), plus the cost of wasted capital, plus the cost to create a replacement system. It is easy to see that the real COPQ for this $170 million failure will likely approach $500 million in taxpayer money. That is big money when we consider that just a few months ago the healthcare industry (which badly needs improvements to its IT infrastructure) was touting President Bush’s request for $125 million for “healthcare information technology projects” as a major milestone. Will the FBI debacle repeat itself in the healthcare initiative? In another public sector? Or in a private company?
Private industry has had its share of software and IT failures as well. Consider the AT&T switch failure (attributed to a software flaw in April of 2000), which caused a system-wide outage and drove AT&T’s (and switch manufacturer Cisco’s) stock price down considerably. A similar 19-hour outage at AOL drove its stock price down 4.3 percent a few years ago. This also was blamed on “software bugs and human error.”
The Technology/Service Connection

| When Technology Works | When Technology Fails |
| --- | --- |
| A Key Enabler | Missed Requirements |
| Productivity/Efficiency Gains | Confusion |
| Speed/Cycle Time Improved | Duplicate Processing |
| Quality Improved | Errors (in Multiples) |
| Communications Streamlined | Poor Communications |
| Measurement Automated | Undoing of Previous Gains |
| Levels of Control | Loss of Control |
| Rapid Decision Making | Lack of Buy-in |
DMAIC and DFSS allow the bridging of the technology/service gap.
Other software development war stories include the failed development of a new reservations system at American Airlines and the trading system on the Tokyo Stock Exchange. Billions in investment capital were squandered on the COPQ of these and other large projects. Research by industry analyst firm The Standish Group indicates that roughly 25 percent of all software and IT projects are scrapped, and that the remaining projects run significantly late and over budget. According to a recent interview with Dr. Howard Rubin, senior adviser at Gartner and professor emeritus at Hunter College/City University of New York, the IT industry will spend $1 trillion in 2006, with $500 billion of that going to projects. This could place the software/IT industry COPQ at well over $250 billion for the year.
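A back-of-the-envelope calculation shows how an industry-wide COPQ north of $250 billion is plausible from these figures. The sketch below uses the scrap rate and spending numbers cited above; the 35 percent average cost overrun on surviving projects is an assumed figure for illustration only, not one taken from the sources cited.

```python
# COPQ estimate from the figures in the text (scrap rate, project spend)
# plus one labeled assumption (average overrun on surviving projects).
project_spend = 500e9   # Rubin: ~$500 billion of the $1 trillion IT spend goes to projects
scrap_rate = 0.25       # Standish Group: ~25 percent of projects are scrapped outright
overrun_rate = 0.35     # ASSUMPTION: average cost overrun on the projects that survive

scrapped_cost = project_spend * scrap_rate                  # money written off entirely
overrun_cost = project_spend * (1 - scrap_rate) * overrun_rate  # waste on late/over-budget work
copq = scrapped_cost + overrun_cost

print(f"Estimated annual software/IT COPQ: ${copq / 1e9:.0f} billion")
```

Even with a modest assumed overrun rate, the scrapped projects alone account for $125 billion, and overruns on the remainder push the total past the $250 billion mark.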
So what are the “quality issues”?
Stemming from a Lack of Understanding
In the manufacturing sector, poor quality can mean shoddy workmanship (too much variation in raw materials or in the manufacturing process) or poor designs that are difficult to assemble. In software and IT projects, the majority of failures are related to the lack of full understanding, agreement, documentation, prioritization and distillation of up-front system and project requirements – in other words, the definition of what the customer really expects the system to do (operational requirements) and why (business results).
Uninformed client managers often gauge project status (and even success) by how much software code was written (and how fast), how much hardware was installed, or how much network bandwidth was made available. All important items and measures, but somewhat meaningless in isolation. These measures of success (or metrics) drive a “deadline behavior” as opposed to a “requirements behavior.” This problem is so deep that it affects the accuracy of how companies budget, estimate, manage, measure and assign resources to software and IT projects.
Many in software and IT have fought the notion that their processes can be documented, measured or controlled. However, many early adopters of the Six Sigma methodology in IT and software have proven otherwise. Success stories are beginning to emerge from large IT organizations and software development companies, including:
Sara Lee, where a large scale SAP implementation is using a Lean Design for Six Sigma approach. According to Sara Lee CIO George Chapplle, “We needed a fact- and data-based process to guide our development efforts, prioritize our decisions and verify our business case.” He turned to Lean Six Sigma for the answers after a successful experience at Heinz.
The key to these early successes is twofold.
Addressing the Causes, Not the Effects
The first is taking a little more time up front to clearly articulate the problem that the given technology is meant to solve. While on the surface this sounds simple, experience indicates that it is hard to execute accurately and efficiently. Most companies are wrapped up in schedules and milestones (as success factors), and incentive systems are often wrapped tightly around these notions. If the rewards are for checking off activities completed and milestones achieved, without the needed focus on achieving a business result, the system will never change.
Being busy does not correlate to getting results. Too many companies are busy tracking and chasing effects instead of causes. Why? Because effects are easy to see and there are not that many of them. They are obvious, and when an organization has a lot of deadlines to hit and no time to hit them, effects are the natural thing to focus on. Consider that for every effect there may be 30, 60 or 100 causes – some controllable, some not, some isolated and some inter-related. The average problem-solver, given the pressure of a schedule, is typically overwhelmed by the task of figuring out which of these causes is most significant and what to do about it. So they either guess at which one is significant (and their chance of being right with a guess is significantly less than 50 percent) or try to do a little work on all of them (sub-optimization).
Yet, solving this dilemma is not really that hard, and the statistics are fairly simple. The age-old algebraic equation says it all: Y = f(X1, …, Xn). If Y is the effect and the Xs are the causes, then why would anyone ever think that putting all the focus on the Y, or guessing at which X is most significant, would ever work? One thing Six Sigma helps with is understanding these relationships statistically, so work can be directed at the cause or combination of causes (Xs) most likely to change the effect (Y). This activity helps break the endless string of firefighting brought on by never really getting to the root cause of any effect. This equation is at the heart of the Six Sigma methodology and, with the DMAIC and DMADV roadmaps, drives a company through a logical, sequential process to efficiently find the significant Xs and act on them. This gives the highest probability of success and helps turn the tide of reactive behavior.
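The Y = f(X1, …, Xn) idea can be made concrete with a small statistical sketch. In the toy example below, all data is simulated purely for illustration: an effect Y is driven by only two of ten candidate causes, and an ordinary least squares fit – the kind of analysis a Six Sigma practitioner would run in the Analyze phase – surfaces the significant Xs instead of leaving the team to guess.

```python
import numpy as np

# Toy illustration of Y = f(x0, ..., x9): many candidate causes, few significant.
# All data here is simulated for illustration, not drawn from any real project.
rng = np.random.default_rng(0)
n, p = 200, 10                          # 200 observations, 10 candidate causes
X = rng.normal(size=(n, p))

# Hidden truth: only x2 and x7 actually drive the effect Y; the rest are noise.
y = 3.0 * X[:, 2] - 2.0 * X[:, 7] + rng.normal(scale=0.5, size=n)

# Ordinary least squares: estimate each candidate cause's contribution to Y.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rank causes by the magnitude of their estimated effect.
ranked = sorted(enumerate(coef), key=lambda item: -abs(item[1]))
for i, c in ranked[:3]:
    print(f"x{i}: estimated effect {c:+.2f}")
```

The two real causes stand out with estimated effects near +3.0 and -2.0, while the other eight hover near zero – exactly the screening of significant Xs that spares a team from guessing or sub-optimizing across all of them.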
Bringing Everyone into the Learning Environment
Secondly, to address causes, a company must find a way to bring stakeholders, customers, process owners and business clients along on this journey. Through inaction, software and IT organizations have taught these individuals bad habits with regard to measuring project success and failure. The result is that software and IT organizations are set up for failure before they ever begin the work. To change this organizational norm, we must change the dialog with customers to the quantitative dialog of business. This means communicating clearly about requirements – speaking in, agreeing to and documenting quantitative terms – and actually modeling the desired outcomes.
Many of the Six Sigma leading-edge organizations are making progress on this front by creating shared learning experiences for business stakeholders and information systems, information technology and software development personnel. In practical terms, this means that software and IT executives (CIOs and CTOs) must identify early adopters and willing partners from among their business clients. They need to agree up front that they are going to try a new process (Six Sigma) that may shake up traditional project practices and cycles. From there, joint classes should be run to teach the Six Sigma methods, tools and behaviors and link their application to an active project. Linkage to other tools and practices also should be covered (things like the Information Technology Infrastructure Library, the Capability Maturity Model, the Project Management Body of Knowledge, etc.).
Through this shared learning environment, the business clients and the information systems, information technology and software development community learn to appreciate each other’s viewpoints and discuss differences based on facts and data, not hearsay. In the end, projects run this way may spend a little more time up front but, in the long run, they make or save significantly more money, incur lower post-release support costs and are far more feature-rich for end users.
Conclusion: Time to Lose the Excuses
Some of the bold early Six Sigma adopters are making great strides and reaping the benefits. For example, IDX Corporation (now part of GE Healthcare) described financial gains in its first two years using Six Sigma as 2 percent to 3 percent of revenue – making the Six Sigma return on investment about $10 to $15 million (Source: iSixSigma Magazine, April 2005). Yet, most in the information systems, information technology, software development community are still using the old excuses, including:
- “We can’t be measured.” (Code for “We will not be measured.”)
- “Our clients dictate how we prioritize and schedule…” (Meaning “We do not want to have the tough data-based discussion.”)
- “We’re using good project management tools.” (Great, but what about the business results?)
- “Technology is changing too fast to keep up with.” (This is not going to change. Learn to prioritize, leverage and manage it.)
- “There is not enough time to do Six Sigma.” (Spending 80 percent of the time solving the same problem over and over sounds like a waste of time.)
Six Sigma is not a new methodology to solve new problems. Many view it as something on top of everything else they already have to do. It is not. Six Sigma is a tried-and-proven methodology to solve today’s problems efficiently and permanently. It is time to lose the excuses and address today’s big picture in software and technology.