Those familiar with software implementations know that they seldom go as planned. Delays are common, cost overruns are endemic and failures are frequent. Why is this so? And what can Six Sigma do about it?
According to Capers Jones, internationally known software management author and consultant, the two most common causes of software project failures are customer requirement problems and estimating problems. Two project scenarios that feature common pain points provide examples of just how Six Sigma can address these aspects of software implementation. Case 1 examines some of the ways Six Sigma can help with customer requirements. Case 2 is about the role of Six Sigma in schedule estimating.
Case 1: Understanding the Voice of the Customer
Message to the software implementation team: “Management has told us to select and install a new enterprise resource planning (ERP) system by the end of next year. We started six months ago and it feels like we’re getting nowhere. We can’t get our customers to agree on requirements. They’re very vague about what they want, and they seem to keep changing their minds.”
Design for Six Sigma (DFSS) incorporates many tools that help project teams understand the “fuzzy front end.” One of the common misconceptions concerning Six Sigma is that it is all about statistics. In reality it is about the disciplined use of any type of information – quantitative or not.
A DFSS approach to customer requirements is fundamentally different from that typically practiced in software deployment efforts. It does not begin by asking the customer, “What are your requirements?” It begins by Six Sigma practitioners asking themselves, “What do we need to learn?” At first glance this may seem a fine distinction, but in practice it leads to a very different mindset, which in turn leads to very different outcomes. Implications of this different mindset include:
- Beginning with the customer’s goals/business objectives. This orientation devotes careful attention to desired outcomes in quantitative financial terms. It is true that most projects begin with a high-level statement of business objectives that initially motivate the project. But it also is true that this focus is quickly left in the dust as the implementation team rushes off to interview prospective system users to “find out what they need.” Most projects quickly lose sight of the what-would-this-be-worth? perspective. The team loses the connection between “what users say they need” and how successfully those stated needs will drive the intended business outcomes. The Y-to-x tree and the cause-and-effect matrix can be used to address this issue.
- Creating a fact-supported basis for selection and prioritization of functionality to be delivered. This point of view recognizes the risks of being adrift in the ebb and flow of the political tides inherent in every organization. Six Sigma tools that facilitate this include prioritization methods such as analytical hierarchy process (AHP) and conjoint analysis. The latter in particular is an effective tool to determine customer “utility curves,” which are related to choices among complex multi-dimensional alternatives. Tools such as extensions of Pugh concept selection are used to create a formal structure for evaluation of alternatives. This promotes a visible, fact-based decision process. Six Sigma tools can greatly facilitate governance and decision-making in complex multi-organizational selection processes.
KJ analysis, for example, is a formalized application of rules of abstraction that is a powerful tool for understanding the meaning hidden in fragmentary and incomplete language data. DFSS also incorporates needs/context analysis – a method for discovering unstated or latent requirements. These often are the source of opportunities to “delight” the customer by providing capabilities and solutions to problems that are often missed by traditional requirements elicitation methods. Six Sigma extensions of use cases provide another tool that enhances capability to identify the customer’s critical-to-quality requirements (CTQs).
Y-to-x trees and cause-and-effect analysis are tools that can be used to gain insight into the connection between specific features or capabilities of a software system and the business outcomes it is intended to drive. DFSS leads to a far more concrete, quantitative, fact-based discussion that results in delivery of the features that really matter (the CTQs) and exclusion of the gold-plate that adds to cost but contributes little to delivered business value.
Properly applied, the concept selection scorecard facilitates a fact-based conversation that examines the relationship between business value of prospective functionality and the cost/schedule/risk implications of delivering the functionality.
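The analytical hierarchy process mentioned above can be made concrete with a small sketch. The criteria and pairwise judgments below are hypothetical illustrations (not data from any real selection process); the calculation is the standard AHP column-normalization approximation of the priority vector.

```python
# Minimal AHP sketch: derive priority weights for selection criteria
# from pairwise comparisons. Criteria names and judgments are invented
# for illustration only.

# Pairwise comparison matrix: entry [i][j] expresses how strongly
# criterion i is preferred over criterion j on the standard 1-9 scale.
criteria = ["business value", "implementation cost", "schedule risk"]
A = [
    [1.0, 3.0, 5.0],     # business value vs. each criterion
    [1/3, 1.0, 2.0],     # implementation cost vs. each criterion
    [1/5, 1/2, 1.0],     # schedule risk vs. each criterion
]

# Approximate the principal eigenvector: normalize each column,
# then average across each row to get the priority weights.
n = len(A)
col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
weights = [sum(A[i][j] / col_sums[j] for j in range(n)) / n
           for i in range(n)]

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.2f}")
```

The resulting weights sum to 1 and give the team a visible, fact-based rationale for ranking candidate functionality instead of leaving prioritization to political tides.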
Case 2: Understanding Process Capability
Message to the software implementation team: “We’ve just slipped the scheduled live date for the third project in a row, and our CEO has made it clear we’re going to get outsourced if this happens again. What can we do to make sure it doesn’t happen again?”
While it is recognized that the estimating problem interacts with the requirements issue, for the sake of clarity, the assumption here is that the requirements are known and stable, so the estimating problem can be considered in isolation.
It is good to first explore estimating problems in generic terms, independent of the peculiarities of software-related projects. For example, suppose one wanted to estimate how long it would take, and what it would cost (in gasoline) to drive a certain vehicle from point A to point B. This seems a fairly simple challenge: Look up the distance, find out the miles-per-gallon rating of the vehicle and do the math. As a check, the estimate is run by a truck driver friend, Mr. P. He says, “Wait a minute, partner. What about the road and the traffic? What speed are you gonna drive? What’s the load in the vehicle? Where did you get those miles-per-gallon numbers, anyway?”
Oops, maybe this is not as easy as it seemed. Translating this back to the world of software implementations, each of the issues raised by Mr. P can be explored in a Six Sigma context.
- “Miles per gallon” might be called capability in Six Sigma terms. The vehicle’s inherent efficiency corresponds to the project team’s experience, training, leadership, methods and tools. The Six Sigma idea of capability quantifies the relationship between a customer’s requirement (e.g., fuel efficiency) and the team’s ability to satisfy that requirement. Each customer requirement relates to a different aspect of the team’s capability.
- “Load in the vehicle” is the magnitude or size of the problem to be addressed – the quantity of functionality to be delivered.
- “Road and traffic” represents the organizational realities that must be faced (size, locations, politics, authority, governance, etc.). Different roads might demand different capability – wide and straight will tolerate some weaving about; curvy and narrow will not. A defect level acceptable for an accounting system will not be adequate for an aircraft guidance system.
- “Speed” is the relationship between the actual schedule and a nominal schedule. One convenient way to define “nominal” is to equate it to a least-cost/least-risk option. In driving terms, this might be 40 or 50 miles per hour for many vehicles/drivers/limited-access roads – the schedule that minimizes the risk of crashes and maximizes miles per gallon. In software project terms, this is the schedule that minimizes cost and delivers a quality product with an acceptable level of defects. Slower is cheaper and safer, but the trip also is longer.
With these analogies in mind, looking through Six Sigma glasses provides another view.
- Capability – A project team’s capability can be quantified by reference to past performance if the same or similar team has done the same or similar projects in the past. In the world of software implementations, that may be a big if, as it is common to do them with different teams in each instance. When historical data is available, it can often be used to get at least an approximate measure of capability, albeit often accompanied by a large standard deviation. When local history data is not available, it is often possible to use industry data from various benchmarking sources and/or from commercially available software project estimating tools.
- Size – This is the “weight” of the project. If there is no quantification of the project’s magnitude, one of the most important x’s in the Six Sigma fundamental equation Y = f(x1, x2,…xn) is missing. In other words, without knowing size, the schedule or cost (the Y’s) is being predicted without one of its most important drivers captured in a fact-based, quantitative way. Most software implementation projects never quantify size. This contributes significantly to errors in estimating, since the resulting estimates are merely guesses, and guesses are rarely accurate.
- Organizational Realities – This is another important set of x’s that are rarely quantified, but potentially can be. One software firm has made a science out of characterizing the attributes of each organization implementing its software. The company characterizes more than 20 factors such as number of users, number of locations and degree of executive engagement. By using Six Sigma to assess the realities of an organization, the firm has improved the accuracy of go-live predictions from about 50 percent to more than 90 percent.
- Speed – There frequently are situations in which the schedule for an implementation is mandated from the top down without regard to any measurable x’s. Executives who do this apparently believe in magic or are in denial about the laws of physics as they apply to software implementations. When management demands a three-minute mile when the world record is about 3:43, it is not malicious, just ignorant. Six Sigma modeling methods can change wishful thinking into a serious fact-based conversation. The facts to be faced may be unpleasant but are realities nonetheless.
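The capability and size points above can be combined into a simple fact-based point estimate, in the spirit of Y = f(x1, x2,…xn). This is a minimal sketch with invented historical figures; real estimating would use local or benchmark data and carry the large standard deviation forward rather than hiding it.

```python
# Hedged sketch: estimate a schedule (the Y) from measured size and the
# team's capability derived from past projects (the x's).
# All historical figures below are invented for illustration.
import statistics

# Past projects: (delivered size in function points, elapsed months)
history = [(420, 7.5), (610, 11.0), (350, 6.0), (500, 9.5)]

# Capability: function points delivered per calendar month,
# with its spread made visible rather than ignored.
rates = [fp / months for fp, months in history]
mean_rate = statistics.mean(rates)
sd_rate = statistics.stdev(rates)

new_size = 550  # measured size of the new project, in function points
estimate = new_size / mean_rate

print(f"capability: {mean_rate:.1f} +/- {sd_rate:.1f} FP/month")
print(f"point estimate: {estimate:.1f} months")
```

Even this crude model forces the two questions most mandated schedules skip: what is the size, and what has this team actually delivered before?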
The Common Theme
Six Sigma, more than anything else, is about “managing by fact.” All of the pain points discussed originated in a lack of facts. All of the solutions rested on obtaining, analyzing and acting on facts – not on fault-finding, finger-pointing or mass executions.
Probably some readers are thinking, “OK, that’s good theory, but what do I do if I don’t have the data I need?” The answer is to recognize that no data is itself data: it means the project has unquantified variables. That, in turn, means management must resort to a tool such as Monte Carlo simulation that allows modeling of the potential consequences of a range of possibilities.
Even when no one knows the appropriate value to assign to a variable, at least a worst-case/best-case/most-likely value (perhaps with a probability distribution) can be assigned. And then with that information, the range and probability of potential outcomes can be modeled.
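The worst-case/best-case/most-likely approach just described can be sketched in a few lines. The phases and month figures below are hypothetical; each unquantified variable is modeled as a triangular distribution and the spread of total-schedule outcomes is simulated.

```python
# Minimal Monte Carlo sketch: assign best/worst/most-likely values to
# unquantified schedule variables and simulate the range of outcomes.
# All phase durations are hypothetical illustrations.
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_schedule(trials=10_000):
    outcomes = []
    for _ in range(trials):
        # random.triangular(low, high, mode):
        # low = best case, high = worst case, mode = most likely
        build    = random.triangular(4.0, 9.0, 6.0)  # build/configure
        testing  = random.triangular(1.5, 5.0, 2.5)  # test and fix
        training = random.triangular(0.5, 2.0, 1.0)  # user training
        outcomes.append(build + testing + training)
    return sorted(outcomes)

outcomes = simulate_schedule()
median = outcomes[len(outcomes) // 2]
p90 = outcomes[int(len(outcomes) * 0.9)]

print(f"median schedule:  {median:.1f} months")
print(f"90th percentile:  {p90:.1f} months")
```

The output is not a single date but a distribution, which is exactly the fact a manager needs: a mandated date can now be compared against the probability of actually hitting it.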
Understanding the uncertainty of potential outcomes is, after all, one of the most important facts any manager can have in his or her possession.