Although information technology (IT) organizations share commonalities at a general level, each has its own unique set of characteristics interacting in ways that produce its own performance signature. Even performance measurements are not identical. Each is its own system, its own environment supporting a specific company’s needs. Just as companies are diverse in their processes and needs, so too must be the IT organizations upon which they depend.
While the purchased processes offered valuable information, it was fortunate that the IT organization had seasoned professionals on staff who thoroughly understood “systems thinking” and fundamental process engineering. The author now addresses why that was important. –Editor
The probability, therefore, that a specific process solution, orchestrated for one specific IT organization, would fit perfectly “out of the box” within another IT system is extraordinarily small. Dependencies upon supporting tools alone make it very remote. For the IT management professional, this means that importing a set of processes verbatim into an organization, without some kind of review for “fit,” carries momentous risk. In fact, many professionals go as far as to say it is irresponsible in light of the probabilities for success.
The fact is that variation is a physical property of nature, and therefore describable mathematically through statistics. Business systems (manufacturing in his case), Dr. W.A. Shewhart theorized, are a subset of nature and should be no different. He successfully proved his theory, as described in his landmark book Economic Control of Quality of Manufactured Product.
He further demonstrated that each “system” has its own unique dependencies, Y = f(x1, x2, x3, x4…xn) in his terms, that result in that system’s unique statistical signature. As is now familiar to Six Sigma practitioners, as the x’s vary, so does the Y, producing a statistical distribution, or signature. If a characteristic is added or removed, or if the value of an existing characteristic changes significantly, the statistical signature of the system can be altered. If so, then, by definition, it is a new system, with its own properties and behavior as described by the new statistical signature. Even processes that are 100 percent identical on paper, operating as little as a building apart, often exhibit different signatures, thereby informing business leaders that they are indeed separate systems with unique influences.
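The Y = f(x) idea can be sketched in a small simulation. The process model, factor names, and values below are illustrative assumptions, not data from the article; the point is only that changing a single x shifts the entire output distribution, yielding a new statistical signature:

```python
import random
import statistics

def cycle_time(staffing, ticket_volume, tool_latency, rng):
    """A toy Y = f(x1, x2, x3): hypothetical ticket cycle time in hours."""
    base = ticket_volume / staffing          # workload per person
    noise = rng.gauss(0, 0.5)                # common-cause variation
    return base + tool_latency + noise

def signature(staffing, ticket_volume, tool_latency, n=10_000, seed=0):
    """Sample the process many times and summarize its distribution."""
    rng = random.Random(seed)
    samples = [cycle_time(staffing, ticket_volume, tool_latency, rng)
               for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

# Two "identical" processes that differ in just one x (tool latency)
mean_a, sd_a = signature(staffing=8, ticket_volume=40, tool_latency=1.0)
mean_b, sd_b = signature(staffing=8, ticket_volume=40, tool_latency=3.0)
# The changed x shifts the whole distribution: by definition, a new system
```

Running both with the same seed isolates the effect of the single changed factor: the mean moves by exactly the latency difference, so the two organizations would plot as visibly different distributions on the same control chart.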
As any Six Sigma practitioner appreciates, the factors present in even a simple process are often numerous. Finding the critical few that actually impact and define the statistical signature is usually very difficult, as they tend to be hidden. Once found, knowing the precise value to assign to each of them in order to recreate the statistical signature of interest, and thus the system, is more difficult still.
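A crude way to screen for the critical few is to vary one factor at a time and measure the spread each induces in Y. The model and coefficients below are hypothetical assumptions made purely for illustration; real screening would use designed experiments, not this one-at-a-time sketch:

```python
import random
import statistics

def output(xs, rng):
    # Hypothetical process: x1 dominates, x2 matters a little, x3 is noise-level
    x1, x2, x3 = xs
    return 5.0 * x1 + 0.5 * x2 + 0.05 * x3 + rng.gauss(0, 0.1)

def induced_spread(factor_index, n=5000, seed=1):
    """Vary one factor over its range while holding the others at nominal,
    and report the spread it induces in Y (a simple sensitivity screen)."""
    rng = random.Random(seed)
    nominal = [1.0, 1.0, 1.0]
    ys = []
    for _ in range(n):
        xs = list(nominal)
        xs[factor_index] = rng.uniform(0.5, 1.5)   # sweep this factor only
        ys.append(output(xs, rng))
    return statistics.stdev(ys)

spreads = [induced_spread(i) for i in range(3)]
# The factor inducing the largest spread is a candidate "critical x"
```

Even in this three-factor toy, only one factor meaningfully shapes the signature; in a real IT operation the same critical few are buried among thousands of candidates, which is precisely why finding them is hard.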
If this is true of very simple process systems, think of how many thousands more factors are present when considering an entire IT operation as a system, all of them interacting in tens of thousands of different ways. Further, given the inherent physical property of variation in nature, what is the probability that two IT organizations behave identically at the “x,” or detailed, level?
With these laws of physics in mind, then, does an organization always have to begin from scratch every time it starts or expands its IT operation? Certainly not.
The effective way to utilize so-called “best practices” is to understand the fundamental, critical characteristics of the practice that make it effective in its native environment, and then engineer those characteristics into the target environment to meet the new environment’s unique needs. The concept can be illustrated with a simple example:
Assume you are a biological engineer and your Himalayan business partner has tasked you with engineering a highly effective hunting animal. The only major constraint is that the animal is required to be feline in nature. Oh, and there is the ever-present impossible deadline. One course of action, very attractive given all of the corporate pressures, would be to look at many consultants offering their “best practice” big cats from across the globe. Some of them come complete with cages, feeding systems and other similar integrated tools. Some even offer initial handling services to make sure the new hunter is deployed well.
After a quick-but-thorough review, you decide upon the mighty Leopard – a highly effective hunter with incredible performance numbers. So, you pay top dollar for this integrated solution – including a long-term relationship with its handlers and support personnel. It is guaranteed to work, right? You have seen its numbers in Africa and experts from around the globe concur on its effectiveness. Every organization is alike, right? So what works in Africa also will work perfectly for the Himalayas, right?
So, what happens when the Leopard is released into the Himalayan environment? It quickly succumbs, of course – a total roll-out failure due to the new environment. But why? It was highly effective – you saw the videos and numbers yourself. Not one expert across the globe dissented that the Leopard was a standard for effective hunting felines, and the consulting firm was highly reputable. It failed because one size does not fit all. What works in one system does not always work in another.
The alternative course of action would be to analyze the Leopard in its environment (its “system”), and identify those characteristics that make it effective – stealth, strength, camouflage, mobility and agility, for example. With this knowledge, you would then evaluate these traits relative to your system and its unique needs – the environment the design is expected to operate within. You would identify the critical x’s that impact these critical-to-quality traits (CTQs) in your unique system, and then engineer them into your animal.
The result in this example is a snow leopard! Not 100 percent identical, but retaining previously mentioned critical traits, along with additional critical details that optimize its performance within its specific system at an operational level.
You might say, “Well, what if I’m just over the mountain ridge in Africa, and not the Himalayas? I could use the Leopard ‘out of the box’ then, couldn’t I?” Yes, you certainly could. However, you would not know whether it was a perfect fit without an evaluation of the process in its native system, and relative to your own at a detailed, operational level. Without this knowledge, success would literally be random chance.
An IT organization that built a configuration management process so good that it received a U.S. patent reached that point because the executive in charge directed that such an evaluation be conducted on the ITIL-compliant processes that were purchased. The company discovered that the purchased processes lacked what was needed for a successful implementation in its environment. That was despite dealing with a highly reputable ITIL consulting firm that had been in the business for years and had succeeded with other companies.
Initially expecting minor modifications, the IT organization’s Design for Lean Six Sigma team quickly discovered that they were predominantly designing from scratch, incorporating, of course, useful characteristics from the purchased processes. In so doing, they ended up establishing a new standard and gaining a competitive edge through IT efficiencies.
Intense pressure for quick results can lull IT managers and executives into adopting an out-of-the-box solution – especially since many big-name consulting firms are advertising them. Though intuitively attractive, it is a siren’s call. The probability that a process suite optimized for another system is also optimal for every system is remote. The fast way to success is to understand “best practices” at a conceptual level, then utilize a performance-based methodology, such as Lean Six Sigma, to evaluate and optimize for the organization’s unique system.