iSixSigma

On Averages

(Or “On Arithmetic Means”, if you prefer.)

I hardly need mention in this forum that, as a standalone descriptive statistic, the average can be a dangerous piece of information. Averages quoted in the absence of other descriptive statistics are generally insufficient at best, and downright misleading at worst. I’ll skip the lecture on the importance of variation and related topics (Wheeler already has it nailed in his book anyway), but I do want to focus on one particular aspect of averages that I find a lot of people forget. Including me.

A lot of the time, the average never occurs.

For example, the average weight of the ten residents in my household is 31 pounds. No, we aren’t long-lost Lilliputians – in fact, no one in the house (me, my wife, the cat, the dog, 6 fish) weighs anything close to 31 pounds. Or, to paraphrase the old joke about statisticians: on average, they feel the temperature is just fine when sleeping with their head in the freezer and their feet in the fire.
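The household example is easy to check with a few lines of arithmetic. The weights below are illustrative guesses, not the author’s actual data; the point is only that the mean matches no individual:

```python
# Illustrative household weights in pounds (assumed values, not the author's data):
# me, my wife, the cat, the dog, and six fish
weights = [170, 120, 11, 9] + [0.02] * 6

mean = sum(weights) / len(weights)
print(round(mean))  # 31 -- the "average resident"

# Yet no actual resident weighs anything close to the mean:
print(all(abs(w - mean) > 20 for w in weights))  # True
```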

To a statistician or Black Belt or anyone else suitably well versed on the topic, this is hardly earth-shattering news. But to some people, the fact that a statistical average isn’t synonymous with a “usual” or “typical” value within the population is surprising. And even to those of us who know better, this fact can be a slippery one to hold on to.

Mathematically, the fact that the average might never occur in a population is not hard to understand or explain. And I don’t have trouble remembering it mathematically. The mistake I tend to make is conceptual.


Suppose, for example, you are designing a project tracking process. You might be using Excel spreadsheets and creating a home-grown solution for a dozen projects, or working with a vendor to roll out something much bigger and more complex for hundreds of projects. Either way, in my experience the tendency is to design the system for a project with “average needs” and “average complexity”. The problem is that the “average project” isn’t in your portfolio – it’s an entirely hypothetical entity. Design for it, and you’ll have a solution that perfectly fits a need that doesn’t exist.

Another example is assigning pre-work before Black Belt training. Sure, we might know what the “average person” needs to do – we’ve almost certainly designed our class for the “average participant” with an “average background”, right? But our classes don’t fill with average people. They fill with some people who know a lot and don’t need much pre-work, some who don’t know much and need a lot, and others somewhere in between. Assigning the same pre-work to every person will serve none of them adequately. We’d be far better off assessing each individual and offering a range of options based on specific needs. But for whatever reason, that’s almost never done. We get stuck on the “average need” and cater to it exclusively.

In both of these cases, the problem is that we use summary statistics like the average even though we probably shouldn’t be summarizing at all. If we are interested in accommodating every member of the population, what business do we have basing anything on summary statistics anyway? The entire point of many summary statistics is that they hide variation and complexity – they hide the mess, if you will. But sometimes the mess is exactly what we need to see and deal with. That is counter-intuitive, and counter to a lot of the training we Six Sigma folks give and receive.


(For some related thoughts on this topic, see Holly Hawkins’ last blog entry. Standardization is not always the answer!)

2 Comments

  1. Sue Kozlowski

    In our Green Belt training classes, we usually surprise some of our learners when we talk about "the average." Our initial teaching goes like this: let’s say you have a target cycle time for the Emergency Department of 2.5 hours from "door to depart." If your ED is hitting the target – that is, your average cycle time is consistently 2.5 hours – what have you achieved? The answer is that roughly half of your patients are being seen within the 2.5-hour target, and the other half are exceeding it. In other words, your "defect" rate is about 50%.
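    Sue’s point can be sketched with a few lines of simulation. The numbers below are illustrative, assuming cycle times roughly symmetric around the mean (real ED data would be skewed, which only changes the exact split, not the lesson):

    ```python
    import random

    random.seed(42)
    target = 2.5  # door-to-depart target, in hours
    # Assumed symmetric cycle times centred on the target (illustrative only):
    times = [random.gauss(2.5, 0.5) for _ in range(100_000)]

    mean_time = sum(times) / len(times)
    defect_rate = sum(t > target for t in times) / len(times)
    print(f"mean cycle time: {mean_time:.2f} h")         # ~2.50 -- the dashboard shows green
    print(f"share exceeding target: {defect_rate:.0%}")  # ~50% -- half the patients wait too long
    ```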

    This usually is startling to a good number in the class, who have been taught for many years that if you achieve your target average, you receive a "green light" on your department dashboard and all is well. Alerting them that the customer feels the variation, not the average, usually allows us to have very stimulating conversations, at least in the Green Belt classes!

    Thanks Andrew for reminding us of an important conceptual challenge.

  2. robert thompson

    A great post! Did you know that the vast majority of people in the UK have more than the mean average number of legs? I always warn of the dangers of blind reliance on mean average values as part of SPC implementation, especially with X-bar R charts. An out-of-specification point could be sampled, yet the other data points in the subgroup could mask it. The other cons of X-bar R charts are:

    • the R chart does not look at the variation in the entire process over time;
    • If the R chart is out-of-control, it’s probable that a measurement system or process equipment failure has occurred.
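    The masking effect described above is easy to demonstrate. The subgroup values and specification limit below are made up for illustration:

    ```python
    # Hypothetical subgroup of five measurements and an assumed upper spec limit:
    usl = 10.5
    subgroup = [9.8, 9.9, 10.7, 9.7, 9.9]  # one reading is out of spec

    xbar = sum(subgroup) / len(subgroup)
    print(max(subgroup) > usl)  # True  -- an individual point violates the spec
    print(xbar > usl)           # False -- the subgroup average (~10.0) looks fine
    ```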

    Check this site out for more on averages: http://tinyurl.com/3yo6w6

    Rob

    http://www.rob-thompson.net
    http://www.63buckets.co.uk (lean)
    http://www.qualityhero.co.uk (six sigma)
    http://www.linkedin.com/in/robertthompson
