In case you wonder what follows MBB certification – but forget your grammar you must. 9 years ago
I apologize for my ignorance, but can you give a tangible example of how you differentiate travel time vs response time? I don’t yet understand what the starting and ending events are for the two, or how the customer might be affected.
If we’re all lucky, maybe answering those questions will lead to the answer you seek. Usually I’m not quite…[Read more]
I fear that the Feds would indulge practitioners in pursuing D, M, and A. However, any attempt to “move somebody’s cheese” during the I phase would be, as Roosevelt once said, like trying to punch a pillow. (And Gingrich may be adept at buzz words, but his actions speak of a mindset that is light years away from anything remotely LSS – not that…[Read more]
I think the solution may be to develop a standard that reduces subjectivity. This could be done in a team environment, perhaps even with inputs from a customer panel. Once that is done, you have the option of creating “standard” units for visual comparisons. With or without that, you should be able to train your inspectors in the consensus…[Read more]
One thought might be to develop a flow chart of the process with appropriate participants. Then identify which steps are “creative” and which are “routine.” Focus on the “routine” steps for the study. 10 years ago
Robin Lawton wrote a book entitled Creating a Customer-Centered Culture. It’s available through ASQ Press. It has a good methodology for developing a measurement.
If you follow that approach, don’t forget the “operational definition” – that’s what you do to make sure everybody knows exactly what you are measuring, and how everybody knows…[Read more]
I’d suggest several things. First of all, is the difference in mean times big enough to care about? For instance, reducing an emergency room’s treat-and-release time from 240 to 230 minutes probably doesn’t interest anybody, whether or not it is statistically significant. If the difference has no customer impact, there’s no point in…[Read more]
First display the data in control chart format. Sometimes “special cause” variation can distort a distribution. If it appears stable, then consider using probability plots with normal and log-normal to start. Weibull can be considered, but log-normal often does the job and it’s easier to work with.
Minitab has a distribution ID utility under…[Read more]
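If you don’t have Minitab handy, the normal-vs-log-normal comparison above can be sketched in Python with scipy. This is a minimal illustration with made-up data, assuming you judge fit by the probability-plot correlation (higher is a better fit):

```python
import numpy as np
from scipy import stats

# Invented, skewed "cycle time" data for illustration only.
rng = np.random.default_rng(1)
cycle_times = rng.lognormal(mean=3.0, sigma=0.5, size=200)

# probplot returns (plot points, (slope, intercept, r));
# fitting the logged data against a normal tests the log-normal model.
_, (_, _, r_normal) = stats.probplot(cycle_times, dist="norm")
_, (_, _, r_lognormal) = stats.probplot(np.log(cycle_times), dist="norm")

print(f"normal fit r = {r_normal:.3f}")
print(f"log-normal fit r = {r_lognormal:.3f}")
```

For data like these, the log-normal correlation comes out higher, which matches the advice that log-normal often does the job for time-based metrics.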
If you have non-normal-looking data, here are some steps I recommend:
– Plot on a control chart to see if blatant special cause exists. If so, that special cause may be the reason your data appear non-normal.
– If the data are stable (or if you stabilize a special cause process), and if the data still appear non-normal, use a utility like…[Read more]
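The control-chart screen in the first step above can be sketched as follows. The data values are invented, and 1.128 is the standard d2 bias-correction factor for moving ranges of two:

```python
import numpy as np

def i_chart_limits(x):
    """Individuals-chart limits from the average moving range (d2 = 1.128)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))   # average moving range
    sigma = mr_bar / 1.128                 # short-term sigma estimate
    center = x.mean()
    return center - 3 * sigma, center + 3 * sigma

data = [12, 14, 13, 15, 14, 13, 40, 14, 12, 13]  # 40 is a blatant special cause
lcl, ucl = i_chart_limits(data)
outside = [v for v in data if v < lcl or v > ucl]
print(outside)  # the point at 40 falls above the upper limit
```

A point flagged here is the kind of special cause that can make an otherwise well-behaved distribution look non-normal.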
I’ve used it to compare several options for re-stocking medical supplies. I also coached design engineers using it to select from among several options, and in a construction environment to select alternate flooring materials.
As always, Robert, your understanding of the topic is accurate and impressive. I’d like to build on what you say with a few odds-and-ends comments if I may. I tend to favor “traditional” matrices most of the time, but I have found a few times when Taguchi’s contributions benefit what I do.
No matter whether the matrix is Taguchi or…[Read more]
Wouldn’t it be feasible to manipulate the data in Excel to achieve results similar to the paper-and-scissors option?
As for distribution identification, here is what I am thinking. I am sure some folks have seen me rant on this already, so if it’s old news I apologize in advance.
There are a number of statistical distributions in the world.…[Read more]
Let me see if I can re-state that accurately.
We compute the moving ranges as in an I-MR chart, and discard those moving ranges that would lie outside the control limits? Then we use the remaining moving ranges to estimate the standard deviation using d2?
That’s intriguing! I’m going to revisit some old data sets and see what that approach…[Read more]
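If I’ve restated the idea correctly, a rough Python sketch might look like this. The data are invented, D4 = 3.267 and d2 = 1.128 are the standard n = 2 chart constants, and I’m assuming a single screening pass rather than iterating:

```python
import numpy as np

D4 = 3.267  # moving-range chart upper-limit factor for n = 2
d2 = 1.128  # bias-correction factor for n = 2

def screened_sigma(x):
    """Estimate sigma from moving ranges, discarding those that
    fall outside the MR chart's upper control limit."""
    mr = np.abs(np.diff(np.asarray(x, dtype=float)))
    ucl_mr = D4 * mr.mean()          # upper limit for the MR chart
    kept = mr[mr <= ucl_mr]          # drop out-of-control moving ranges
    return kept.mean() / d2

data = [10, 11, 10, 12, 11, 30, 11, 10, 12, 11]  # one shift inflates two MRs
print(round(screened_sigma(data), 3))
```

On this made-up set, the two moving ranges straddling the shift get screened out, so the estimate is far smaller than the unscreened version would be.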
I have been told that cycle time is one metric where mean and variation are correlated. My experience bears this out. In other words, if you reduce one, the other is sure to come down, too. That’s not a bad thing.
I just wish I knew whether this correlation has been proven to be “real.”
You might want to peruse Robin Lawton’s book [i]Creating a Customer-Centered Culture[/i]. It’s available from Quality Press, and also via Amazon. It has some pretty good approaches.
As for “selling” improvement to sales, consider checking out Fox’s book Dollarization Discipline.
I totally agree with Robert, as usual.
I would add another tool to your analysis: the control chart. Hypothesis tests never were intended to differentiate whether or not the data come from a “stable” process. If special cause variation is present, the visual display can contain more useful process information than a hypothesis test would.…[Read more]
I agree with Stan, but with a caveat. I am confident each and every one of them claims to have done something they choose to call “implementing Six Sigma.” Where the challenge will lie is in figuring out which ones might have done a good job of it.
Greg Brue wrote a book on DFSS. It’s published by McGraw-Hill, and uses an acronym called IDOV. What I like about the book is that it’s fairly easy to read.
There are numerous acronyms: IDOV, DMADV, DMEDI, and so on. I agree with Stragydog to an extent: if you are hoping for a major creative breakthrough, luck may have as much to do with it as…[Read more]