My guess is they do not have much experience with snowballs in Delhi.
I have an idea, though: think about the failure cost of a product. At the point where the failure occurs, the cost is the manufacturing cost up to that point. As the product moves downstream, more content is added until the failure is detected. If this is an assembly, additional parts…[Read more]
Although I am sometimes criticized for being too simple and too liberal with interpretations that make sense, I wonder whether you actually had a Six Sigma project.
Clients have asked me what makes 6S different from other approaches to process improvement. I tell them there are two distinct requirements for 6S: financial measures for results, and…[Read more]
When in doubt, I suggest using the t-test. It is more conservative than the z-test, and as the sample size grows it approaches the z-test anyway.
Textbooks recommend using the t-test when the sample size is less than 30. But why is 30 a magic number?
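The "magic 30" can be seen directly by comparing critical values. A minimal sketch (assuming SciPy is available) prints the two-sided 95% critical value of Student's t for several degrees of freedom next to the normal (z) value of about 1.96:

```python
# Sketch, assuming scipy is installed: how fast does t converge to z?
from scipy.stats import t, norm

z_crit = norm.ppf(0.975)  # two-sided 95% z critical value, ~1.96
for df in (5, 10, 29, 100, 1000):
    t_crit = t.ppf(0.975, df)
    print(f"df={df:4d}  t={t_crit:.3f}  z={z_crit:.3f}")
```

By 29 degrees of freedom the gap between t and z is already under 0.1, which is roughly why textbooks treat 30 as "close enough" rather than as anything magical.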
Hi, I don’t know if your post was directed specifically at me, but I think it deserves an answer anyway.
I am a strong believer in “Economic Design of Control Charts.” I have written a lot on that subject.
SPC is not for a process that does not vary. (Is there any such thing?)
SPC is not wasted on “good” processes. It depends on the frequency…[Read more]
Hello M. Rao,
Capturing COPQ in a form is a difficult task because nobody likes to report it. The reported COPQ is generally like the visible part of an iceberg.
I worked at one company where each department head was trained in capturing and reporting COPQ. They then began reporting it to the best of their ability. The result was astounding,…[Read more]
I thought the rule was that the average number of defects per sample should be 1 or greater. If he has taken 2,400 observations, he probably has a good idea what the p-bar is. If the average count is less than 1, choose a sample size that makes it at least 1.
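The rule of thumb above reduces to simple arithmetic: the expected count per subgroup is n × p-bar, so the smallest subgroup size that yields an expected count of at least 1 is the ceiling of 1/p-bar. A minimal sketch (the 0.004 defect rate is a hypothetical value, not from the post):

```python
import math

def min_subgroup_size(p_bar: float) -> int:
    """Smallest n for which the expected count n * p_bar is at least 1."""
    return math.ceil(1.0 / p_bar)

# Hypothetical defect rate of 0.4% -> subgroups of 250 are needed.
print(min_subgroup_size(0.004))
```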
Hello Ron, here is the way I see it. (Not always right, but seldom completely wrong.) Well, he has process capability to base his control chart on. Rational subgrouping is not a requirement. We are looking at the variation within the subgroup and the variation between subgroups. Individuals charts are OK… but you can’t tell the difference…[Read more]
Michael Mead replied to the topic What should be the Big Y for a scrap reduction proj? in the forum General 11 years, 9 months ago
It seems you are at the “define” stage yet talking about the “measure” stage. In almost every case, scrap comes in different flavors. I’d find out the defect type of the scrap, maybe with a Pareto diagram, and look at combining the types into clusters based on cause or corrective action. That is the analyze…
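A Pareto breakdown of scrap by defect type can be sketched in a few lines. The defect names and counts below are purely hypothetical, just to show the ranking and cumulative-percentage mechanics:

```python
from collections import Counter

# Hypothetical scrap counts by defect type (illustrative only).
scrap = Counter({"porosity": 120, "cracks": 45, "mislabel": 30,
                 "dents": 20, "other": 10})

total = sum(scrap.values())
cum = 0
for defect, count in scrap.most_common():  # sorted largest first
    cum += count
    print(f"{defect:10s} {count:4d}  {100 * cum / total:5.1f}% cumulative")
```

The cumulative column is what identifies the "vital few" defect types worth clustering by cause or corrective action.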
Hello Coko, I am not sure you understand the concept of control limits. They are for averages; they are not directly related to individual parts. A Cpk of 2 does not mean that all parts will be within the control limits. I wish I could help you more, but this is a basic concept of statistical process control. In which country do you reside?
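The distinction between control limits (for subgroup averages) and capability (for individual parts) comes down to the divisor. A minimal sketch with hypothetical numbers (mean 10, sigma 1, subgroups of 5, specs at 4 and 16):

```python
import math

mu, sigma, n = 10.0, 1.0, 5   # hypothetical process mean, sigma, subgroup size
usl, lsl = 16.0, 4.0          # hypothetical spec limits

# X-bar control limits apply to subgroup AVERAGES, so they use sigma/sqrt(n).
ucl_xbar = mu + 3 * sigma / math.sqrt(n)
lcl_xbar = mu - 3 * sigma / math.sqrt(n)

# Cpk compares the nearer spec limit to the spread of INDIVIDUAL parts.
cpk = min(usl - mu, mu - lsl) / (3 * sigma)

print(f"X-bar control limits: {lcl_xbar:.2f} .. {ucl_xbar:.2f}")
print(f"Cpk = {cpk:.1f}")
```

Individual parts here spread roughly from 7 to 13, well outside the X-bar limits of about 8.66 to 11.34, even though Cpk is 2; this is why a Cpk of 2 says nothing about parts falling inside the control limits.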
Yeah, you need to know your process capability. I think what you should worry about are the control limits. Is that what you mean by “process limits”? I mean, what are you going to do: run all the time, sort out the products that are above 145, and throw them out? KC is correct: aim for the middle and maintain control. That is all there is to it.
Michael Mead replied to the topic Cost of quality-assigning $ to supplier performance? in the forum General 11 years, 9 months ago
That is an excellent question, Ed. I have worked in several places where they had some scheme to classify suppliers. None of them really worked; there were always too many exceptions. If you are brave and willing to try something new, think about this: why not make an “Impact Priority Number” for each occurrence? It would be similar to…[Read more]
One thing you can do is set the intercept at 0. Thus, zero hours implies zero output.
I did this regression with the intercept set at 0; the coefficient for x is 2.2. Not a huge difference compared to your weighted average. However, if this data represents your real process, you don’t need a simple linear regression. Simply squaring your i…[Read more]
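Regression through the origin has a closed form: the least-squares slope with the intercept forced to zero is Σxy / Σx². A minimal sketch with hypothetical hours-vs-output data (not the poster's data):

```python
def slope_through_origin(xs, ys):
    """Least-squares slope with the intercept forced to zero:
    beta = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical hours-vs-output data (illustrative only).
hours = [1.0, 2.0, 3.0, 4.0]
output = [2.1, 4.4, 6.5, 9.0]
print(round(slope_through_origin(hours, output), 3))
```

Forcing the intercept to zero encodes the physical constraint that no hours means no output; whether that constraint is appropriate depends on whether the process really passes through the origin.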
There are many cases where either chart is correct. It is a matter of choice.
Generally you get more information from control charts using continuous data. Large sample sizes are needed to detect process shifts using attribute data. However, if the data is discrete, often you will get false signals on a chart for continuous data.
I recommend…[Read more]
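The claim that attribute charts need large samples to detect shifts can be made concrete. Requiring the shifted proportion p1 to land at the p-chart UCL (roughly a 50% chance of a signal per subgroup) gives n ≥ 9·p0·(1−p0)/(p1−p0)². A minimal sketch with hypothetical proportions:

```python
import math

def n_to_detect_shift(p0: float, p1: float) -> int:
    """Smallest subgroup size for which a shift from p0 to p1 puts the
    shifted mean at the p-chart UCL, i.e. p1 >= p0 + 3*sqrt(p0*(1-p0)/n).
    This gives roughly a 50% chance of a signal per subgroup."""
    return math.ceil(9 * p0 * (1 - p0) / (p1 - p0) ** 2)

# Hypothetical: to catch a doubling from 1% to 2% defective, you need
# subgroups of nearly 900 units.
print(n_to_detect_shift(0.01, 0.02))
```

Subgroups of a handful of continuous measurements can detect a comparable shift, which is why continuous-data charts usually carry more information per observation.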