# Wrong use of P Chart?

Six Sigma – iSixSigma › Forums › Old Forums › General › Wrong use of P Chart?

- This topic has 15 replies, 9 voices, and was last updated 15 years, 2 months ago by vps.

- AuthorPosts
- December 7, 2004 at 3:32 am #37769
We evaluate 100% of an assembly using go/no-go criteria (attribute data). Hundreds to thousands of these parts are produced each week. The yield ranges from 92–97% when in control, but is occasionally higher or lower than that.

What we are having trouble understanding is the calculation of the associated weekly p chart. The limits have the number of opportunities in the denominator, so the larger that number is, the smaller the delta from the mean (the tighter the limits).

The question is whether it is justifiable to use a p chart when your “n” is in the thousands. Can I trust the result? How can I justify its use when “n” is in the thousands?

- December 7, 2004 at 4:29 am #111860

VPS,

Well, I kind of hate to say it, but the process may not be in control! Suppose the process were in control with a long-term average of 95% passing the test. Then you would expect close to 95% in any sample you look at. Perhaps a few more or a few less, but not by much. If you look at subgroups of 1000 pieces, then the standard deviation would be about (0.05×0.95/1000)^0.5 ≈ 0.0069, so the control limits would be ±3×0.0069 ≈ 0.021, or 92.9% to 97.1%. With these numbers, a result of 92% would be out of control. It depends on exactly what numbers you use, but this process looks like it is not in good control. There is presumably some factor that is causing definite changes from lot to lot. A big n doesn’t invalidate the results; a big n just makes you better able to detect changes.
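The arithmetic above can be reproduced in a few lines of Python (a sketch; the 95% pass rate and n = 1000 are the illustrative values from the post):

```python
import math

# Illustrative values from the post: long-term pass rate 95%, subgroups of 1000
p_bar = 0.95
n = 1000

# Standard deviation of a proportion: sqrt(p * (1 - p) / n)
sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)

# 3-sigma control limits for the p chart
ucl = p_bar + 3 * sigma_p
lcl = p_bar - 3 * sigma_p

print(f"sigma_p = {sigma_p:.4f}")           # ~0.0069
print(f"LCL = {lcl:.3f}, UCL = {ucl:.3f}")  # ~0.929 to 0.971
```

A weekly result of 92% falls below that lower limit, which is exactly the out-of-control signal described above.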

Tim F

- December 7, 2004 at 4:55 am #111862

Gourishankar (Participant)

vps,

The p chart is the right control chart to use when counting defective pieces with a go/no-go method. If you look at the formula, the control limits are determined by the proportion defective. Hence it can be used irrespective of the magnitude of “n”. If you can mail some sample data, maybe we could analyse the results. If the large numbers are bothering you, you may shift to a p chart for daily defectives produced.
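When the weekly (or daily) sample size varies, the limits are recomputed per point from the pooled proportion defective. A minimal sketch of that calculation; the inspection counts below are invented for illustration:

```python
import math

# Per-subgroup p-chart limits when n varies; (n inspected, defectives) pairs
# below are invented illustrative counts.
samples = [(850, 55), (1200, 70), (400, 30)]

total_def = sum(d for _, d in samples)
total_n = sum(n for n, _ in samples)
p_bar = total_def / total_n  # pooled proportion defective

for n, d in samples:
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    ucl = p_bar + 3 * sigma
    lcl = max(p_bar - 3 * sigma, 0.0)  # a proportion cannot go below 0
    print(f"n={n}: p={d / n:.3f}, limits {lcl:.3f} to {ucl:.3f}")
```

Note how the smallest subgroup gets the widest limits, which is the flip side of the original question: large n tightens the limits.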

Also, what do you mean by “the limits have the # of opps located in the denominator”? Are you measuring defects or defectives?

- December 7, 2004 at 8:44 am #111881

Titu John (Member)

I agree with your views. P charts are used where you have proportion-defective type data, and when the sample size is variable in nature. A basic thing people forget is the common confusion in identifying defects and defectives.

Defect: any non-compliance to a specification

Defective: a sample/product having one or more defects

If you can send in some data, it will help add clarity.

Regards,

Titu John

- December 7, 2004 at 5:09 pm #111914

Hi VPS,

The problem you are experiencing happens often with large sample sizes. One alternative proposed by Dr. Don Wheeler is to plot the p value for each week on an IX & MR chart (see his book, Making Sense of Data). He argues that this approach gives you a better sense of how stable the process really is.

Perhaps you could try this approach and let us know how it turns out.
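A minimal sketch of that approach, treating each week's p value as an individual measurement. The weekly fractions are invented for illustration; 2.66 and 3.267 are the standard scaling constants for two-point moving ranges:

```python
# Individuals (IX) and moving-range (MR) limits for weekly p values, per
# Wheeler's suggestion of charting each week's proportion as an individual
# measurement. The weekly fractions are invented purely for illustration.
weekly_p = [0.95, 0.93, 0.96, 0.94, 0.92, 0.97, 0.95, 0.94]

x_bar = sum(weekly_p) / len(weekly_p)
moving_ranges = [abs(a - b) for a, b in zip(weekly_p[1:], weekly_p[:-1])]
mr_bar = sum(moving_ranges) / len(moving_ranges)

# Standard IX-chart limits: x_bar +/- 2.66 * MR-bar
ucl_x = x_bar + 2.66 * mr_bar  # a limit above 1.0 is truncated in practice
lcl_x = x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar        # upper limit for the MR chart itself

print(f"IX limits: {lcl_x:.3f} to {ucl_x:.3f}")
```

Because the limits come from week-to-week variation rather than the binomial formula, they are typically wider than p-chart limits at large n, which is the point of Wheeler's argument.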

Hope this helps.

- December 7, 2004 at 5:40 pm #111916

The rule of thumb is np>5 and n(1-p)>5. It looks as if your weekly chart has no issue with that.
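That rule of thumb is trivial to script (a sketch; the sample sizes and defective rate are illustrative values in line with the thread):

```python
# Rule-of-thumb check for the normal approximation behind the p chart:
# both n*p and n*(1-p) should exceed 5.
def normal_approx_ok(n, p, threshold=5):
    """Return True if n*p and n*(1-p) both exceed the threshold."""
    return n * p > threshold and n * (1 - p) > threshold

# Weekly samples in the thousands with ~5% defective easily clear the bar
print(normal_approx_ok(1000, 0.05))  # n*p = 50, n*(1-p) = 950 -> True
print(normal_approx_ok(50, 0.05))    # n*p = 2.5 -> False
```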

I would be interested in how you know that .92 – .97 is in control.

Also, what do you seek to learn from the chart?

- December 7, 2004 at 11:59 pm #111974

vpschroeder (Member)

How does one go about this when you have attribute data to start with? I have seen this recommendation by Pyzdek as well but haven’t applied it. Any website you suggest would help.

- December 8, 2004 at 12:04 am #111977

vpschroeder (Member)

Stan, I forget whether that rule applies to when you can use the normal distribution as an approximation?

The 0.92–0.97 range reflects the average and the calculated control limits after removing data points that we believe have assignable causes. It was more of an engineering decision than a rigorous statistical one, based on our knowledge of the process.

I suppose your next question is how did we determine that if we only have 52 weeks in one year…?

This is supposed to be a high-level representation of whether the process is in control; when it is not, a general reason why is desired, for analysis and corrective action.

- December 8, 2004 at 5:39 am #111985

Gourishankar (Participant)

vps,

The November issue of QP carried an article by Forrest Breyfogle III, “3.4 per Million.” Page 87 shows two control charts, a p chart and an ImR chart, for the same data set (page 86). The p chart shows the process out of control, whereas the I chart indicates an in-control process! The moving range chart is not shown.

The author argues that “X charts are not robust to non-normal data” and hence the data may require transformation. Maybe you need to check on this.
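One common variance-stabilizing transform for proportions is the arcsine square-root transform. This is offered only as an illustration of the kind of transformation the article alludes to, not necessarily the one Breyfogle uses:

```python
import math

# Arcsine square-root transform, a common variance-stabilizing transform
# for proportion data before charting it on an individuals chart.
# Shown as an illustration only; the article may use a different transform.
def arcsine_transform(p):
    """Map a proportion in [0, 1] to arcsin(sqrt(p)), in radians."""
    return math.asin(math.sqrt(p))

weekly_p = [0.95, 0.93, 0.96, 0.92, 0.97]  # invented illustrative values
transformed = [arcsine_transform(p) for p in weekly_p]
print([round(t, 3) for t in transformed])
```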

- December 8, 2004 at 5:45 am #111987

The p chart assumes a normal approximation.

The rest was curiosity, as I suspected it was not statistically derived.

Go with the p chart with your sample sizes; you are fine.

- December 9, 2004 at 4:05 am #112073

Ferrell (Participant)

Stan,

I believe the p chart follows a binomial distribution except for large np.

- December 9, 2004 at 4:17 am #112074

Your assumption would be wrong.

- December 9, 2004 at 3:02 pm #112117

Ferrell (Participant)

With all due respect, I believe I am not wrong. The probability functions are not the same for a binomial distribution as for the normal distribution.

In fact, referencing the 5th edition of “The Management and Control of Quality,” it states on page 575:

“Although the binomial distribution is extremely useful, it has serious limitations when dealing with either small probabilities or large sample sizes…as the sample size gets large, the binomial distribution approaches the normal as a limit…the normal approximation holds well when np>=5 and n(1-p)>5”

- December 9, 2004 at 9:31 pm #112131

In your original comment, I believe you stated that the p chart was predicated upon the binomial distribution. Possibly what Stan was referring to is what you just stated: the requirement that np>5 moves it toward the normal approximation. The binomial is skewed until you hit p=0.5, where it becomes symmetric. Since the control limits for the p chart are symmetrical, could that imply that the p chart is based a bit more on the normal than the binomial?
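How well the normal approximation holds at the thread's values can be checked numerically by summing the exact binomial probability of landing outside the 3-sigma normal-theory limits (a sketch; n = 1000 and p = 0.05 mirror the earlier example):

```python
import math

# Compare the exact binomial tail probability outside the 3-sigma
# normal-theory limits with the nominal 0.27% of a normal distribution.
n, p = 1000, 0.05  # ~5% defective, weekly n in the thousands

mu = n * p
sigma = math.sqrt(n * p * (1 - p))
lcl, ucl = mu - 3 * sigma, mu + 3 * sigma

def binom_pmf(k, n, p):
    """Exact binomial probability of k defectives in n trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

# Exact probability of a defective count outside the normal-theory limits
tail = sum(binom_pmf(k, n, p) for k in range(n + 1) if k < lcl or k > ucl)

print(f"exact tail probability = {tail:.4f}")  # compare with the nominal 0.0027
```

Because the binomial is right-skewed at p = 0.05, the two tail probabilities are unequal, but at this n the total is close to the normal value, which is the textbook's point.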

- December 10, 2004 at 1:58 am #112152

Alas, even though I was able to cite academia, I’m not a steady practitioner of SPC; hence my original question.

However, mathematically speaking, I can state from my understanding that the binomial function and the normal function are distinct curves, described by distinct functions.

That they become similar for certain values of p, n, etc. just makes analysis and calculation convenient.

- December 10, 2004 at 2:01 am #112153

Even though I’ve had great responses on whether it is still OK to use the p-chart equations for extremely large values of n, something still doesn’t click for me. For example, in practice why would anyone take a sample of size 3000?

I mentioned in my original posts that I was trying to see whether the use of the p chart was correct for the data set I had, which was essentially the entire population!

The forum ‘General’ is closed to new topics and replies.