# P chart sample size

Six Sigma – iSixSigma Forums Old Forums General P chart sample size

Viewing 19 posts - 1 through 19 (of 19 total)
• #41925

newbie
Participant

I want to compare two operators based on their defective production (here a "defective" is a sale not closed). The sample size per day varies between 2 and 18 for each of them. Is it safe to draw a p chart, considering the fact that a 0% defective rate on a sample of 2 doesn't invoke as much confidence as a 10% defective rate on a sample of 18? Please advise.

#132039

Ken Feldman
Participant

Why aren’t you just doing a 2-proportion test if you want to look for a difference? Your sample sizes are a bit small to, as you say, invoke much meaning and confidence.

#132053

newbie
Participant

Thanks. But considering that the sample sizes are a bit small and the day-to-day variation between sample sizes is high, what is the best way to determine whether each operator is doing the job in a stable way, and/or when instabilities are arising? The long-term goal is obviously to improve the process capability of each operator.

#132054

newbie
Participant

I have done the two-proportion test:

Sample    X     N     Sample p
1         36    61    0.59
2         38    59    0.64

P-Value = 0.543

Obviously there is no significant difference between Operator A and Operator B. But can I really be sure? A power and sample size calculation shows that I would need 2,000 samples (instead of the current 60) in order to detect a proportion difference of 0.05. Am I on the right track?
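For reference, the pooled two-proportion z-test behind that output can be reproduced with nothing but the standard library, using the counts posted above (a sketch, not Minitab's exact implementation):

```python
from math import sqrt, erf

def two_proportion_test(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    # pool the successes under the null hypothesis of equal proportions
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_test(36, 61, 38, 59)
print(round(p, 2))  # ≈ 0.54, agreeing with the reported P-Value of 0.543
```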

#132066

Ken Feldman
Participant

That’s the problem with small samples of discrete data.

#132068

rodrigues
Participant

Hi,
No, you cannot be sure that you have drawn the right conclusion, as your beta risk is very high: the risk of saying there is no difference although there really is one.
I propose that, instead of waiting until you have enough data, you try to find the root causes of the defects, make a Pareto chart, and work on reducing the major reasons for not closing an item.
Waiting to get "enough" data will simply take too long, and what would the effect be if you really found a difference compared to if you didn't? I mean, what is the practical effect of your conclusion from the hypothesis test?
You will most likely get a faster improvement by just looking at the reasons for defects on a Pareto chart; sometimes the simple solution is the fastest.

#132078

Peppe
Participant

If I understand correctly, these are cumulative data for each operator. You could repeat the tests on daily results (define your best time basis) and check the results. Maybe I’m wrong, but you could also compare the operators' results using Fisher’s exact test for small samples, to see if there is a significant difference between them.
Rgs, Peppe

#132098

Ritz
Member

newbie,
Sales environments like the one you describe are tough places for data, but here are a couple of thoughts. As with most things, it depends on your situation: resources, time, data availability, etc.
Consider the level of data representation required for impact. I’ve heard the "PGA" method taught before: "Practical, Graphical, Analytical". Sometimes (and I try to make it rare!), practical significance must override the more detailed graphical or analytical work. I don’t need to statistically prove that Person A outsold Person B if Person A had more sales at the end of the time period (day, week, month, etc.). If Person A sold 59 and Person B sold 64, it’s easy for anyone to see.
That said, there are a LOT of things to consider before making that kind of judgment: monoline vs. multiline sales environment, profitability per sale type, compensation/incentive differences, NIBT to the organization, etc.
As with a lot of things, the more baseline data (sales performance information) you can gather, the better off you will be. You’ll see most salespeople center themselves into a performance range (box plots can be a nice graphic to depict the performance spread between people).
However, do not neglect the wealth of sales process inputs: lead quality, lead volume, tenure, incentive plans, training, education, employee satisfaction, etc. Develop a predictive equation if you can (though don’t assume linearity!). These are your keys to continuous process improvement efforts, and to moving away from unsustainable efforts like benchmarking best performers.
Lastly, don’t neglect any sales fulfillment or post-sale efforts that may have an impact on your output metric of "# sales". If your sale is a simple transaction (thank you, here is your widget), not many worries. If your sale metric includes an application that then gets processed, with order fulfillment, shipping, etc., then you need to consider those things as they relate to your overall sales metric.
Good luck, and I hope this provides some help.
Regards,
Ritz

#132108

newbie
Participant

Thanks for all the feedback.
If I am not wrong, Fisher’s test is a one-way ANOVA. To do it, how do I use the proportions data? Do I simply use the percentage figures as whole numbers (for example, take 83 instead of 0.83) and do the comparison? Is this a valid method?
I tried drawing a p chart for each of the operators, and the points are all in control. But the control limits seem too wide to me at times: LCL = 0.10 and UCL = 1.0 whenever the sample size is small. This brings another question to mind: is there a rule of thumb about the optimal width of control limits for a p chart, so that if the limits are wider than that, we should suspect that significant differences are not really being recognized?
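The wide limits follow directly from the varying subgroup sizes: standard 3-sigma p-chart limits are p̄ ± 3·√(p̄(1−p̄)/n), so each day's limits depend on that day's n. A quick sketch (assuming the standard formula and an illustrative overall rate of 0.6):

```python
from math import sqrt

def p_chart_limits(p_bar, n):
    """3-sigma p-chart control limits for subgroup size n, clamped to [0, 1]."""
    half_width = 3 * sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - half_width), min(1.0, p_bar + half_width)

# with an overall rate near 0.6, a subgroup of 2 tells you almost nothing:
print(p_chart_limits(0.6, 2))   # limits collapse to (0.0, 1.0)
print(p_chart_limits(0.6, 18))  # roughly (0.25, 0.95)
```

With n = 2 the limits span the whole possible range, so no point can ever signal; only the larger subgroups carry any real detection power.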

#132114

Peppe
Participant

Maybe I caused a little confusion with my English, so I think it is better to summarize again. The p chart you are building can be right, but a general rule says you should recalculate the control limits for any sample whose size differs from the average sample size by more than 25%. This is a way to control each operator's process, with all the limitations of a p chart. About how to compare the two operators' results: if I understand correctly, the data provided are cumulative, not single samples. If so, what I said is that you could perform the proportion test on each sample and monitor how it changes. And no, by Fisher’s exact test I mean an analysis of frequency data, not ANOVA (see http://faculty.vassar.edu/lowry/VassarStats.html).
Apologies again for my poor English, and let me know.
Rgs, Peppe

#132117

Ritz
Member

newbie,
If your limits are too wide, that is representative of extensive variation. There is no "optimal" width for control charts – each is dependent on the process being monitored. If your process behavior is so highly variable that you cannot detect process shifts, you should focus your efforts on reducing process variation rather than shifting the process mean. While doing both at once is highly desirable, it is not always possible if you do not have all the critical factors identified and measured, or if you do not have enough process data or knowledge.
My understanding of Fisher’s exact test is that it is used for contingency table data (like chi-square), so I’m not sure it would apply to your situation. If Peppe has some additional thoughts, I’d welcome a more detailed view of how this test is applicable.
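For what it's worth, Fisher's exact test on the cumulative counts posted earlier (36 closed of 61 vs. 38 closed of 59, laid out as a closed/not-closed 2x2 table) needs no special software; like the 2-proportion test, it finds nothing significant. A stdlib-only sketch of the two-sided test:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]],
    computed from the hypergeometric distribution with fixed margins."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    def prob(x):
        # P(top-left cell = x) given the fixed row and column totals
        return comb(row1, x) * comb(n - row1, col1 - x) / denom
    p_obs = prob(a)
    lo = max(0, col1 - (n - row1))
    hi = min(row1, col1)
    # two-sided p: sum over all tables as likely or less likely than observed
    # (small tolerance guards against floating-point ties)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Operator A: 36 closed, 25 not; Operator B: 38 closed, 21 not
p = fisher_exact_2x2(36, 25, 38, 21)  # well above 0.05: no significant difference
```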
Ritz

#132174

Peppe
Participant

Hi Ritz, thanks for your concern, but could you explain why it is not applicable to the original question of how to compare two operators' performance based on the number of failures and/or against expected performance? Further, what do you suggest as the right test (we have already checked the proportion test)? My thought was to check the frequencies of failures/successes.
Rgs, Peppe

#132272

newbie
Participant

“There is no “optimal” width for control charts – each is dependent on the process being monitored.  If your process behavior is so highly variable that you cannot detect process shifts, you should focus your efforts on reducing process variation rather than shifting the process mean.”
Yet a decision needs to be made as to whether the control limits are "too" wide, and therefore whether it is really impossible to detect process shifts. How do we make this decision? What are the criteria?

#132297

Peppe
Participant

Newbie, do you have a target? Without a target, which criteria do you refer to? Which decision could you take, even if you do "detect a process shift"? Could you provide your raw data?
Rgs, Peppe

#132354

newbie
Participant

Here is the raw data for you:

PRP Day Date Leads Close P%C
B Thu 29-Dec 10 7 0.70
B Fri 30-Dec 8 5 0.63
B Sat 31-Dec 18 5 0.28
B Mon 2-Jan 8 7 0.88
B Wed 4-Jan 8 4 0.50
B Thu 5-Jan 2 1 0.50
B Fri 6-Jan 7 7 1.00
B Sat 7-Jan 6 3 0.50
B Mon 9-Jan 7 4 0.57
B Tue 10-Jan 18 10 0.56
B Wed 11-Jan 11 6 0.55
B Thu 12-Jan 7 3 0.43
J Thu 29-Dec 5 3 0.60
J Fri 30-Dec 6 4 0.67
J Sat 31-Dec 3 2 0.67
J Mon 2-Jan 10 8 0.80
J Tue 3-Jan 15 8 0.53
J Wed 4-Jan 10 6 0.60
J Thu 5-Dec 2 2 1.00
J Fri 6-Dec 8 5 0.63
J Sat 7-Dec 8 6 0.75
J Mon 9-Jan 4 3 0.75
J Wed 11-Jan 11 7 0.64
J Thu 12-Jan 8 5 0.63

The operators are denoted by B and J. The proportions test and Fisher’s test say that there is no significant difference between the two. By drawing a p chart I intend to find time trends of instability for either operator. The target for both is 0.75. How do I know if the control limits are too wide, and what should the optimum width be?

#132367

Peppe
Participant

Looking at your data, I think a standard p chart isn’t the best way to monitor what you want (that is where the excessive UCL and LCL come from). I think you can calculate the standard error of the 'failures' and then use it for the UCL and LCL for each operator. You'll see different limits for the different operators, and out-of-limits points will be highlighted. The test to use is the F test for variances, which highlights the difference (with alpha = 0.05, p = 0.02). Operator J is working better. If you provide an e-mail address, I can send you a spreadsheet with some calculations.
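Peppe's variance comparison can be sketched from the posted daily close rates; getting the p-value he quotes would require an F-distribution CDF (a table, or e.g. scipy.stats.f), so only the variance ratio is computed here:

```python
from statistics import variance

# daily close rates (P%C column) for each operator, from the posted data
b = [0.70, 0.63, 0.28, 0.88, 0.50, 0.50, 1.00, 0.50, 0.57, 0.56, 0.55, 0.43]
j = [0.60, 0.67, 0.67, 0.80, 0.53, 0.60, 1.00, 0.63, 0.75, 0.75, 0.64, 0.63]

# F ratio of sample variances, larger over smaller
f_ratio = max(variance(b), variance(j)) / min(variance(b), variance(j))
print(round(f_ratio, 2))  # B's daily close rate is roughly 2.5x as variable as J's
```

Whether that ratio is significant at 12 observations per operator is exactly the small-sample question raised elsewhere in the thread, but the direction (J is more consistent) matches Peppe's conclusion.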
Rgs, Peppe

#132420

newbie
Participant

Thanks, Peppe.
Feel free to mail me at [email protected]

#132449

Ruddy
Participant

Hi Pep,
Can you send me a spreadsheet with the calculations too? Thanks a lot.
[email protected]

Michael

#132505

Ritz
Member

Sorry for the delayed response. I stated that Fisher’s exact test MAY not be applicable. I said that for a couple of reasons, but primarily due to experience with sales environments.
My concern with contingency table tests (like chi-square, Fisher’s, etc.) lies in the degree of association between variables. Chi-square tests whether two variables are associated, but in this case you will typically have a large degree of linearity between # sales closed and # leads, and the "failure" rate is rarely constant. Given the small sample sizes, I simply urge caution.
Bottom line: you need more data. Going back to my earlier post on Practical – Graphical – Analytical, I would urge you to stop spending time trying to reject your null and instead focus on the practical meaning of your data.
2-proportion, 2-sample t, ANOVA, ANOM, p charts, and I-MR charts will all tell you the same thing: no significant differences exist between your operators. However, you can view your data in the practical sense and clearly see that J outperforms B. Not only does J have a higher average close rate, but J does so on fewer leads and with less variation. A simple box plot makes the story even more compelling. If I were a sales manager, I would not hesitate to take J over B... and I wouldn’t need statistical evidence to tell me that.
Run your sample size calculator if you really want to see how large a sample you would need to observe a statistically significant difference between the operators. You just can’t get there with a sample size of 12 and have any kind of statistical power. Never force a test to give you the results you’d like to see. Conclude that you need more data, and provide input that clearly shows direction for future analysis. Spend the time waiting for your sample size to grow by ensuring the sales inputs to each operator are measured and controlled to reduce bias. Spend the time identifying and quantifying sales process influencers. You’ll not only gain a more thorough knowledge of your process, but will uncover other opportunities for improvement. When your sample gets large enough, go ahead and statistically validate what you’ve already directionally observed.
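That sample size calculation can be sketched with the usual normal-approximation formula for two proportions (alpha = 0.05 and power = 0.80 assumed here); with the observed rates and a 0.05 difference it lands around 1,500 per operator, the same order of magnitude as the ~2,000 figure quoted earlier in the thread:

```python
from math import sqrt, ceil

def n_per_group(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Approximate per-group sample size to detect p1 vs p2
    (defaults: two-sided alpha = 0.05, power = 0.80)."""
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_group(0.59, 0.64))  # on the order of 1,500 per operator
```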
Sorry – I’ll get off my soapbox now.  Good luck.
Ritz

