How to Identify Maverick lots statistically
This topic has 11 replies, 8 voices, and was last updated 20 years, 4 months ago by Sridhar.


January 15, 2002 at 4:45 am #28526
Hi,
I hope someone can help me with this task that I have. I need to set up a system for identifying maverick lots (or 'abnormal' lots) based on yield. I have attempted to use the "Mean +/- 3 Sigma" method but get strange results that I don't think are useful for catching such lots.
As an example, for yield data collected over a month, I get the following results:
mean = 95.2%
sigma = 7 (therefore 3 sigma = 21)
The control limits to classify a lot as maverick or abnormal would then be if the yield exceeded 116.2% (95.2 + 21) or fell below 74.2% (95.2 - 21).
With such loose limits, I am afraid I won't be able to catch any true abnormal lots. If I wait for the yield to fall below 74.2%, I would be passing many abnormal lots in the meantime. Also, by this calculation, I should not be considering a lot with 100% yield as abnormal, when I know very well that such a lot is in fact abnormal.
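For reference, the calculation behind those limits is just the following (a minimal sketch using the numbers quoted above):

```python
# Classical "mean +/- 3 sigma" control limits, using the figures quoted above
mean = 95.2   # monthly mean yield, in percent
sigma = 7.0   # standard deviation of lot yields

ucl = mean + 3 * sigma   # upper control limit: 116.2, impossibly above 100%
lcl = mean - 3 * sigma   # lower control limit: 74.2
print(ucl, lcl)
```

The impossible upper limit is itself a hint that a normal model does not fit data bounded at 100%.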
Can anyone help me with a good statistical method here?
Thanks.

January 15, 2002 at 6:30 am #71241

The idea is to identify an abnormal lot from a population of lots manufactured.
Yield and targets are a function of the quality of the output you desire. What's abnormal for you is not abnormal for someone else.
In prediction, one uses a sample and then predicts that the population mean would lie within the sample mean +/- 1.96 sigma, whereas if one wishes to classify what's defective and what's not, the scenario is different.
Let's say the target mean is S.
S +/- 3 sigma would then mean that what's acceptable to you is any output that falls in that range. You can reduce the range based on the accuracy or quality you desire.
This should solve the problem.

January 15, 2002 at 8:35 am #71244
James A

Hi Anon,
If I read your problem correctly, here are some suggestions.
Determine your acceptable quality level – i.e. what percentage 'defective' is OK for a given lot size (assuming that your process will allow you to determine this before adding value to the raw material).
Use Attribute Single Sampling Tables to determine a probability of acceptance of ‘c’ or fewer samples in a sample size of ‘n’, and run your sample inspection accordingly.
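The acceptance probability those tables encode is just the binomial CDF, P(X <= c); a minimal sketch (the values of n, c, and the fraction defective are illustrative assumptions, not from the post):

```python
from math import comb

n = 50    # sample size drawn from the lot (illustrative)
c = 2     # acceptance number: accept the lot if <= c defectives are found
p = 0.02  # assumed true fraction defective in the lot

# Probability of accepting the lot = P(X <= c) for X ~ Binomial(n, p)
p_accept = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))
print(f"P(accept) = {p_accept:.4f}")
```

Sweeping p from 0 to, say, 0.10 traces out the operating characteristic curve the sampling tables are built on.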
The purists might say that the true aim is to remove sample inspection altogether (and therefore reduce your costs) by improving supplier quality – so the next question must be whether the supplier is an 'internal' or 'external' process, to further define what the problem is and how best to fix it.
If you have multiple problems on the same part, use a Pareto chart to identify the attributes that are causing the most pain, and ‘kill’ them first before moving on.
For a source of attribute sampling tables, and other equally useful stuff (good bedtime reading – you’ll be asleep in two minutes) try ISBN 0333428250 “Statistical Tables” by J Murdoch and J A Barnes.
Hope this helps.
James

January 15, 2002 at 12:53 pm #71248
Tom Black

It would be helpful to know if you are dealing with the yield of a chemical reaction or with the yield from counting a number of parts. But in either case, you have to understand what you mean by "abnormal". Is that "out of spec" or just "economically too low"?
The standard deviation describes the process. (Are you sure you got a good sample of the process?) It doesn't have anything to do with the specifications or the economics. If 3 std. dev. goes down to 74%, that simply means that your process can vary between 74% and 100% without any "special cause" variation. If that is not an acceptable level, you have to study the process to see how it can be changed and improved.
If there is no special cause variation, there will probably not be one “silver bullet” that will fix the system. You will need to identify the inputs to the system, test them to see which ones have the largest effect and then control the key inputs.
Tom Black, MBB

January 15, 2002 at 2:23 pm #71251
John Smith

There are a couple of separate, but related, issues. One is the distribution of your process data, and the second is your definition of maverick or abnormal.
First, have you looked at the distribution of your data? It is probably not normally distributed. Note that your data is bounded at 100% and the mean is very close to the bound, yet the standard deviation is high. Most of the variation must therefore lie below the mean.
Is the distribution skewed? Are there multiple modes? If skewed, you probably need to transform the data (Box-Cox may work nicely here) or use a different distribution model. Doing that should give you more representative and useful control limits. If there are two or more modes, then you probably have yourself a nice Six Sigma project with potential cost savings and customer benefits.
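The skew described here is easy to see numerically; a sketch with synthetic, illustrative data (not from the thread) shows the typical signature of yield data capped at 100%:

```python
import random
import statistics

random.seed(1)
# Synthetic lot yields: bounded at 100% with a long tail downward -- the
# left-skewed shape described above (illustrative data, not from the thread).
yields = [100.0 - random.expovariate(1 / 4.0) for _ in range(200)]

m = statistics.mean(yields)
s = statistics.stdev(yields)
# Third-moment sample skewness: a clearly negative value means the
# variation lies mostly below the mean.
skew = sum((y - m) ** 3 for y in yields) / (len(yields) * s ** 3)
print(f"mean={m:.1f}  sd={s:.1f}  skewness={skew:.2f}")
```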
Secondly, you must define abnormal lots not just from the distribution of the process data, but also based on your customer specifications and expectations, the end user’s engineering application, your internal company specifications, and the business economics. The other factors – specifications, applications, economics, expectations – often play the primary role in deciding what is abnormal, not the process data distribution.
Where the circumstances allow it, I believe it is best to define "abnormal" from a technical and economic viewpoint first and then overlay that definition onto the process data distribution. This can be eye-opening sometimes.
It is possible to have a process running perfectly in control and still go broke.
John Smith, BB
January 15, 2002 at 5:12 pm #71259
J_Belgium

Hello Anon1,
You say that you are gathering data over a whole month. Taking the mean and putting control limits around it at +/- 3 sigma is correct. BUT, just as Tom said, you must have a normal process, and I believe that is not the case.
It is very simple to recognize the abnormal variation in your process. Calculate the yield over a short period, for example every four hours or every eight hours. Then plot all the data points from that month in a control chart. Use the individuals control chart because you are dealing with yield. You will immediately see the changes in your process. Take a period in the chart where you were running stable (let's say 15 data points, or even fewer). Calculate the sigma from that stable period, and use it to calculate the control limits. Now you will be able to identify the exceptional variation in your month of data.
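This individuals-chart recipe can be sketched as follows (the constants 2.66 and 1.128 are the standard d2-based factors for a moving range of span 2; the yield data is illustrative):

```python
# Individuals (I) chart limits from the average moving range.
yields = [96.1, 95.4, 97.0, 94.8, 96.5, 95.9, 96.3, 94.2,
          95.7, 96.8, 95.1, 96.0, 94.9, 96.4, 95.6]  # a stable period (illustrative)

xbar = sum(yields) / len(yields)
moving_ranges = [abs(a - b) for a, b in zip(yields[1:], yields)]
mr_bar = sum(moving_ranges) / len(moving_ranges)

sigma_hat = mr_bar / 1.128   # short-term sigma estimate (d2 = 1.128 for n = 2)
ucl = xbar + 2.66 * mr_bar   # equivalent to xbar + 3 * sigma_hat
lcl = xbar - 2.66 * mr_bar

# Any lot outside these limits is a candidate "maverick"
maverick = [y for y in yields if y > ucl or y < lcl]
print(f"limits: {lcl:.1f} to {ucl:.1f}, mavericks: {maverick}")
```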
I don’t know if you have software, like Minitab, to support your data. If you do, this should be very easy.
It seems unlikely to me that your data over a whole month is stable (or normal) enough to calculate the control limits. But I don’t know your data, so I could be mistaken. People say too easily that they have a controlled process. You can use a normality plot to check the data. Be aware that even long-term variation could look normal, making your month of data appear normal. Please observe your control chart. There are also rules for identifying exceptional variation.
Just as Tom says, look for the inputs that are driving your yield output. That will give you the ability to create a more stable yield and narrow down your variation. Then you will easily identify what you call "Maverick lots".
Hope that this will help you.
Success.
J_Belgium

January 16, 2002 at 1:52 am #71267

Hi, thanks to everyone who has contributed to this thread.
I now understand that my main problem may be due to the fact that the data I am using may not be normally distributed in the first place, thus invalidating the (mean +/- 3 sigma) results and making them useless. So I was wondering whether the solution is for me to ensure that the distribution of the yield data is normal in the first place.
So how do I ensure that the distribution of this yield data is NORMAL? Someone suggested that I should use a normal probability plot to determine whether the distribution is normal. But my question is: what if the probability plot shows that it is NOT normal? How do I make it normal? Should I drop the outliers and replot until I see a semblance of normality, and then use only the data (minus the outliers) to recalculate the mean and sigma? Is this method allowed?
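For reference, the probability-plot check mentioned here can also be read numerically: plot sorted data against normal quantiles and look at how straight the line is. The correlation approach and all the data below are illustrative assumptions, not something from the thread:

```python
import random
import statistics

def normal_plot_correlation(data):
    """Correlation between sorted data and standard normal quantiles --
    the numeric analogue of how straight a normal probability plot looks.
    Values near 1.0 are consistent with normality."""
    n = len(data)
    xs = sorted(data)
    # Plotting positions (i + 0.5)/n mapped through the inverse normal CDF
    ys = [statistics.NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

random.seed(2)
normal_data = [random.gauss(95, 2) for _ in range(100)]            # roughly normal
skewed_data = [100 - random.expovariate(0.5) for _ in range(100)]  # left-skewed

r_normal = normal_plot_correlation(normal_data)
r_skewed = normal_plot_correlation(skewed_data)
print(f"near-normal: r={r_normal:.3f}, skewed: r={r_skewed:.3f}")
```

Either way, the answer to "what if it is not normal" is to transform or remodel the data, not to delete the outliers.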
TIA

January 16, 2002 at 3:51 am #71269

Hi,
Why don't you try a transformation like Box-Cox? Generally this transformation brings the data close to normal. Then you calculate your control limits and convert them back to your original values.
sridhar
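A sketch of the workflow sridhar describes, assuming scipy as the tool (the thread itself doesn't specify one): fit the Box-Cox transform, set 3-sigma limits in the transformed space, then invert the limit back to the original scale. The data is illustrative.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
# Illustrative left-skewed yields near 100%. Box-Cox needs strictly
# positive input, so work with the shortfall (100 - yield) instead.
shortfall = rng.exponential(scale=3.0, size=200)   # = 100 - yield, in points

transformed, lmbda = stats.boxcox(shortfall)       # lambda fit by max likelihood
m, s = transformed.mean(), transformed.std(ddof=1)

# 3-sigma limit in the transformed space, converted back to the yield scale
worst_shortfall = inv_boxcox(m + 3 * s, lmbda)
maverick_below = 100 - worst_shortfall
print(f"lambda = {lmbda:.2f}; flag lots with yield below {maverick_below:.1f}%")
```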
January 16, 2002 at 9:00 am #71270
James A

OK, time for another crack at this.
If the data you have shows that your distribution is not normal, then that is the process you have – my view would be NOT to 'massage' the data so that it conforms to what you expect, as the data reflects the process you have. As soon as you start filtering the data it will not represent the process, but something from 'Wonderland'.
It's old-fashioned by today's standards (no need for a PC, laptop, calculator or batteries), but I would try using a lognormal distribution plot on probability paper for an extremely skewed distribution – this will give you both your distribution of x and the expected drop-out rate above and below your 3 sigma limits, plus the current 'capability' of the process. A picture speaks a thousand words, and those charts are very visual.
Once you know – or can visualise – where (i.e. at what level) drop-out occurs, you may then be in a position to start determining the root cause of the variation via DOE. Said chart will also give you something to plot against when determining 'where' on the distribution your next lot sits.
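The probability-paper idea translates to software as fitting a lognormal directly and reading off the expected drop-out beyond a chosen level – a sketch with scipy and illustrative data, applied to the shortfall (100 - yield) since that is the long-tailed quantity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shortfall = rng.lognormal(mean=1.0, sigma=0.5, size=300)  # = 100 - yield, illustrative

# Fit a lognormal with the location fixed at zero instead of forcing normality
shape, loc, scale = stats.lognorm.fit(shortfall, floc=0)

# Expected drop-out: fraction of lots whose shortfall exceeds a limit,
# e.g. more than 10 yield points lost (i.e. yield below 90%)
limit = 10.0
dropout = stats.lognorm.sf(limit, shape, loc=loc, scale=scale)
print(f"expected fraction of lots below 90% yield: {dropout:.3%}")
```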
I freely admit to writing this on the fly whilst waiting for the coffee to kick in, and also to never having come across a genuinely skewed distribution yet, but there may be kernels in here (that, or I'm nuts!).
Sounds like a fun challenge, though.
Regards
James0January 16, 2002 at 12:11 pm #71277The distribution could be skeved because it could have two or more samples contained in the one he measured. So it is important to do a IMR chart to see if the process has changes during time. If this happens, you should take a deeper look in your datas, to understand what is happening with the outliers and why they are existing. If you realize that it doesn’t exist a 2nd population in your data (that could be skewing your graphic) this is a big opportunity to identify problems in your process.
Regards,
Jan

January 17, 2002 at 3:14 am #71287

The Box-Cox transformation method may just be the lifesaver for this problem. I would appreciate it if someone could tell me how it is done, or point me to a website that teaches the method.
TIA

January 17, 2002 at 3:50 am #71288

Hi,
The Box-Cox method is used to transform non-normal data to normal.
You can go through this handbook:
http://www.itl.nist.gov/div898/handbook
It gives you some information about the Box-Cox method. After that you can use any statistical package, like Minitab, which will do the transformation.
thanks
sridhar