Craig
@HACL, member since April 3, 2010
Forum Replies Created
August 27, 2010 at 10:45 am #190672
To be honest, I don’t think there is any magic answer. I would go out to the factory floor and do time studies so I could understand the data. Maybe it is bimodal because the second shift has fewer distractions (No managers around).
May 12, 2010 at 10:56 pm #190146
Sounds like an intrinsically non-normal variable (due to tool wear?).
The only way to expect normality is with real-time feedback in the process where you correct for tool wear.
Even with feedback/correction, you still might experience non-normality if you are over-correcting.
Going back to the question that initiated this post:
If you’re trying to find out if your scale is measuring accurately….this would call for a bias study.
If you are trying to quantify precision, do a GRR.
May 1, 2010 at 10:16 am #190103
Improved Cpk, from my experience, has been achieved by minimizing variation.
This is done through process characterization and optimization, controlling input variables, etc. No simple task! Although a mean shift towards target can improve Cpk, I have rarely found processes with these types of quick fixes. (Although one of my projects where this was true was the highest impact change I have implemented.)
Regardless, Six Sigma focuses on both.
(reducing variation and shifting the mean where necessary)
April 30, 2010 at 10:17 am #190097
It looks like your data is bi-modal.
I’d say that is affecting your normality test more so than rounding.
April 21, 2010 at 10:30 am #190020
Can you post your data?
February 21, 2010 at 10:40 am #189576
That seems unrealistic. If your population of potential hires is all in the unemployment line, that is one thing. If they are currently employed, they would most likely give the current employer a 2 week notice, which leaves you a remaining time of 0 days for your recruiting. I would keep it at 4 weeks and try to improve the quality of your selection process.
JMHO,
HACL
February 20, 2010 at 10:24 am #189559
Please DEFINE the problem better. 14 days for recruitment….what does that mean? You actually want someone interviewed, sent an offer letter, and sitting at a computer within 14 days of the creation of the job opening?
February 19, 2010 at 10:46 pm #189555
Nine sigma?
You must be joking.
February 19, 2010 at 10:34 am #189535
A high leaker will show more variability over time…correct? This is why the measurements are less repeatable for the high leaker. It does appear to be variables data, not attribute data, and it doesn’t appear to be a destructive test either. I think your comment about the leak rate changing over time hits the nail on the head (the part in the study is changing throughout the study). I’d like to see Mike’s suggestion (how do the leak rates plot out by run for number 4?).
Also, if I were the original poster I would be studying the formulas in the XBar/R versus ANOVA methods to understand where the estimates come from. It really does appear that the difference in how the two approaches estimate variation is a factor in this case.
February 15, 2010 at 1:00 pm #189402
I have seen interactions, but in the context of a GRR with Factor 1 = operator and Factor 2 = part, they are “interesting” to explain!
It is usually a case where the measurements are not taken in a random order. An operator makes a setup error and measures the same part 3 times in a row with the same set-up error. Very serendipitous to say the least! A botched GRR reveals information about set-up issues. When fully randomized, I haven’t seen issues with interactive effects.
I am not sure about all the watershed stuff!
February 14, 2010 at 1:07 pm #189374
Please identify the correct statement below:
a) 15.7 + 17.8 = 23.7, not 33.5
b) 2.4 + 3.2 = 5.6
c) 5.6 + 94.4 = 100
d) 23.7 + 97.2 = 100, not 120.9
e) b and c are correct
These values come from Wheeler’s article on page 12.
I think his point is that you should use the correct computational methods if you indicate that A + B comprises a total. Why shouldn’t %GRR + %Part Variation = 100%?
Is 120.9 close enough?
This has nothing to do with the Range versus ANOVA method. I have to chuckle every time I think of the operator*part interaction anyway. Operator A has a fear of the number 5 and every time he measures part 5 he breaks out in a cold sweat and measures it erroneously. The ANOVA method is the preferred method no doubt, but I can’t get over the interaction thing!
February 6, 2010 at 11:21 pm #189129
You might want to do some homework before stating this mumbo jumbo about unprecedented steps. I read that Toyota said these same pedals were not a safety threat in 2007 and 2008.
February 6, 2010 at 10:58 am #189106
The recall involves more than 8 million vehicles globally. How can this only be a perception problem?
February 6, 2010 at 10:17 am #189105
Depends which other threads are being considered as nominees.
The title of this one is intriguing. It goes back and forth between a sitcom and a soap, so that might ruin its chances.
February 4, 2010 at 9:40 am #189066
Is this a soap opera?
December 18, 2009 at 10:52 am #187630
December 10, 2009 at 10:02 am #187451
Interesting…
Now you have a repeatable and reproducible measurement system comprised of two perfectly correlated measurement tools, and they are different!
Just out of curiosity, what is the tolerance on the parts and what is the range span of parts in production that you chose for the study? What was the % RR as compared to the tolerance (P/T ratio)?
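For what it’s worth, here is a rough sketch of how I’d compute the P/T ratio I am asking about (Python, with made-up numbers for the gage standard deviation and the spec limits; I’m assuming the common 6-sigma spread convention, though some references use 5.15):

```python
# Precision-to-tolerance (P/T) ratio -- illustrative numbers only.
# Assumes the common 6 * sigma_GRR spread; some references use 5.15 instead.
sigma_grr = 0.00015          # hypothetical gage R&R standard deviation
usl, lsl = 0.505, 0.495      # hypothetical spec limits

p_to_t = 6 * sigma_grr / (usl - lsl)
print(f"P/T ratio = {p_to_t:.1%}")   # 9.0% here; under 10% is commonly considered acceptable
```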
December 9, 2009 at 10:29 am #187417
Sounds like you are evaluating repeatability/reproducibility as well as bias between your measurement systems.
– what do the % values represent below? (% tolerance, % study variation?)
– how did you select your range of samples?
– what was your total sample size?
November 14, 2009 at 9:40 am #186812
Mike,
I recall a lot of hype about one of the few high volume products back in the day….FMU139.
Did you see any differences in the six sigma rollouts between Comm and Tactical? Recall that I was sheltered in PWF trying to make good boards for all to use!!! Polyimide, fusion bonded PTFE multilayers….what an interesting job to say the least!
October 15, 2009 at 9:12 am #186130
Process Lead Time = WIP / Exit Rate
I get in a line at Disney of 50 people and the exit rate from the ride is 2 people per minute. I have 25 minutes to wait to get on the ride. (That’s my lead time.)
If I am a can of baked beans and I am stuffed on the shelf behind 51 other cans…the WIP is now 52. I am not the best tasting type of bean and people buy them (exit rate) at a rate of 1 can per week. My lead time to see the daylight again is 52 weeks. (unless someone feels like they need to pull the cans from the back for some reason)
If I am filling orders for a customer and my factory is capable of cranking out 5000 widgets per day (exit rate) and there are 50000 already in WIP, I tell the customer it will take 10 days to ship the order (lead time).
Comments?
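If it helps, the same arithmetic in a few lines of Python (the WIP and exit-rate numbers are just the ones from my examples above):

```python
# Lead time = WIP / exit rate, using the examples above.

def lead_time(wip, exit_rate):
    """Expected wait given work-in-process and the rate at which items leave."""
    return wip / exit_rate

print(lead_time(50, 2))        # Disney queue: 25 minutes
print(lead_time(52, 1))        # baked beans on the shelf: 52 weeks
print(lead_time(50000, 5000))  # widget orders: 10 days
```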
October 4, 2009 at 9:38 am #185890
October 3, 2009 at 9:04 am #185882
I did some research and found that there is a cheerleading squad. I interviewed all of them and found a strange trend.
All of them are named Eileen. Now I am a fan too!
September 30, 2009 at 8:53 am #185818
Grand Master Black Belt? Wow. Maybe he has grandchildren who are MBBs.
At least the Grand Poobah in the Water Buffaloes (Flintstones) is an elected position.
September 29, 2009 at 9:09 am #185793
Question 1: You would need a set of standards for this (items of known values). It would be the same approach as a linearity study, but with additional analysis. Example: I have a set of standards for oxide thickness and they are certified at 1000, 1200, 1400, 1600, 1800, and 2000 angstroms.
Conduct a linearity study, but you could also use a t-test to see if the measurement equipment can detect a difference of 200 angstroms. I’d suggest a series of paired comparisons. Your null hypothesis would be
mu2 - mu1 = 200
The biggest challenge is probably obtaining the standards!
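A rough sketch of how that paired comparison could look in Python (assuming scipy is available; the paired differences below are made up just to show the mechanics):

```python
# Sketch: differences between readings on adjacent standards (1200 vs 1000 A,
# 1400 vs 1200 A, ...) tested against a true difference of 200 angstroms.
import numpy as np
from scipy import stats

diffs = np.array([198.4, 201.2, 199.1, 202.3, 197.8])   # hypothetical paired differences

t_stat, p_value = stats.ttest_1samp(diffs, popmean=200)  # H0: mean difference = 200
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value would suggest the gauge's average reported difference is not
# 200 angstroms; a large p-value is consistent with the null hypothesis.
```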
Question 2: There are many threads on GRR for destructive testing already. You might find a solution…. The only advantage I see with hardness testing is that you might be able to take repeated measurements in close proximity on an experimental unit and consider them replications. (Unlike a tensile test, where you use it once and it is toast.)
September 25, 2009 at 12:26 am #185688
Mike,
Same email account. I think you had 2 defects. (emails that you didn’t respond to)
You could count all the junk mail items that you could have forwarded to me as additional opportunities, I suppose.
Planning the May fishing trip already my friend!
HACL
September 24, 2009 at 3:05 am #185648
Mike,
You owe me an email or two. How many opportunities do you need?
:-)
HACL
September 23, 2009 at 8:49 am #185611
I thought of doing a google search, but figured it would be better for the professor to explain things like this to the student.
Was the student asking for help or quizzing the six sigma community? :-)
September 23, 2009 at 8:39 am #185609
Why can’t you get more data and do the analysis correctly?
September 23, 2009 at 8:33 am #185608
Never heard of data that is co-integrated. Multicollinearity is a condition where the X’s are correlated. Is that what the prof was referring to?
I would get my money’s worth with the professor and have him or her explain this concept.
September 18, 2009 at 9:27 am #185525
Go back to the person who made this request. Ask him or her how to put the samples back together again perfectly (kinda like Humpty Dumpty), and then assess the repeatability of the measurement system with some precision.
I must confess that I did an experiment in 1989 where the response was a destructive test and I did not do a GRR! However, I did randomize the order in which I measured the experimental samples. The signal far outweighed the noise in the DOE. Plotting residuals versus measurement order didn’t highlight any non-random behavior.
You are in fact stuck with a nested approach with no way of repeating a measurement.
September 17, 2009 at 8:58 am #185495
This thread seems strange.
September 13, 2009 at 9:53 am #185398
One approach is to capture the data in a binary format for each 30 minute or 1 hour interval during the day (1 = equipment used, 0 = equipment not used).
There wouldn’t seem to be a large number of factors driving the use of the equipment. (Operator available? Materials available? Tool in working condition? Other tool selected? etc.)
Logistic regression is OK if you want to confuse the heck out of everyone in your organization, except you! :-)
This might be more of an OEE question.
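As a simple illustration of what the binary capture buys you, without any logistic regression (the interval flags below are invented):

```python
# Sketch: utilization from binary interval flags (1 = equipment in use).
# One invented day of 30-minute intervals.
flags = [0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1]

utilization = sum(flags) / len(flags)
print(f"Utilization: {utilization:.0%}")   # share of intervals the tool was in use
```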
Good luck.
September 11, 2009 at 5:22 am #185362
Depends what zone the points are in!
September 7, 2009 at 11:21 am #185235
Don’t try to force the application of specific tools.
I grabbed one of my text books from the shelf, sneezed from the dust, and opened to chapter 10 (Categorical Data Analysis). The text is “Statistical Methods and Data Analysis” by Ott and Longnecker. You could look at differences in your response by shift, day of the week, etc. You would of course have to know the underlying distribution of the data (i.e., binomial).
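For example, a quick sketch of a chi-square comparison of defect proportions by shift (Python with scipy; the counts are invented):

```python
# Sketch: are defect proportions the same across three shifts?
from scipy.stats import chi2_contingency

#           defective, good   (invented counts)
counts = [[12, 488],    # shift 1
          [25, 475],    # shift 2
          [15, 485]]    # shift 3

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")   # small p suggests shifts differ
```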
August 27, 2009 at 11:22 pm #185045
1) Depends on how many critical outputs you have
2) Depends what the process is. Are you baking pies or machining parts, etc.?
3) Read up on model fitting
4) Read up on hypothesis testing, ANOVA, etc.
August 11, 2009 at 3:21 pm #184804
Why reinvent the wheel? I would invest in a few software licences.
May 22, 2009 at 9:50 am #184345
How about controlling input variables? You have to demonstrate that if the inputs are on target and in control, the Y’s will be adequate. You might be able to reduce your sampling this way.
How large are the rational subgroups and what sort of time period do they represent? This could be the reason you are having issues with the chart signalling too much. (poor estimate of standard deviation)
May 21, 2009 at 10:26 am #184298
The answer is yes, as long as you have the data to support the change. If it is a highly capable process it is worth pursuing. It is a matter of protecting the customer and keeping your line running in a stable and economical fashion.
Learn the concepts of ARL, alpha risk, OC curves, etc. If you are a good process engineer, you have reduced variation such that the limits have been progressively tightened. There is a point at which it is not reasonable to tighten the limits any further.
May 20, 2009 at 10:05 am #184252
I can’t recall ever calculating the p-value by hand. (Used a table with interpolation as Stan mentioned, or obtained directly from software application). Seems like the technique they want you to use should be in the class notes.
May 16, 2009 at 11:21 am #184192
There is nothing in this series of posts that will convince me to even care about continual improvement. No offense to anyone intended, but “Continuous Improvement” is all-encompassing enough for me.
What’s the difference between 5S, 5S+, and 6S? Has anyone gotten into that ridiculous debate?
5S can be a great initiative.
5S+ is still great if you want to add safety and I’m OK with that distinction. (Even though safety should always be a consideration)
Now we have people who use the term 6S in place of 5S+.
Hence we have 6S initiatives and 6s initiatives. Our children are being thrown into a confusing world.
May 16, 2009 at 10:59 am #184191
Makes sense to me! No one seems to care about the precision to tolerance ratio any more! I would look at that next.
I have seen the other extreme where the range of parts chosen is 2X the tolerance and the study variation looks low as a result of that.
May 14, 2009 at 10:15 am #184113
122.001 versus 122.003
Are we talking about a difference of .002 microns?
May 12, 2009 at 10:10 am #184020
The null hypothesis would be mu = 0.
if p is low (less than .05) you reject the null.
If you reject the null, you should also see that the 95% confidence interval for the mean difference does not include zero.
March 26, 2009 at 10:42 am #182767
The original poster should study the formulas to answer this question.
Short term sigma will vary based on the choice of subgroup size. Also, since the range is used for estimating sigma, only 2 data points in each subgroup are used for the estimation.
Long term sigma will be the same for a given data set as it does not involve subgroup size, RBar calcs, d2 factors, and it uses all the data.
Try crunching the data below and calc cpk and ppk using subgroup size 3 and then 5. Try using the XBar/S approach for subgroup size 9.
Random80,2.5
80.2717413 81.9120706 79.1701076 76.8867717 79.3101343 79.8199433 83.3391424 81.0140082 83.343294 78.3591335
82.6568559 81.2971962 77.4335504 78.8127057 81.1705462 79.8600234 82.0130383 80.770113 79.7285861 80.3235613
77.4604755 76.4979743 80.9721267 77.335627 81.3029312 79.4841768 78.8056541 82.3251271 77.6933514 82.5859985
80.3063289 76.5912589 75.8724351 76.1543742 82.1240185 76.3822268 78.7565338 79.9316792 84.7565542 78.7320304
79.9722297 79.1383895 78.0871305 76.7290639 81.1475334 79.5683733 79.9607514 79.0813825 78.0649464 81.5213824
76.7891092 77.7115476 80.7538272 78.030834 83.3168581 79.2103685 77.6888897 80.9060813 78.5580762 78.250727
77.2614885 81.6592914 82.9343494 81.557611 77.8133722 76.8285788 75.4206787 78.8492017 78.1370848 80.2354903
80.6992644 81.8995985 81.5546615 80.4451397 75.3026979 79.8740243 80.5066806 82.7734241 74.0910671 83.7138897
84.2889958 80.5718522 80.3848824 83.6485793 81.3087605 76.4637696 78.9189638 81.6879909 80.0544625 80.7556079
March 21, 2009 at 11:33 am #182603
When p is low, reject Ho
That’s all you’ll ever need to know
How poetic!
Oh by the way, if you set alpha at .05 and the p value was exactly .05…and you fail to reject the null…and you stopped investigating this X variable…..YOU’RE FIRED
March 19, 2009 at 11:19 am #182510
The null hypothesis for one way ANOVA is that all MEANS are equal. (The alternate is that at least one mean is different.)
I hope everyone realizes this. Equal variances is a condition that must be true for the F ratio to be valid (which is also tested with an F-ratio). A typical approach is the Bartlett test.
I set up a quick JMP table and set the X as nominal and then set it as ordinal. (Both fall under the non-continuous category.) In each case, JMP used one-way ANOVA for the analysis. If your X was continuous, regression would be the obvious choice!
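If you’d rather see it outside of JMP, here is a minimal sketch of both tests in Python with scipy (the three groups of data are made up):

```python
# Sketch: Bartlett's test for equal variances, then one-way ANOVA for equal means.
from scipy import stats

group_a = [10.1, 9.8, 10.3, 10.0, 9.9]   # invented data for three levels of a nominal X
group_b = [10.4, 10.6, 10.2, 10.5, 10.7]
group_c = [9.7, 9.9, 10.0, 9.8, 10.1]

bart_stat, bart_p = stats.bartlett(group_a, group_b, group_c)   # H0: equal variances
f_stat, anova_p = stats.f_oneway(group_a, group_b, group_c)     # H0: all MEANS equal

print(f"Bartlett p = {bart_p:.3f}, ANOVA F = {f_stat:.2f}, p = {anova_p:.4f}")
```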
March 19, 2009 at 5:24 am #182507
Burichi,
Why didn’t you ask this question before you ran the experiment? I have performed factorial experiments in the SMT industry where the quantity of solder balls was the response. (Count data). You should always anticipate how you will analyze your data before you collect it.
HACL
March 18, 2009 at 10:17 am #182484
What did your data tell you? Are the average times between agents different?
If you have collected data on handling time for several agents, you can start with a one-way ANOVA to test your hypothesis. This will give you insight as to whether your AHTs are different or not. Test for equal variances as well.
I agree with another poster that you should be concerned with more than just the mean time for agents.
March 12, 2009 at 10:30 am #182282
Just curious as to the percentage of out-of-spec material you are producing across all grinding machines? Do any of the grinding machines have acceptable cpk values?
Some basic first steps:
Look at your data and see what the major sources of variation are (within piece, piece to piece, etc). If you didn’t collect your data to show this….design a multi-vari study and run it.
It is difficult to help you with such limited information.
I could just say “Apply six sigma methodologies!” I assume the M phase is complete and you know that your measurement system is capable!
March 10, 2009 at 10:05 am #182200
OFAT = (1×5)*5 = 25 for a single replicate
Full Factorial 5x5x5x5x5 = 3125 for a single replicate. With OFAT you vary one factor across its 5 levels and hold everything else constant. You miss interactive effects this way.
Try a screening design first!
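A quick sketch of the run-count arithmetic, if it helps (Python; the enumeration is just for illustration):

```python
# Sketch: run counts for 5 factors at 5 levels, single replicate.
from itertools import product

levels = range(5)
full_factorial = list(product(levels, repeat=5))   # every combination of the 5 factors
print(len(full_factorial))                         # 5**5 = 3125 runs

ofat_runs = 5 * 5    # 5 levels per factor, one factor varied at a time = 25 runs
print(ofat_runs)     # far fewer runs, but no information on interactions
```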
March 5, 2009 at 11:02 am #182019
This reminds me of a statically indeterminate question from engineering school. It cannot be solved. Please note that “statically” is not a typo.
Now if I think statistically:
What is the response variable of interest? Y = earning potential, hot air expelled, boardroom presence (somebody actually told me an MBB needs this characteristic…..oh my goodness)
If you conducted this study, how well could you control the knob for X? If you randomly select 10 MBBs, would you consider them categorically the same? This is similar to running a DOE with temperature as an X variable, and never knowing if you are at your set-point. If you did an analysis of residuals by factor, would you see equal dispersion for the PhD as for the MBB?
I took the liberty to answer like a geek since this is a web site for those who should understand statistics.
HACL
March 3, 2009 at 1:48 am #181882
Scary indeed. Looking up t-critical from a table is not such a major task. I blame it all on the computers that spit out p-values. We have just become plain lazy and don’t care about t-tables any more!
Imagine if the question was about F-critical? numerator degrees of freedom, denominator degrees of freedom….the debate would last forever :-)
March 2, 2009 at 12:08 am #181841
Engineer….your first inclination is correct:
It is a one tailed test because the alternate is mu < 200.
Df is 11, and you find t-critical under the alpha = .01 column in the table.
If the alternate was mu not equal 200, you would divide alpha by two, and look in the .005 column.
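If you want to check the table lookup, here is a sketch using scipy’s t distribution (same df and alpha values as above):

```python
# Sketch: t-critical values for df = 11.
from scipy import stats

df = 11
t_one_tailed = stats.t.ppf(1 - 0.01, df)    # alpha = .01 column, one-tailed
t_two_tailed = stats.t.ppf(1 - 0.005, df)   # alpha/2 = .005 column, two-tailed

print(round(t_one_tailed, 3))   # about 2.718; for Ha: mu < 200 the rejection region is t < -2.718
print(round(t_two_tailed, 3))   # about 3.106
```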
February 26, 2009 at 10:56 am #181723
By the way….we should be optimizing processes and looking for ways to eliminate opportunities for error.
I want to make it clear that my last post contained some sarcasm. (like changing the opportunity counting method to achieve a goal. Unfortunately, I have seen this game being played)
February 26, 2009 at 10:50 am #181722
Jsev607,
When your goal is changed to 1.4 PPM, that is when you try to find ways to increase your opportunity count. (Sad but true)
On a much larger scale, how could a company say it is setting a goal for six sigma quality if it is almost impossible to achieve? If it is truly 2 ppb, how many parts have to be produced to validate this?
February 25, 2009 at 3:59 am #181666
If I take a purely statistical stance, I think of it this way.
What is the probability of any given value in a normal distribution? Well, that would be zero. What is the probability of exactly a 1.5 sigma shift? Likewise, that would be zero. (OK..I am stretching this)
Zero is the amount of credibility I have in the 1.5 sigma shift. Please accept the fact that it is a fudge factor that allows one to approach six sigma quality.
w/o the fudge factor, you need to achieve 2 PPB. With the fudge factor, you get to sneak by at 3.4 PPM with some creative opportunity counting.
February 24, 2009 at 10:55 am #181627
What if there are 10 beers to choose from, but the last 2 are Bud Lights? Does the 9th person have freedom of choice? The 10th certainly doesn’t!
99 degrees of freedom on the wall, 99 degrees of freedom, you take one down and pass it around, 98 degrees of freedom on the wall.
98 degrees of freedom on the wall, 98 degrees of freedom, you take one down…….
February 21, 2009 at 12:34 pm #181546
Raja,
Good reference article, but I don’t think this issue is “within-piece” variation. I’d be surprised if CI guy could sub-divide the unit under test and take this approach…but only he can answer that.
CI Guy,
I had a similar issue with trying to characterize a torque analyzer. The “parts” in the study were torque wrenches. We had a hard time discerning whether the wrench was delivering a consistent amount of torque or whether the operator’s technique varied, or whether the gage varied. We agreed on a technique of 3 repeat measures by the operator, and used the average (just as Stan has suggested here). This is how we monitor the wrenches in production. (Qualify the tool with average of 3 readings…this is how you must use it!)
Keep in mind that averaging can be a problem if your data is non-random. I have seen this with SEM equipment for measuring critical dimensions on wafers. The data is highly auto-correlated due to the build up of surface charge from measurement to measurement. Before using this approach..make sure the measurements are not autocorrelated. (We could explain this observed variation, but I am still not sure how your part varies. I also know that if I run a GRR on my bathroom scale, my beer drinking will impact the weight measurements. I also step on the scale without any help, so I am the operator and the part! There’s that weird coffee again.)
If I were you, I would run a quick multi-vari study.
Days: 1 through 5
Operator (within-day): A, B, C
Part (Within operator, Day) 1,2,3,4,5
Repeat (1,2,3,4,5)
The error term in your nested ANOVA will represent the gage error and your suspected “noisy” part variation. I am not a salesman, but for this analysis I would use JMP. You will see if your data is autocorrelated, you can plot the data on a control chart and assess the stability of the measurement system, etc. Why is everyone so biased towards R&R? (Get it…that was an MSA joke!)
MSA = Linearity, Stability, Bias, Repeatability, Reproducibility
February 20, 2009 at 11:02 am #181498
CI guy,
Look at my very first post and see if I am making an accurate analogy to your measurement problem. Stan thinks I am drinking some kind of “special” coffee, but I checked and it’s plain-ole Maxwell House! :-)
February 19, 2009 at 4:35 pm #181454
I can drop some in your boat on the way to my Key West fishing trip in May! (Mike C told me you have a nice rig down there)
February 19, 2009 at 11:33 am #181434
CI Guy,
I researched a little on PMD and I think I have a grasp on the measurement technique. Can you see if my analogy makes sense?
First, is it true that PMD is an indicator of how many impurities there are in an object? Here is my analogy for your measurement challenge. If I am way off-base, just advise me to not give up my daytime job!
I have 10 parts that are simply enclosed chutes that are 5 feet long by 1 foot wide. Their meaning in life is to deliver ping-pong balls down a 45 degree slope. Laws of physics say that the balls should arrive X seconds after departure. It is critical that the chutes deliver the balls in X +/- 3 seconds. The problem is that the chutes have random obstacles in them, and the arrival time of the ping-pong ball can be changed. You also have this measurement gage called a stopwatch, which is a new invention, and you are not sure if it gives reliable readings. You see all sorts of variation and you are not sure if the times vary because of the obstacles in the chutes or if this new thing called a stopwatch is reliable. (Boy am I stretching it!) You have no other way of understanding the impurity level of the chutes other than with this technique.
Measurement gage is stopwatch (one ball is used throughout the study)
Parts are chutes with unknown levels of impurities
Does this make any sense or should I brew another pot of coffee?
HACL
February 17, 2009 at 11:02 am #181301
Pay no attention to the sigma calculator behind the curtain (Well…not behind the curtain but at the top of this web page). If defects per opportunity are important enough to be in the form of a calculator on this web site, then why can’t someone ask about PPM calculation?
I will give the poster the benefit of the doubt and assume that if they are measuring the supplier’s performance, then they are aimed at seeing improvement. Question 2 will be how to construct a pareto chart!
February 13, 2009 at 11:21 am #181156
I would track monthly, and some cumulative number based on 3 or 6 months.
Line 1 on your chart: Monthly PPM
Line 2 on your chart: Cumulative (total def 3 months / total parts 3 months, expressed in PPM also. This would be a moving window of course).
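A rough sketch of those two chart lines in Python with pandas (the monthly defect and part counts are invented):

```python
# Sketch: monthly PPM plus a 3-month moving-window PPM.
import pandas as pd

df = pd.DataFrame({
    "defects": [12, 8, 15, 9, 11, 7],                      # invented monthly counts
    "parts":   [90000, 85000, 95000, 88000, 92000, 90000],
})

df["monthly_ppm"] = df["defects"] / df["parts"] * 1_000_000
df["rolling_3mo_ppm"] = (df["defects"].rolling(3).sum()
                         / df["parts"].rolling(3).sum() * 1_000_000)
print(df.round(0))
```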
February 10, 2009 at 2:25 am #180929
There is no such thing as a one-sided interval, is there?
February 8, 2009 at 12:21 pm #180827
Hint:
.05+.05=.10
1-.10=.90
January 5, 2009 at 11:17 am #179261
I agree that widening the control limits makes sense as Gary suggests. It also might help to see what textbook the customer has based their statement on! I don’t see the value in shutting down a process for a statistical anomaly when you are miles from the nearest spec limit.
Use the airport analogy with your customer. Assume the runway is 500 feet wide, and statistically your 3 sigma limits are now +/-5 feet. Let’s shut down the airport if we land 6 feet from the centerline!
December 23, 2008 at 11:48 am #179036
Andy,
Who changed the definition of cpk? Not sure I have read that one. I’d like to see the Japanese definition and the new one you are referring to.
And speaking of new metrics beyond cpk, what do you call ppk and “sigma” level? I am not saying that I agree with these metrics, but you are asking the arrogant authors to “call it something else”, and that is what they have done.
HACL
December 12, 2008 at 11:14 am #178641
Badger,
From 160,000 DPMO to 115,000 DPMO? Sounds like you have reduced defects to a moderate extent. (If I used the handy dandy Sigma calculator correctly).
Do your defect paretos follow the 80/20 rule? Usually there is low hanging fruit at the beginning, and the smaller-incremental improvements are achieved well into your continuous improvement efforts.
You could be suffering from low-opportunity-count-itis. A deadly six sigma disease :-)
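For reference, here is a sketch of the conversion most sigma calculators appear to use (Python with scipy; note it builds in the conventional 1.5 sigma shift debated elsewhere on this forum):

```python
# Sketch: DPMO to "sigma level", using the conventional 1.5-sigma shift.
from scipy.stats import norm

def sigma_level(dpmo):
    return norm.ppf(1 - dpmo / 1_000_000) + 1.5

print(round(sigma_level(160_000), 2))   # about 2.49
print(round(sigma_level(115_000), 2))   # about 2.70
```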
HACL
November 27, 2008 at 11:39 am #178141
Let’s not forget the use of the t-test for regression coefficients!
November 6, 2008 at 10:42 am #177435
We should have listened to Rocky Balboa a long time ago. (We can change!!)
I recommend an LSS initiative on the government (or even a Kaizen Blitz). Tell you one thing, I’d sure hate to see one of their spaghetti diagrams. We’d have a world shortage of pasta.
November 6, 2008 at 10:33 am #177434
I think Stan’s issue was with 1%. The statement should have said less than 10% is preferred. (R&R/Tolerance, or P/T ratio)
Before you release a measurement system to a production area, you should do a 5 part MSA. (Repeatability, reproducibility, linearity, bias, and stability).
When you estimate cpk, the measurement variation (which has been deemed to be low, I hope) is included in the estimate of your standard deviation.
November 3, 2008 at 10:47 am #177342
Sick Sigma
Opportunity count inflaters
Deming Slogan Committee
Just a few options
October 29, 2008 at 10:37 am #177167
The xbar and r chart is monitoring “between” group variation and “within” group variation. The limits are based on the “within” variation, which is based on the range of the subgroups.
You want to see evidence of assignable cause on the xbar chart, which is the part to part variation that you hopefully designed into the study. If you don’t see assignable cause, this means you picked a poor selection of parts (all the same), or your measurement process has poor R/R and the “within” variation is large.
October 17, 2008 at 10:42 am #176794
I think you are going to find it difficult to analyze a bunch of 1s and 0s and get any useful information. I suppose logistic regression is the way to handle this kind of response, but I have never used it.
If you can rate the samples on a scale of 1 to 7, you might be better off!
0October 16, 2008 at 9:52 am #176764Think about this example with simple linear regression. You are studying the relationship of adult males height (X) and how that variable is related to weight (Y). You study a range of X between 5 feet tall and 7 feet tall, and choose 5 ft, 5.5 ft, 6 ft, 6.5 ft, and 7 ft as the “settings” for X. You want to do lack of fit, so you randomly choose 5 men at each of the heights. Can you visualize these 25 height values as being normal or a uniform dist? It is only the residuals that have to be normal, not the Y or X values.
October 12, 2008 at 7:55 pm #176668
Seems like the use of the median is the best option if you truly have a skewed distribution.
Are your time study values showing a boundary at zero, or are you studying events that are substantially above zero? Don’t assume that just because you are dealing with time, the natural boundary will impact you!
October 11, 2008 at 12:10 pm #176651
How about saying your cpk is 0.66?
October 9, 2008 at 9:44 am #176575
The best way to understand this is by dropping the letter F from shift. Read what it says a few times, and be enlightened.
October 8, 2008 at 10:13 am #176538
Try arranging a demo of Laserfiche or a similar product. I think you can scan documents and actually have data read from the doc and added to a database. If you don’t have the electronic form of the original data, this could be the next best thing. The assumption is that the data needed is legible and always in the same location on the form.
October 4, 2008 at 11:17 am #176416
Location Location Location
Good food at reasonable costs
Good service
Good atmosphere
You can’t fix the top one on the list very easily, but the others are what you should look at.
What changed over the last 3 months? Fewer customers, same customers but everyone is ordering from the kids menu?
October 1, 2008 at 6:27 pm #176340
Never heard of Charles Hicks, but I learned from Montgomery. Designing experiments by hand is lots of fun, especially when you are asked to define the alias structure for a 2^(7-3) fractional factorial on paper. (One of his PhD students had me do this, not Montgomery.) I think I could have proved that I “got it” with a one half fraction instead, but I did burn through lots of lead on that one.
I use JMP for recursive partitioning also. What’s wrong with design expert?
October 1, 2008 at 4:02 pm #176336
The fumbling is on the part of the original poster, who probably is not as proficient with Excel as you are. You hit the nail on the head regarding the version of Minitab in use.
Two questions:
How do you know if you could have done this in Excel in such a short time when the poster never indicated what “this” is? We don’t even know what analysis he was trying to do. Maybe it was recursive partitioning or something like that.
Would you design an experiment in excel or would you use Minitab, JMP, or Design Expert?
October 1, 2008 at 9:41 am #176312
If I am a Black Belt, am I not supposed to be able to get to root cause on things more readily than mere mortals? Geez….how hard is it to figure out that if you have a student version of software, it might not have full capabilities? I hope that the req was already placed for the current, fully functional version of Minitab or JMP, and Bryan is not fumbling with Excel. Won’t Minitab or JMP easily pay for itself if used by a Black Belt on one successful project?
September 26, 2008 at 5:08 am #176162
The GRR applies to the measurement system (the torque analyzer).
It is not intended to assess the capabilities of the torque wrenches.
Is the customer interested in how well you torque things, or how well you measure torque?
September 17, 2008 at 10:09 am #175843
It’s only fitting to answer Matt’s question as if I were a statistician, and preface it with “it depends”! (only poking fun here)
If you look at Pyzdek’s Six Sigma Handbook, there is some good information on roles and responsibilities. Regarding the MBB, one of the items states “Technical expert beyond black belt level on one or more aspects of process improvement (e.g. advanced statistical analysis…)”.
I suppose it depends on the types of problems one is faced with. Is your environment data-rich, how complex are the problems, do you need to run highly fractionated DOEs?
My advice is to research the source I mentioned above, as well as ASQ and other reputable sources for overall MBB roles. If you are a PhD statistician, you are not necessarily a Master BB. If you are a Master BB, you are not necessarily a technical expert in advanced statistical analysis. Confused yet?
More humor:
A) No stats knowledge
B) Some stats knowledge
C) Advanced stats knowledge
D) PhD
E) Any of the above
F) Any combination of the above
G) Statistically indeterminate
September 17, 2008 at 9:44 am #175842
Some good answers already, and here are more tidbits.
When a process is capable can it be improved, if so how?
A: Widen the spec limits (Just Kidding) B: decompose the variability and see if the biggest source is within piece, piece to piece, or temporal…then look for optimization alternatives. Obviously you have to prioritize your improvement opportunities before jumping the gun.
And what does out of control actually mean?
A) A process that I designed (Just kidding) B) Look at the control chart (half kidding) C) Stan hit the nail on the head on this one. Unpredictable.
As an operator what needs to be done to bring the process in control?
Research what is meant by OCAP. Hopefully the chart designer thought about this before rolling the SPC program out to the floor.
Is there a point in a control chart where one would say it’s too out of control to recalculate control limits?
If your process is unstable to a great degree, you have no business using SPC to begin with. SPC charts can be used as part of characterization, but on-line SPC with a good selection of rules and some well-written OCAPs is really targeted for the control phase.
September 15, 2008 at 10:16 am #175757
If you need a network-based system where data can be collected automatically from metrology equipment, try WinSPC.
Manual data entry is also possible. This tool has many off-line analysis tools as well, and very good report generation capabilities.
September 13, 2008 at 12:35 pm #175737
Mike,
I liked your post but do you really think it was a stupid question ? Back in the Moto days, it was the statistical wizards who were the “elite” group. (Actually it was the statistical wizards who used the tools successfully who were the “elite”!) Today, you just have to know someone who knows someone who knows about statistics.
To the newcomer I can see where this is a confusing topic. If one hires a Master Mechanic, the expectation is that the job will be performed with the highest level of expertise. When you throw the word “Master” in front of “Black Belt”, shouldn’t we expect some level of expertise in the toolbox drawer labeled “Statistics”? Or should we label that drawer “Statistical tools, do not open without a degreed statistician present”?
If you are in IKE’s path….stay safe.
HACL
September 13, 2008 at 11:41 am #175736
I would make sure the pretty charts are included with the Excel template. (The interaction plot and XBar/R plots are pretty useful.) The Hooded Claw might want to use the Excel template using ANOVA, unless he or she was actually doing a take-home exam.
September 12, 2008 at 10:30 am #175705
Are you sure you aren’t doing a take-home exam or something? It sounds crazy that you are not allowed to use Minitab.
My suggestions:
Use the ANOVA method, use Minitab or JMP, use 3 operators.
September 11, 2008 at 11:15 pm #175697
3) Fire the idiot who hired the dufus
September 11, 2008 at 4:04 pm #175660
Try using one language while asking your question! :-)
September 2, 2008 at 9:49 am #175372
We just did one and used 5 torque wrenches of the same nominal value, 3 operators, and 3 replicates. In the GRR platform, substitute “torque wrench” for “part”. Our wrenches have a +/- 6% tolerance so we translated this to in-lbs and plugged this in as the upper minus lower in Minitab. Our R&R / Tol was horrendous. We did a what-if, and used a wider tolerance based on drawing requirements for the screws we are torquing. Even using this tolerance, the R&R/Tol was real bad. We have a new torque analyzer on order.
August 31, 2008 at 11:43 am #175333
These are 2 indices for estimating process capability from historical data. The tricky part is how to estimate the process standard deviation.
Cpk uses only the within subgroup method of estimating the standard deviation
Ppk uses within and between variation to estimate standard deviation. (Drive your process to a stable condition, and use this to show your process capability). Even a stable process will show movement of the mean, and why would you not want to encompass all the variation when you estimate standard deviation for process capability reporting?
I personally think the short term / long term mumbo-jumbo just confuses things. These indices are simply based on two methods for estimating standard deviation from historical data.
Conceptually the “within” subgroup method makes sense for control charts, which will make them sensitive enough to detect assignable causes. I am sure that Shewhart would be glad that I agree with him (ha ha).
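To make the two estimates concrete, here is a sketch in Python (invented data and spec limits; the Rbar/d2 “within” estimate assumes subgroups of size 5):

```python
# Sketch: the two standard-deviation estimates behind Cpk and Ppk.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(100, 2, (25, 5))      # invented: 25 subgroups of size 5
usl, lsl = 110, 90                      # invented spec limits

d2 = 2.326                              # d2 constant for subgroup size n = 5
sigma_within = np.mean(np.ptp(data, axis=1)) / d2   # Rbar / d2 ("within" only)
sigma_overall = np.std(data, ddof=1)                # all data pooled (within + between)

mean = data.mean()
cpk = min(usl - mean, mean - lsl) / (3 * sigma_within)
ppk = min(usl - mean, mean - lsl) / (3 * sigma_overall)
print(f"Cpk = {cpk:.2f}, Ppk = {ppk:.2f}")   # Ppk <= Cpk when the mean moves between subgroups
```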
August 31, 2008 at 10:57 am #175332
Your process mean is at least 6 standard deviations from the nearest spec limit. It could be 6 std dev from one spec limit and 100 std dev from the other….depending on the location of the mean, and the magnitude of the std dev.
August 29, 2008 at 9:56 am #175308
Venugopal G,
I see a lot of good discussion about measuring OEE and similar indicators. I still feel it is important to point out that GRR is simply the wrong tool to quantify real versus reported values. If that is truly what you want, then you are looking at bias, not R&R.
For bias, you have to somehow know the true value of down times and compare with reported. As another poster indicated, you should probably set up camp and conduct a time study.
August 27, 2008 at 10:14 am #175217
How would you ever replicate a measurement? GRR does not seem possible in this case.
August 26, 2008 at 10:14 am #175182
Did the initial question ever get answered? We are embarking on a similar endeavor.
The plan at this point is to try an abbreviated version with 5 parts, 3 operators, and 3 replicates. “Parts” in this case are 5 different torque wrenches. The objective is to study our torque analyzer, not the torque wrenches themselves. If the torque wrench has a specification of 16 +/- 2, we will calculate the precision to tolerance ratio with a tolerance of 4 units.
Insights? This was a little tricky to plan. The torque analyzer has its own spec, the torque wrenches have theirs, and the product that the wrenches are used on also has a specification on the degree of torquing required.
June 4, 2008 at 5:03 pm #172550
Dan,
I assume that you are trying to determine if the means of the 3 groups are “equal” or if “at least 1 is not equal to the others”. This is usually the case with one way anova with 3 or more treatments. (or should I say..always the case)
Your observation about the 3X variance within one of the groups is a significant finding, and you should be delving into the root cause of this finding! The statement about the p-value seems inconsistent. Was this related to the test for equal variances? (if so, you would fail to reject the null…hence the variances would not be deemed to be different)
You might want to explain the basis for your derived p-value. Did you ignore the unequal variances and proceed with the F-test for comparison of means? If so, the inflated variance within the one group is probably the cause for the high p-value. (SS within is very large….F ratio is low….p value is high)
It would be helpful if you explained how the 3 groups were formed. Is this a shift to shift study (first, second, third) or something along these lines?
Bottom line..forget about making the statistical test work and find out the cause of the variation.
HACL’s 2 cents for the day (That’s all I can afford after going to the gas pump)
May 9, 2008 at 2:08 pm #171861
R&R / Study variation and R&R / tolerance are 2 different things.
Is the 7.6% value your P/T ratio? (R&R / Tolerance)
Need a little more detail in your question.