ANOVA

Viewing 12 posts - 1 through 12 (of 12 total)
• #46064

Sridhar
Member

Hi,
When you use a one-way ANOVA to compare more than two data sets, can we conclude improvement based on the p-value? I ask because I thought a p-value only compares two data sets.
Thanks
Sridhar

#151740

accrington
Participant

I don’t understand your question. You don’t use one-way ANOVA to compare two treatments, and you don’t use p-values to measure improvement.
What are you trying to use ANOVA for?

#151751

Darth
Participant

Looking at it another way: you use ANOVA to test for differences in means across two or more populations, usually more than two, though it can be used for two instead of a t-test. If the p-value calls for the null hypothesis to be rejected, and the mean that is different is “after” versus “before” and it is directionally correct, you might jump to a conclusion of “improvement”. Do you agree?

#151766

Jim Shelor
Participant

Dear Sridhar,
When you perform a one-way ANOVA, the null hypothesis is that all the means are equal, whether you are testing 2 or 12 means together.
The alternate hypothesis is that at least one of the means is different, but the p-value does not identify which one or ones are different.  When you are testing two, you should use a t-test, because you get more information from a t-test.
When using an ANOVA, I always include a box plot among the plots to be produced.  It is much easier for me to interpret the differences from a box plot than from the data and graphs in the ANOVA printout, especially when testing a large number of means.
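Jim's workflow (run the ANOVA, then read the differences off a box plot) can be sketched in Python with scipy; the three data sets below are invented for illustration, not taken from the thread:

```python
# One-way ANOVA on three hypothetical data sets. The p-value only says
# whether at least one mean differs; the box plot is what shows you which.
from scipy import stats

before  = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
tweak_a = [10.4, 10.6, 10.3, 10.5, 10.7, 10.4]
tweak_b = [10.0, 10.2, 9.9, 10.1, 10.0, 10.3]

f_stat, p_value = stats.f_oneway(before, tweak_a, tweak_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# To see which group stands apart, plot the groups side by side:
# import matplotlib.pyplot as plt
# plt.boxplot([before, tweak_a, tweak_b], labels=["before", "A", "B"])
# plt.show()
```

Here the small p-value rejects equal means, and the box plot would show that only the `tweak_a` group sits apart from the other two.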
Sincere regards,
Jim Shelor

#151768

accrington
Participant

I agree that you could use ANOVA instead of a t-test, but wouldn’t normally.
You could use a low p-value to conclude ‘improvement’. The problem is, you may be completely misleading yourself if you do not understand the underlying process behaviour (again, the old enumerative-methods-applied-to-analytic-problems debate), or do not have a good understanding of the physical phenomenon being studied.
A ‘statistically significant’ difference does not necessarily mean that the change is of any practical significance. It might just mean that you took a big sample.
On the other hand, if the test tells you not to reject the null hypothesis, you may be missing something important if you do not look at the magnitude of the effect (throwing the scientific baby out with the statistical bathwater; biologists do it all the time, because of their reluctance to publish any work which is not statistically significant).
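Accrington's big-sample point is easy to demonstrate with a quick simulation (hypothetical data, invented for illustration), using scipy's two-sample t-test:

```python
# With 10,000 observations per group, a 0.1-unit shift on a scale of ~100
# is flagged as highly "significant" even though it may be of no practical
# consequence at all. Simulated (hypothetical) data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
baseline = rng.normal(loc=100.0, scale=1.0, size=10_000)
shifted  = rng.normal(loc=100.1, scale=1.0, size=10_000)

t_stat, p_value = stats.ttest_ind(baseline, shifted)
shift = shifted.mean() - baseline.mean()
print(f"estimated shift = {shift:.3f}, p = {p_value:.2e}")
```

The p-value comes out tiny, yet whether a 0.1-unit shift matters is an engineering question the test cannot answer.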

#151774

Jim Shelor
Participant

Dear Accrington,
When a Six Sigma professional runs an ANOVA, or any other statistical test for that matter, the problem is not over when the results of the statistical test are obtained.
The next step is to figure out why the difference, or no difference, exists.
For example, if I am looking for a difference at 95% confidence and the p-value comes up 0.4, that says at least one of the means is different, but only to a 96% confidence.  Now the question is why, and do I care?
On the other hand, if the p-value comes up 0.6, the means are not different, but only to a 94% confidence.  The question is why, and do I care?
If the p-value comes up 0.000, the means are significantly different, and the question is now only why, because in all probability I will care.
If the p-value is 0.999, the means are not different by a large margin.  You still want to know why, but you are not going to spend an inordinate amount of time finding out, since the indication of sameness is so strong.
The point is, a Six Sigma professional does not “throw the baby out with the statistical bathwater”.  There are reasons for everything and the Six Sigma professional wants to know what those reasons are.
Some scientists may do what you are suggesting, but are they Six Sigma professionals or scientists?
Respectfully,
Jim Shelor

#151778

Darth
Participant

No argument there.

#151803

accrington
Participant

Hi Jim
Whether Six Sigma professional or scientist, both should be familiar with and be able to use scientific method. The problem with statistical tools taken out of the context of the phenomenon under study is that the wrong methods are often used, leading to inappropriate action.
A lot of the Six Sigma training materials that I have seen fail to distinguish between analytic and enumerative studies, and emphasise the use of hypothesis testing, without any reference to the underlying process, or the use of control charts to understand process behaviour.
The use of hypothesis tests on processes which operate over time is inappropriate.
Also, what is a Six Sigma “professional”? Most professions require several years of advanced study followed by several years of experience in the application of one’s chosen discipline. Is this the case with most Six Sigma professionals?
And, to my knowledge, R.A. Fisher wasn’t a Six Sigma professional

#151812

Jim Shelor
Participant

Dear Accrington,
On one point we agree.  Taking a statistical result at face value and acting on that result without evaluating the statistical result against how the process works is inappropriate.
The statistical result should be evaluated for at least two things:
1.  Does the answer make sense, given how the process works?
2.  Do I care that a statistical difference exists between the parameters, given the effect of the parameters on the process?
On the second point we disagree.  Say we have two product lines working in parallel and producing the same product.
One line appears to be making more product than the other.
Management wants to perform major maintenance and investigation into the lower performing line to make the lines operate more closely.
I run a t-test that shows the two lines are not statistically different.
Should I let management spend a lot of money investigating the difference, or should I tell them (based on my hypothesis test) there is no significant difference between the output of the lines?
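Jim's parallel-lines scenario boils down to a two-sample t-test; a minimal sketch with invented daily-output figures (not from the thread):

```python
# Two-sample t-test comparing hypothetical daily output of two lines.
from scipy import stats

line_1 = [98, 102, 101, 99, 100, 103, 97, 100]
line_2 = [99, 101, 100, 102, 98, 100, 101, 99]

t_stat, p_value = stats.ttest_ind(line_1, line_2)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value says the apparent gap is within normal variation:
# grounds for telling management the expensive investigation is unneeded.
```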
Running hypothesis tests on continuous running processes is appropriate.
Regarding the words Six Sigma professional, does it really make any difference that I said that?  Do you really care whether or not I used that term?  Do you consider yourself a professional at what you do, based on your attitude and performance of your job?
If it really makes that much difference, I will stop using the word professional.
Best regards,
Jim Shelor

#151863

accrington
Participant

Running hypothesis tests on continuous processes is NOT appropriate. And telling people to do this is bad advice. Read pp. 131-132 of Out of the Crisis for W.E. Deming’s view.
I do consider myself a professional at what I do, based not only on my attitude and performance in my job, but also on my education and experience (a lot more than 4 weeks training and two projects)

#151873

Mayes
Participant

I think your p-values should be 0.04 and 0.06 instead of 0.4 and 0.6.

#151901

Jonathon Andell
Participant

There can be a small difference between ANOVA and a t-test: a t-test can operate with a one-tailed alternate hypothesis, which is sometimes appropriate. If my dimming memory serves me right, ANOVA with 2 samples should yield the same p-value as a two-tailed t-test. Please correct me if I am mistaken.
I totally agree with the discussion thread about understanding the process. I think it was Diamond who said that the objective of statistical analysis is to understand the process, not the data. I always encourage people to plot the data as many ways as possible, including control charts. I also like box plots, histograms, probability plots, etc. I rarely display more than one or two of those plots once I think I understand what the process is doing, but I frequently look at a wide variety of charts to gain the understanding.
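Jonathon's recollection is correct for the pooled-variance case, and it is easy to verify numerically (data invented for illustration): with two groups, the one-way ANOVA F statistic equals t squared, and the p-value matches the two-tailed, equal-variance t-test.

```python
# For two groups: one-way ANOVA p-value == two-tailed pooled t-test p-value,
# and F == t**2. Hypothetical data.
from scipy import stats

a = [5.1, 4.9, 5.3, 5.0, 5.2]
b = [5.6, 5.4, 5.8, 5.5, 5.7]

t_stat, p_t = stats.ttest_ind(a, b)   # equal_var=True is the default
f_stat, p_f = stats.f_oneway(a, b)

print(f"t^2 = {t_stat**2:.3f}, F = {f_stat:.3f}")
print(f"p (t-test) = {p_t:.6f}, p (ANOVA) = {p_f:.6f}")
```

The equivalence holds only for the two-tailed, pooled-variance t-test; a one-tailed test or Welch's unequal-variance test will give a different p-value.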


The forum ‘General’ is closed to new topics and replies.