iSixSigma

Paired T-Test Confidence Intervals


    #49354

    IOMH
    Participant

    I have performed a paired t-test on 77 samples measured before and after a process step. The test rejected the null hypothesis (the hypothesized mean difference of zero fell outside the 95% t-confidence interval, with a p-value of 0.001).

    My question is: am I correct that I can use the lower and upper bounds of the 95% t-confidence interval to say, quantitatively, how much of a difference the process step may be making to the metric?
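
    For reference, a minimal sketch of the same calculation in Python with NumPy/SciPy (an assumption on my part – the original analysis was done in Minitab; the two arrays are made-up placeholder values, not the actual 77 measurements):

    # Paired t-test plus a 95% t-confidence interval for the mean difference.
    # Placeholder data only – substitute the real in-process and final QC values.
    import numpy as np
    from scipy import stats

    in_process = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7])   # in-process QC
    final_qc   = np.array([10.3, 10.0, 10.6, 10.1, 10.2, 10.4, 10.7, 9.9]) # final QC, same parts

    diff = final_qc - in_process          # paired differences, the quantity the test works on
    n = len(diff)

    t_stat, p_value = stats.ttest_rel(final_qc, in_process)   # two-sided paired t-test

    mean_d = diff.mean()
    se_d = diff.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)
    ci_low, ci_high = mean_d - t_crit * se_d, mean_d + t_crit * se_d

    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    print(f"mean difference = {mean_d:.3f}, 95% CI = ({ci_low:.3f}, {ci_high:.3f})")

    If the interval excludes zero, its endpoints bound how large the mean shift plausibly is, which is exactly the quantitative statement asked about above.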

    #168618

    Mikel
    Member

    Are you sure it's a paired t-test that you should be using?

    #168619

    IOMH
    Participant

    Hi Stan – I believe so. Each part is first QC’d in-process, while it is being manufactured, to meet two specifications (metrics “A” & “B”); it then continues through the remaining process steps until the part is finished (i.e. the final product), at which point the final QC is performed. Each pair of data comes from the in-process QC and the final QC on the same part; the only difference is that the final QC is performed after the additional process steps.

    #168620

    Mikel
    Member

    What is your null hypothesis?

    #168621

    IOMH
    Participant

    My null hypothesis was that the mean difference = 0 (taken from the “T-Test of mean difference = 0” line in the Minitab session window).

    #168622

    Mikel
    Member

    In plain English, your null hypothesis is that the process step has no effect on the two metrics?
    The answer to your original question is yes: the confidence interval estimates the amount of difference the process step is making.

    #168623

    IOMH
    Participant

    Correct. We are investigating whether or not we can stop the final inspection of the parts and just use the in-process QC results. Knowing how much the process step affects the results will help us determine whether the maximum plausible difference is practically significant.
    Thanks for your help, Stan!

    #168625

    Sloan
    Participant

    Regarding your null and alternate hypotheses, are you only looking for a difference, or are you looking for a difference in a specific direction? I would presume that you want your difference to be in a specific direction, not simply “not equal to zero.”
    Therefore your null hypothesis might be “sample 1 is less than or equal to sample 2,” and your alternate hypothesis would be “sample 1 is greater than sample 2” (or whatever makes sense for your process).
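
    A minimal sketch of that directional version in Python/SciPy (assuming a SciPy release recent enough to accept the alternative= argument; the arrays and the chosen direction are illustrative, not the poster’s data):

    # One-sided paired t-test: H0: mean(sample_1 - sample_2) <= 0  vs  H1: mean > 0.
    # Illustrative placeholder data.
    import numpy as np
    from scipy import stats

    sample_1 = np.array([10.3, 10.0, 10.6, 10.1, 10.2, 10.4, 10.7, 9.9])
    sample_2 = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7])

    t_stat, p_one_sided = stats.ttest_rel(sample_1, sample_2, alternative="greater")
    print(f"t = {t_stat:.3f}, one-sided p = {p_one_sided:.4f}")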

    #168629

    t test
    Member

    Never really understood the need for a one-tailed t-test… Why not run the two-tailed test and simply note the obvious direction of the difference?

    #168634

    Robert Butler
    Participant

    If direction matters, then a one-sided test will test the direction, and it will provide a higher level of significance than a two-tailed test. For a small sample this could easily be the difference between detecting a significant difference and not detecting one.
    Section 9.5 of An Introduction to Medical Statistics, 3rd Edition (Bland) has a good discussion of the issue.

    #168638

    Sloan
    Participant

    Thank you Mr. Butler,
    My point in raising this question is that the original poster was suggesting using the results of these paired t-tests in place of inspection (an idea I agree with). My concern is that with a two-tailed test it could be very easy for the process to show significance in the wrong direction, and a person reviewing the results who wasn’t paying attention might miss that fact by looking only at the p-value. With a one-tailed test you build in some robustness, so that you only see significance in the direction you are looking for.

    #168642

    anon
    Participant

    I am confused somewhat by this. The alpha level (typically 0.05) denotes the amount of alpha risk you are willing to take (i.e. saying something is significant when it is not). If you are using a two-sided test, then you split that alpha between the two tails, so each tail carries 0.025 of that alpha level. This determines the critical values at the low and high ends; if the result is more extreme than the critical value, then the thing you are testing is significant.
    When you do a one-sided test, the whole alpha risk sits on that one side, so it would seem the critical value would change. For example, if you are testing “greater than” (rather than “not equal to”), does the critical value at which you deem the results significant change?
    It seems that if you do a one-sided test, you would have a lower value at which to claim a difference than you would if you did a two-sided test.
    Can someone explain if that is right or wrong?

    #168647

    t test
    Member

    Thanks Robert! But “providing a higher level of significance” is confusing to me… I thought the significance level (alpha) was a fixed value chosen by the investigator for tests of significance, not determined/provided by the test. Could you briefly explain? Thank you again!

    #168650

    Robert Butler
    Participant

    T Test – anon’s earlier post summarizes the situation.
    For a two-sided test the critical t statistic is t(1 − α/2, n − 1), whereas for a one-sided test the critical value is t(1 − α, n − 1).

    So for a two-sided test at the 95% level the critical t value in the limit is 1.96, whereas for a one-sided test the critical value is 1.645. The critical value is smaller for the one-sided test than for the two-sided one, which means you wouldn’t need as large a test statistic before you declared a significant difference.
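
    As a quick numerical check of those figures, a short SciPy sketch (taking df = 76, i.e. n − 1 for the 77 paired samples in this thread; the 1.96 and 1.645 quoted above are the large-sample limits):

    # Compare the two-sided and one-sided critical t values at alpha = 0.05.
    from scipy import stats

    alpha, df = 0.05, 76                          # df = n - 1 for 77 paired samples
    t_two_sided = stats.t.ppf(1 - alpha / 2, df)  # about 1.99
    t_one_sided = stats.t.ppf(1 - alpha, df)      # about 1.67
    print(f"two-sided: {t_two_sided:.3f}, one-sided: {t_one_sided:.3f}")

    # In the limit (normal approximation) these tend to 1.96 and 1.645:
    print(stats.norm.ppf(1 - alpha / 2), stats.norm.ppf(1 - alpha))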
     

    #169152

    Brad
    Participant

    As you know, one of the key characteristics of the paired t-test is that the data pairing must be kept consistent throughout the test, unlike other hypothesis tests. So I would check that the entered data has remained in pairs; this includes the sample order too. The test is based on comparing the difference for each paired measurement, so you are correct in stating the difference between the measured samples.
    I would just check for typos or data-entry issues if you got an unexpected result for Ho.
    What may be interesting, and may be beyond the bounds of your question, is to plot the relationship between the paired measurements and see whether there are one or more of the 77 samples for which you struggled to get the same measurement. If you can explain why they are there, you may want to remove them from the data set and re-run the test to see if Ho still stands.
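
    A sketch of that plotting idea in Python with matplotlib (the arrays, the deliberately inserted outlier, and the 2-sigma flagging rule are illustrative assumptions, not part of the original data):

    # Plot the paired measurements and flag parts with unusually large differences.
    import numpy as np
    import matplotlib.pyplot as plt

    in_process = np.array([10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.4, 9.7])
    final_qc   = np.array([10.3, 10.0, 10.6, 10.1, 10.2, 11.4, 10.7, 9.9])  # one deliberate outlier

    diff = final_qc - in_process
    flag = np.abs(diff - diff.mean()) > 2 * diff.std(ddof=1)   # crude outlier screen

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
    ax1.scatter(in_process, final_qc, c=np.where(flag, "red", "steelblue"))
    ax1.plot([in_process.min(), in_process.max()],
             [in_process.min(), in_process.max()], "k--", lw=1)  # y = x reference line
    ax1.set_xlabel("in-process QC")
    ax1.set_ylabel("final QC")

    ax2.stem(diff)                     # per-part difference, final minus in-process
    ax2.axhline(diff.mean(), ls="--")  # mean difference
    ax2.set_xlabel("part index")
    ax2.set_ylabel("final - in-process")
    plt.tight_layout()
    plt.show()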

