iSixSigma

Power of Test – Need help


Viewing 4 posts - 1 through 4 (of 4 total)
  • #48748

    Marz
    Participant

    Dear All:
    I need help understanding the power of a test.
    In the Improve phase we ran a pilot with one team in focus. I now have 15 days of AHT data (pilot team AHT and process AHT), with more than 300 calls each day, so each day's AHT is based on over 300 calls.
    I cannot increase the number of days in the data set.
    The standard deviation is 1.06, and the difference to be tested is 1 min.
    Minitab 15 gives the power of the 2-sample t-test as 73%.
    When I ran the 2-sample t-test, I was able to prove statistically that the difference is >1 min. Could there be an error in my judgement in this scenario?
    Please suggest.
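
    For reference, the 73% figure can be reproduced approximately with a normal-approximation power calculation (a sketch only, assuming n = 15 days per group, sd = 1.06, delta = 1 min, and a two-sided alpha of 0.05; Minitab uses the noncentral t distribution, so its answer differs slightly):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_sample_power(delta, sd, n_per_group, alpha_z=1.959964):
    """Approximate power of a two-sided 2-sample t-test
    using the normal approximation (alpha_z = z for alpha = 0.05)."""
    se = sd * sqrt(2.0 / n_per_group)   # std. error of the difference
    ncp = delta / se                    # shift of the mean in z units
    # Probability of landing beyond either critical value under H1
    return norm_cdf(ncp - alpha_z) + norm_cdf(-ncp - alpha_z)

print(round(two_sample_power(delta=1.0, sd=1.06, n_per_group=15), 2))
# about 0.73, in line with Minitab's reported power
```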

    #165253

    Ward
    Participant

    When you ran the 2-sample t-test, what was your p-value?

    #165260

    Marz
    Participant

    Hi Pete:
    The p-value is 0.000. The test result is given below.
    Two-Sample T-Test and CI: AHT-Sherwin Team (Pilot Phase), AHT-Process (Pilot Phase)
     
    Two-sample T for AHT-Sherwin Team (Pilot Phase) vs AHT-Process (Pilot Phase)
     
                               N   Mean  StDev  SE Mean
    AHT-Sherwin Team (Pilot   16  17.84   1.08     0.27
    AHT-Process (Pilot Phase  28  19.55   1.05     0.20
     
     
    Difference = mu (AHT-Sherwin Team (Pilot Phase)) - mu (AHT-Process (Pilot Phase))
    Estimate for difference:  -1.716
    95% upper bound for difference:  -1.157
    T-Test of difference = 1 (vs <): T-Value = -8.17  P-Value = 0.000  DF = 42
    Both use Pooled StDev = 1.0602
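
    For anyone who wants to check the arithmetic, the pooled standard deviation and t-value above can be reproduced from the printed summary statistics alone (a sketch using the n, Mean, and StDev shown, with a pooled-variance t statistic and the hypothesized difference of 1 that Minitab tested):

```python
from math import sqrt

# Summary statistics from the Minitab output above
n1, m1, s1 = 16, 17.84, 1.08   # AHT-Sherwin Team (Pilot Phase)
n2, m2, s2 = 28, 19.55, 1.05   # AHT-Process (Pilot Phase)
hypothesized_diff = 1.0         # Minitab tested "difference = 1 (vs <)"

# Pooled standard deviation
sp = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

# Standard error of the difference and the t statistic
se = sp * sqrt(1.0 / n1 + 1.0 / n2)
t = ((m1 - m2) - hypothesized_diff) / se
df = n1 + n2 - 2

print(round(sp, 4), round(t, 2), df)
# close to Minitab's 1.0602 and -8.17; the rounded means printed
# above shift the last digits slightly
```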
     

    #165418

    Erik L
    Participant

    Kevin,
     
    A couple of items. Power is the likelihood of detecting a difference if it is really there. The flip side of power is beta (1 - power), which you can think of as the percentage of the time you will miss real signals in the data. Typical studies shoot for approximately 80% power (of course, this varies with the criticality of committing a beta error).
     
    When I went through your string of responses, I think I found an error. If I follow the Minitab output, you are testing a directional alternative rather than a not-equal alternative. If I'm wrong about this, then the rest of my response is moot...
     
    The directional alternative makes it easier to reject the null with a given sample size, and it actually raises your power to 83%. Re-running the analysis with the even larger delta you observed, rather than the one you initially hypothesized, puts the actual power of your study above 99%. I think you can feel pretty confident that the significance detected is real, and that if you repeated the study you would confirm the first result.
     
    Regards,
    Erik
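
    Both figures in Erik's reply can be checked with the same kind of normal-approximation sketch (assuming n = 15 per group, sd = 1.06, and a one-sided alpha of 0.05; Minitab's noncentral-t calculation will differ slightly):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def one_sided_power(delta, sd, n_per_group):
    """Approximate power of a one-sided 2-sample t-test
    (normal approximation, alpha = 0.05)."""
    se = sd * sqrt(2.0 / n_per_group)   # std. error of the difference
    z_crit = 1.644854                   # z for one-sided alpha = 0.05
    return norm_cdf(delta / se - z_crit)

# Power for the originally hypothesized 1-minute difference
print(round(one_sided_power(1.0, 1.06, 15), 2))   # about 0.83

# Power for the observed delta of roughly 1.7 minutes
print(one_sided_power(1.716, 1.06, 15) > 0.99)    # True
```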


The forum ‘General’ is closed to new topics and replies.