iSixSigma

when p-value is close to .05

    #36479

    Alifa
    Participant

    Hi,
    I have a doubt: what should our decision be when
    i. 0.05 < p-value < 0.10, and
    ii. 0.01 < p-value < 0.05?
    In both cases, let us say my sample size is 100.
    regards
     

    #105342

    Dr. Reiner Hutwelker
    Participant

    This is a matter of the objective of your investigation. If I want – with a low budget – to have a first look into the data with a small sample size, I accept a bigger alpha error and, as a tradeoff, a smaller beta error. So I accept 0.05 < p < 0.10, because I want to "see" every possible cause (x) of my result (Y). If the result is significant at this level, I try again with a bigger investigation.
    If, on the other hand, I want to be sure that the new, expensive machine we are trying out is really better than the old one, I set my alpha level to 0.01, i.e. p < 0.01. In this situation I accept a bigger beta error, for I want to see a really significant improvement.
    But please keep in mind that significance has nothing to do with the power of your effect. With a big sample size you might always get a significant result, but the improvement may be so small that nobody cares.
    This is how I explain the topic to my trainees. I hope it is also helpful for you.
    regards,
    Reiner Hutwelker 
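    To make his last point concrete, here is a minimal sketch in Python (scipy is assumed to be available; the machine data is simulated, not real): with a huge sample, even a trivially small shift in the mean comes out "statistically significant".

```python
# Sketch: statistical significance without practical significance.
# All numbers are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
old = rng.normal(loc=100.00, scale=5.0, size=200_000)  # old machine
new = rng.normal(loc=100.05, scale=5.0, size=200_000)  # 0.05 units "better"

t_stat, p_value = stats.ttest_ind(old, new)
print(f"p = {p_value:.4g}")  # tiny p-value, yet a shift nobody cares about
```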
           

    #105343

    Preeti S
    Participant

    Dear alifa,
    If your p-value is above .05 – like the .10 you mentioned – then we fail to reject the null hypothesis.
    Hope this helps

    #105345

    Robert Butler
    Participant

    The following post may be of some help.
    https://www.isixsigma.com/forum/showmessage.asp?messageID=19089

    #105367

    KBailey
    Participant

    Alifa,
    Don’t limit yourself to the other answers you’ve gotten. If your p-value is borderline, there are other things to look at before making your determination.

    Look for sources of variation that should be included in your model.
    Tighten up process control. Make sure everyone’s following SOPs and that those SOPs are adequate.
    If you didn’t already, make sure to check for interactions and non-linear relationships.
    Good luck!
    k
     
     

    #105961

    Deependra
    Participant

    I share Preeti's opinion: if the p-value is <0.05, the null hypothesis is to be rejected. If it is a borderline case, the test can perhaps be performed again with a larger sample size.
     
    Hope this helps
     

    #105962

    Vikas Nagpal
    Member

    Alifa,
    When the p-value is greater than .05 we accept the null hypothesis. When the p-value is less than .05 we accept the alternate hypothesis. When the p-value is close to .05 we generally accept the null hypothesis, but it depends on how critical the process is.
     

    #105966

    Brains
    Participant

    Guys, just remember this.
    If the p-value is <0.05, say 0.04… this means that there's only a 4% chance of the null hypothesis happening. On the other hand, there's a 96% chance of the alternate hypothesis happening.
    And, in 6 Sigma, we never say that we accept the null hypothesis. We say that we don’t have enough evidence to reject the alternate, therefore we must declare the null hypothesis.
    Hope this makes it clearer.
     

    #105970

    Welwisher
    Member

    Dear Vikas Nagpal,
    In reality, when you do hypothesis testing you never accept a null hypothesis. We either reject the null hypothesis or fail to reject it. Please make sure the information is correct when you share it in these forums, or else wait for experts to comment.
    Secondly, the p-value is only a statistical guide for the next step in decision making. I like the explanation given by one of the respondents on what to do practically based on the p-value.
     

    #105971

    Jordan
    Member

    A quote from George Box in “Statistics as a Catalyst to Learning by Scientific Method”, Journal of Quality Technology, January 1999:
    “The concept of hypothesis testing at accepted fixed significance levels is, so far as it can be justified at all, designed for terminal testing, on past data, of a believable null hypothesis. This makes little sense in a context of exploratory inquiry. We should not be afraid of discovering something. If I know with only 50% probability that there is a crock of gold behind the next tree, should I not go and look?”
    In the inquiry phase you can sometimes keep factors with p-values of 0.2, but when you put a process into production or buy a costly tool, the p-value should be much lower than 0.05.
    Yves

    #105974

    Damodaran
    Member

    Hi
    If the p-value is >0.05, accept the null hypothesis; if p<0.05, reject the null hypothesis.
    If the p-value is near the border, the experiment should be conducted again with a larger sample size if possible, followed by root cause analysis.
    Sanjay Yatgiri

    #105977

    Pankaj Bansod
    Participant

    To accept the null hypothesis, the p-value should be >0.05;
    otherwise reject the null hypothesis.
     
    Regards,
    Pankaj Bansod
     

    #105979

    Gary A. Gack
    Participant

    It depends on what is at stake – p is just a measure of risk. If life is at stake you will certainly not act on a higher p; if you only risk $1.50, a higher p is OK.

    #105980

    Matt M
    Participant

    It depends on the risk you are willing to take. Why did you set your limit at .05? Is there a high dollar value or are safety issues involved?
    I normally set it at 0.1, and if I’m close I look at the data, possibly run another, larger sample, determine if there are outliers, etc.
    Don’t limit your options! Take a chance. I’ve actually had great success with 50% confidence levels; as our MBB said, if you’re not willing to gamble you’ll never win anything.

    #105983

    “Ken”
    Participant

    Yves and Gary – very good answers!
    We must base our decisions on what the statistics and data tell us but we should never let numerical values set limitations on our rational thinking.
    Suppose we were looking at four factors and the resulting p-values were .21, .18, .15, .06. According to the book we would say there is no significance, as we did not reach .05 or less on any factor. But with this large a spread in p-values, the probability is high that something is going on with the factor producing the .06. Look over the data and testing procedure again and re-investigate.
    Don’t overlook the pot of gold behind the tree just because the tree did not carry the correct number.

    #105984

    Sigmordial
    Member

    Hi Dr. Hutwelker,
    There may be a possibility for a misunderstanding with your statement: “But please keep in mind that significance has nothing to do with the power of your effect.” That is true, but the juxtaposition of significance and power could lead one to conclude that there is no relationship between the alpha and beta risks.
    The stated alpha does impact beta (and hence the power), though not as strongly as sample size does. It is an interesting exercise – if you have Minitab, keep the same sample size and stated effect, and see what happens to the power as you vary the alpha. Then keep alpha fixed and vary the sample size.
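    The same exercise can be sketched in Python instead of Minitab (statsmodels is assumed to be installed; the effect size and sample sizes below are arbitrary):

```python
# Vary alpha at fixed n, then vary n at fixed alpha, and watch the power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Fixed sample size and effect, varying alpha: power rises with alpha.
for alpha in (0.01, 0.05, 0.10):
    p = analysis.power(effect_size=0.5, nobs1=50, alpha=alpha)
    print(f"n=50, alpha={alpha:.2f} -> power={p:.3f}")

# Fixed alpha, varying sample size: a much stronger lever on power.
for n in (20, 50, 100):
    p = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"n={n}, alpha=0.05 -> power={p:.3f}")
```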

    #105988

    Luis Javier Rivas
    Participant

    Good responses.
    You can also look at box-plots of both samples to see the dispersion of the data. Consider the Cpk, and verify your objective in the decision making: what is critical to satisfy for your customer?
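    For instance, a minimal box-plot comparison in Python (matplotlib is assumed; the two samples here are simulated stand-ins for your data):

```python
# Compare location and spread before trusting a borderline p-value.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(seed=7)
sample_a = rng.normal(loc=10.0, scale=1.0, size=100)
sample_b = rng.normal(loc=10.4, scale=1.5, size=100)

plt.boxplot([sample_a, sample_b])
plt.xticks([1, 2], ["sample A", "sample B"])
plt.ylabel("measurement")
plt.show()
```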

    #105992

    Alfonso Villalobos
    Participant

    In my opinion it’s more a matter of your own criteria.
    Remember that the p-value is a way of saying the probability of Ho being true… so the common alpha values are 0.05 and 0.01, which are only reference points for deciding where you reject or do not reject Ho.
    I think you should use your common sense and process expertise to make this kind of decision.
    But, as I saw in some postings, I agree that a good idea is to sample some more to make a better decision.
    Hope this helps.

    #106028

    atul singh
    Participant

    To correct Mr. Pankaj Bansod:
    dear Pankaj,
    we never accept the null hypothesis; we fail to reject the null hypothesis.
    regards,
    atul

    #106048

    Shrinivas Pote
    Member

    If p > 0.05: at the 95% confidence level we do not have sufficient evidence to reject the null hypothesis, hence we fail to reject it.
    If p < 0.05: at the 95% confidence level we have sufficient evidence in favor of the alternative hypothesis, hence we reject the null hypothesis.
    Hope this makes it clear.
    Pote.
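    That decision rule in code form – a minimal sketch in Python (scipy is assumed; the data is simulated, and alpha = 0.05 is just the usual convention):

```python
# Two-sample t-test with the usual 0.05 decision rule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=3)
group_a = rng.normal(loc=50.0, scale=4.0, size=100)
group_b = rng.normal(loc=52.0, scale=4.0, size=100)

alpha = 0.05
t_stat, p_value = stats.ttest_ind(group_a, group_b)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```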

    #106060

    Ronald
    Participant

    I have always taught this in terms of cost benefit:
    1. If the cost of a wrong decision is high and the gain is low in comparison to the cost, you probably want 95%-99% confidence.
    E.g. gambling $500,000 to win $1,000 – you want to know you have a 95-99% chance of winning.
    2. If the cost of a wrong decision is low and the gain is high, you can use 75% or even lower.
    Gambling $1,000 to win $500,000 – if you lose every now and then, it’s no big deal, because the possible improvements are worth the risk.
    3. If the cost and the gain are comparable, I typically stay at the 95% level to ensure buy-in and acceptability, although I could look at 90% if the cost of further sampling is dramatically high.
    The hard and fast rule of “p<0.05” is dangerous. What is needed is to always combine a financial and risk analysis of why a certain confidence level and sample size were picked. As a project sponsor asked to commit resources (people/time/money), I would be foolhardy to just accept a “textbook” answer that 95% confidence is the accepted method for hypothesis testing.
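    A rough sketch of that break-even logic in Python (plain arithmetic; the dollar figures are Ronald's illustrative numbers, not real project data): if you are right with probability c you gain G, otherwise you lose L, so the bet breaks even when c = L / (G + L).

```python
# Minimum probability of being right for a positive expected value.
def breakeven_confidence(gain: float, loss: float) -> float:
    return loss / (gain + loss)

print(breakeven_confidence(gain=1_000, loss=500_000))  # ~0.998: be very sure
print(breakeven_confidence(gain=500_000, loss=1_000))  # ~0.002: worth a shot
```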

    #106062

    TN Goh
    Member

    What a thing to say!!!  50%?

    #106082

    vineeth kumar
    Member

    Hi,
    Regarding the sample size – it should be big enough for statistical analysis; a minimum of 30 is usually recommended.
    Before doing the hypothesis test, you need to do a normality test on the data (I am assuming the data is continuous).
    If the data is normal, then you can do the hypothesis test.
    1. If the p-value is < 0.05, reject the null hypothesis – the two samples are significantly different.
    2. If the p-value is > 0.05, do not reject the null hypothesis – you need to do some further experimentation and analysis before concluding.
    This is because experimental error or measurement system error can also cause the variation in your measurement. So check that first, fix it, and then check the p-value again.
    If you repeatedly get a p-value > 0.05, then you can say with 95% confidence that the samples are not statistically different.
    In my opinion, in hypothesis testing only two cases exist: below 0.05 and above 0.05. So don’t get confused between 0.05 and 0.1!
    I think I have clarified it for you.
    Have a nice day,
    vineeth
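    A minimal sketch of that normality check in Python (scipy is assumed; the sample is simulated, and Shapiro-Wilk is just one common choice of test):

```python
# Check normality before reaching for the t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=5)
data = rng.normal(loc=20.0, scale=2.0, size=40)

stat, p_value = stats.shapiro(data)
if p_value > 0.05:
    print(f"p = {p_value:.3f}: no evidence of non-normality; proceed with the t-test")
else:
    print(f"p = {p_value:.3f}: data looks non-normal; consider a non-parametric test")
```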
     
     

    #106083

    Preeti S
    Participant

    Vineeth,
     
    Thanks for the reply. However, I would like to add a little here: if my p-value is = .05, which is quite possible in many situations, then I would take a larger sample and perform the test again. And if my p-value is < .05, then I will break the data down and perform the test again to look for significant differences within it (e.g. taking a sample of my bottom performers).
    Cheers
    Preeti S

    #106086

    aBBinMN
    Participant

    Alifa,
    One poster was exactly right about things to look for. Most of the others are partially right in that you need to consider risk and cost in deciding how to proceed. However, they’re missing the significance of a p-value that doesn’t give you the confidence you’re looking for.
    The reason your p-value is higher than you want is either because your model is inadequate/incorrect (missing factors or interactions) or because of variation in factors that are included in the model (inadequate controls) or both. You also need to consider whether you’ll be able to achieve adequate process control based on your model. If there’s a significant interaction or source of variation that you don’t know about or that you’re not controlling, you might correctly reject the null hypothesis but still end up with a process that isn’t capable.
    A p-value of .1, .15, or .2 indicates it’s probably important to control those factors. It also means that you’re not very confident that you can control the process adequately by controlling just those factors. If you can operate well within customer specs, great! Otherwise, keep looking for the other significant factors/interactions.
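    A small illustration of the missing-interaction case in Python (statsmodels is assumed; the data is simulated): a real effect can hide behind an unmodeled interaction and inflate the p-values of the main effects.

```python
# Fit main effects only, then add the interaction term, and compare p-values.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=9)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 * x1 * x2 + rng.normal(scale=1.0, size=n)  # pure interaction effect

X_main = sm.add_constant(np.column_stack([x1, x2]))
print(sm.OLS(y, X_main).fit().pvalues)  # main effects only: the signal is missed

X_full = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
print(sm.OLS(y, X_full).fit().pvalues)  # interaction included: clearly significant
```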

    #106099

    Powers
    Participant

    Brains,
    I believe your post was mis-stated…
    It should say, ” … We say that we don’t have enough evidence to reject the null, therefore we must declare the null hypothesis.”

    #106150

    Ram
    Participant

    Refer to a good textbook (e.g. Box and Hunter). The replies given by Butler, Yves and Ken are right on the money.
    Blind statements such as “if p<5%, reject the null” that we learned in school do more harm than good, because they replace good logical thinking and exploration with pat, canned statements. Taking more samples is always a good idea, albeit an expensive one.
    There is nothing holy about 5%. Do people know where the 5% comes from? (Hint: from tomatoes – by the person to whom the Box and Hunter book is dedicated.)
    Don’t believe everything you read on the bulletin board – including this posting.

    #106166

    Mikel
    Member

    Good call. Everyone should read page 109 of BHH to get a good perspective of significance levels.
    Remember that it starts with risk and how big of a difference you want to see – that gives you sample size. If you don’t do this, the p value at any level has no meaning.
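    A minimal sketch of that order of operations in Python (statsmodels is assumed; the risks and effect size below are arbitrary): pick the risks and the smallest difference worth seeing first, and that gives you the sample size.

```python
# Solve for the sample size implied by the chosen risks and effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n = analysis.solve_power(effect_size=0.5,  # smallest difference worth seeing (in SD units)
                         alpha=0.05,       # acceptable false-alarm risk
                         power=0.80)       # 1 - beta: chance of detecting it
print(f"required sample size per group: {n:.0f}")
```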

    #106183

    KBailey
    Participant

    Ram,
    Good call on the “blind statements.” I’m not sure what everyone else is learning in school, but it’s always been my understanding that the p<5% rule is "a rule of thumb" based on 95% confidence being pretty good for most purposes. We need to remember that a useful generalization is a useful generalization, and part of our job is to recognize when it doesn't apply.

    #106221

    Edwin Garro
    Participant

    Excellent question.
    This is the reason why we need the process owner really close to us during the statistical analysis. A p-value close to 0.05 calls for the expertise of the process owner to define the importance of the variable under study in practical terms. After all, that is why the analysis was conducted in the first place: to improve a process. Statistical significance is only a step towards the real goal.
    E!

    #106250

    Mikel
    Member

    wrong
     

    #106252

    Mikel
    Participant

    Remember that it starts with risk and how big of a difference you want to see – that gives you sample size. If you don’t do this, the p value at any level has no meaning.
    Just ask the pharmaceutical companies about finding effects using a huge sample size.

    #106289

    C K O
    Participant

    Alifa, you sure have gotten some great responses so far.  
    While each statistical test has an associated null hypothesis, as you know by now, the p-value is the probability that your sample could have been drawn from the population(s) being tested (or that a more improbable sample could be drawn), given the assumption that the null hypothesis is true. A p-value of .05, for example, indicates that you would have only a 5% chance of drawing the sample being tested if the null hypothesis were actually true.
    Null hypotheses are typically statements of no difference or no effect. A p-value close to zero signals that your null hypothesis is false, and typically that a difference is very likely to exist. Large p-values closer to 1 imply that there is no detectable difference for the sample size used. A p-value of 0.05 is a typical threshold used in industry to evaluate the null hypothesis. In more critical industries (healthcare, etc.) a more stringent, lower p-value may be applied.
    More specifically, the p-value of a statistical significance test represents the probability of obtaining values of the test statistic that are equal to or greater in magnitude than the observed test statistic. To calculate a p-value, collect sample data and calculate the appropriate test statistic for the test you are performing (for example, the t-statistic for testing means, or the Chi-Square or F statistic for testing variances). Using the theoretical distribution of the test statistic, find the area under the curve (for continuous variables) in the direction(s) of the alternative hypothesis, using a look-up table or integral calculus. In the case of discrete variables, simply add up the probabilities of events occurring in the direction(s) of the alternative hypothesis at and beyond the observed test statistic value.
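    That calculation, sketched in Python (scipy is assumed; the sample is simulated): compute the test statistic, then take the tail area under its theoretical distribution.

```python
# One-sample t-test "by hand", then checked against the canned routine.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=11)
sample = rng.normal(loc=10.3, scale=1.0, size=25)

mu0 = 10.0  # hypothesized mean
t_stat = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(len(sample)))
df = len(sample) - 1
p_value = 2 * stats.t.sf(abs(t_stat), df)  # two-sided tail area

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
print(stats.ttest_1samp(sample, popmean=mu0))  # should agree
```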

    #106297

    Szentannai
    Member

    Hi Alifa,
    maybe this is already a bit redundant, but I’d start from the interpretation of the p-value, which I understand to be the following:
    P is the probability of measuring a value at least as extreme as the one you actually got in a population where there is no effect (i.e. the null hypothesis is true and you got the said value by a fluke of sampling error).
    So the question is – as many have posted already – how much risk are you willing to take to commit to a direction that could in fact be a false alarm?
    Would you feel comfortable going to your boss and asking him to spend X dollars when there is a 5% chance that you will be proven wrong later? Would you feel comfortable if the chance of being wrong were 10%? 15%? I think this is the question (or a variant thereof) that you need to ask yourself. And of course it is much better to ask this question before you actually start the measurement.
    Regards
    Sandor
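    That “fluke of sampling error” is easy to simulate – a quick sketch in Python (numpy/scipy assumed; all numbers arbitrary): when the null is true, roughly 5% of experiments still produce p < 0.05.

```python
# Monte Carlo estimate of the false-alarm rate under a true null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
trials = 10_000
false_alarms = 0

for _ in range(trials):
    a = rng.normal(loc=0.0, scale=1.0, size=30)  # same population...
    b = rng.normal(loc=0.0, scale=1.0, size=30)  # ...so no real effect
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_alarms += 1

print(f"false alarm rate: {false_alarms / trials:.3f}")  # close to 0.05
```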
     
     

    #106604

    Lomax
    Participant

    Hi Everybody. This is my first post. Correct me if I am wrong!
    The p-value is merely a measure. In very simple terms, it means that you are accepting or rejecting the hypothesis at a (1−p)×100% confidence level.
    E.g. if the p-value is between 0.01 and 0.05, the decision has a confidence between 95% and 99%. This is the beauty of the p-value. Now, whether to go for a p of 0.01, 0.05, 0.1, or any other value for that matter, will depend on the severity of the decision and the robustness of the sample. A call on it has to be made by the project leader and process owner in consultation with the affected parties. There are certain “rules of thumb”, but of course, who knows your process better than you do?

    #169090

    Pankaj Bansod
    Participant

    Hi,
    Pankaj,
    This is Pankaj Bansod from Hyderabad.
    Please contact me with your email address and let’s have more discussion on Six Sigma.
