significance-misleading?


  • #52767

    howe
    Participant

    I have a question about whether data really should not be interpreted simply because a result is deemed not significant. The paragraph below is copied from "Linear Regression: Making Sense of a Six Sigma Tool". I am using it as an example to illustrate my confusion.
    "In this case, the p-value is 0.134. If alpha is set at 0.05, then one would have to reject this regression line as having a valid fit because p-value is greater than 0.05. This means that the model is not significant. The R-Sq value – though looking quite good – is of no value and should not be interpreted. Those who did this regression will need to collect more data, re-do the regression and then see whether the p-value is now significant before they interpret the R-Sq value."
    My question relates to the statement "The R-Sq value – though looking quite good – is of no value and should not be interpreted." This seems inaccurate. That is, couldn't I conclude that there is a 13.4% chance that the regression line does not represent the data? Couldn't I conclude that it is significant if I arbitrarily selected alpha = 0.14? If so, I feel that calling something "significant" or "not significant" is misleading. This is not a yes-or-no, go/no-go condition. The p-value simply shows you the risk of concluding there is a difference when there really isn't one.
    Rather than drawing an arbitrary line in the sand (alpha risk) and ignoring the 13.4%, is it not accurate to conclude that there is an 86.6% chance that the regression line does explain the data? If so, there may be no need to collect more data as long as I am OK with a 13.4% chance of being wrong.
    Any input/clarification would be much appreciated.
    Thank you
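    For anyone who wants to see where these numbers come from, below is a minimal sketch in Python (the data are made up purely for illustration and are not from the article): the slope p-value and the R-Sq value are computed from the same fit, and the go/no-go "significance" call only appears when the p-value is compared against a pre-chosen alpha.

```python
# Minimal sketch with made-up data: fit a simple linear regression and
# report the slope p-value and R-squared from the same fit.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # hypothetical X
y = np.array([2.1, 2.9, 3.4, 4.8, 4.6, 6.1, 6.0, 7.2])   # hypothetical Y

fit = stats.linregress(x, y)

print(f"slope     = {fit.slope:.3f}")
print(f"p-value   = {fit.pvalue:.3f}")       # tests H0: slope = 0
print(f"R-squared = {fit.rvalue ** 2:.3f}")  # fraction of variance explained

# The yes/no "significance" decision is only a comparison against a
# pre-chosen alpha; the p-value itself is a continuous risk measure.
alpha = 0.05
print("significant at this alpha" if fit.pvalue <= alpha
      else "not significant at this alpha")
```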

    #186039

    Mikel
    Member

    You are correct. I personally set my p-values = .5

    #186041

    Jered Horn
    Participant

    Please explain how you “set” your p-values?

    #186042

    Darth
    Participant

    I think Stan got a little bit excited. You are correct in that you don't set p values; they are the calculated alpha risk determined by the data and the test. Possibly Stan meant to say that he sets his acceptable risk at .5 and then compares the actual risk (p value) with the risk he is willing to tolerate for a Type 1 error. I agree that the default of .05 we hear about is too strict and often prevents one from seeing something when it truly exists. Of course, good management of your Beta risk and Power needs to be considered as well.

    #186043

    howe
    Participant

    P-values are not set, they are calculated. I did not say I set them.
    Alpha values are set based on whatever level of risk you can live with of being wrong with your conclusions.

    #186044

    Darth
    Participant

    Easy big guy…the references to “setting p values” were directed towards Stan’s comment.

    #186046

    Severino
    Participant

    The nice part about statistics is that you generally do not need to rely on numbers alone. Why don't you go ahead and plot that bad boy and see what it looks like? The p value isn't even worth the number of decimal places it occupies on its own.

    #186057

    howe
    Participant

    Thank you for your input.
    Can you expand on what you mean when you say beta risk and power need to be considered as well? Can you provide a reference/example that shows how they should be considered when assessing whether a regression model accurately explains the relationship between X and Y?

    #186059

    Szentannai
    Member

    Hi Mike,
    I think you can pick any alpha level you think you can live with, with the condition that you do it BEFORE you calculate your p. So you might say that, based on the weight of the problem, you are prepared to accept a maximal risk of 10% of being wrong – THEN you calculate the p value and you stay with H0 if p > 0.1. Doing it the other way round is called, IIRC, "data snooping" and is generally a quite dishonorable practice. It is equivalent to saying "my regression line is so nice that I will accept a 15% risk, just to be able to keep it :)))".
    Concerning the R-squared: if you decided to stay with H0 (that is, you say that you see no evidence of a connection between X and Y), it would be nonsensical to say that the strength of the connection you do not believe exists is, say, 83%.

    #186061

    howe
    Participant

    Hello Sandor, I appreciate your input.
    Your response exemplifies my confusion. Why does it matter when I decide what maximal risk I can live with? Does this change the value of the calculated alpha risk (p-value)? I do not wish to do anything "dishonorable", but I do not want to say a relationship does not exist at all if I calculate a p-value that is higher than my subjective, predefined alpha level.
    For example, say I plan to conduct a regression study. I say to myself before collecting any data that I can live with a 5% chance of drawing the wrong conclusion. I then run the experiment, collect the data and calculate the p-value. The p-value comes out at 0.051. Why should I rerun the experiment or collect more data until the p-value gets below 0.05? The benefit of avoiding being dishonorable and not data-snooping does not seem to outweigh the added cost/time of getting new/additional data.
    It seems to me that selecting the maximum amount of alpha risk we can live with is very subjective. The very fact that the standard alpha level is 0.05 implies the subjectivity of this risk. Why would the standard amount of risk we can live with be the same for every situation? As much as rejecting or accepting the null hypothesis makes it black and white, the actual risk (p-value) is not 0 or 1; it varies between 0 and 1.
    Again, I value your input, and further clarification would be appreciated :)

    #186062

    Lee
    Participant

    Just my approach — for which I find nothing written —
    When I start to process data I first try to determine what the physical process is, because the fundamentals/physics behind that process should reveal what the "real"/accepted variables (x's) are. In those cases, I am essentially banking my reputation as a process improvement person on the work of others who are more knowledgeable than I am. In essence I am taking the approach that I am very unlikely to be wrong on the regression equation. Now, if the p values are > .05 or the r-squared value is not > .9 to .95, then I look for an additional variable in the data (like shift-to-shift differences, measurement accuracy, variables that are not well controlled, etc.) and improve on it.
    When I can find little on the process fundamentals/physics and branch out on my own (quite often), then I will accept p values of up to around 0.2 and r-squared values of over 0.8. If the process involves a lot of people-determined x's, then I accept p values of around 0.5 (in a lot of bio-med and social services work they are doing quite well if they get r-squared > 0.5).
    The r-squared and p values I accept are largely determined by my guide: if I am wrong on the x's, then the predictive capability is very poor. Poor predictive values mean that I will lose face — i.e., the number of times I'm called upon to solve problems will drop. To advance the processes here I need to have a high batting average.

    #186067

    Szentannai
    Member

    Hi Mike,
    This is indeed a difficult question and IMHO there are several distinct aspects.
    The idea of fixing the alpha level before the measurement is devised to avoid the "data snooping" fallacy. Ideally one should be able to say what risk levels are acceptable to him/her in a project. So, if the p-value is "much" above the alpha level, then the case should be clear, and changing the alpha level to achieve a significant result could rightly be frowned upon :)
    On the other hand, I think the p-value is in the end an estimate, so it will necessarily have a confidence interval. This means that the values we see are no better than the point estimate of a mean, for example. It also means that minute differences like 0.051 instead of 0.05 play no role at all.
    In the end I think the clean way of addressing this problem would be to go ahead and reject the Null if p is close to the alpha, though bigger. (How close is "close" is a different question, though.) If the p value is definitely greater than the alpha level – like alpha being 0.05 and p = 0.07, for instance – AND it makes sense for the project, I would re-negotiate the alpha level and take another sample, with the new alpha level fixed again in advance.
    The data snooping would be to use the same data set to re-negotiate the alpha level AND to prove that at that alpha level the connection is significant. If you can do it with independent samples, it would be OK IMHO.
    Regards
    Sandor
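    Sandor's point that the p-value is itself an estimate can be made concrete with a rough simulation (all numbers below are assumed, not taken from this thread): repeated samples from one and the same weak X/Y relationship give p-values that scatter widely, which is why a difference like 0.051 versus 0.050 carries little weight.

```python
# Rough simulation, all parameters assumed: draw many samples from a single
# fixed X/Y relationship and see how much the regression p-value varies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, true_slope, noise_sd = 15, 0.18, 1.0   # assumed; chosen to sit near the borderline

pvalues = []
for _ in range(1000):
    x = rng.uniform(0, 10, n)
    y = true_slope * x + rng.normal(0, noise_sd, n)
    pvalues.append(stats.linregress(x, y).pvalue)

pvalues = np.array(pvalues)
print(f"p-value spread (10th-90th percentile): "
      f"{np.percentile(pvalues, 10):.3f} to {np.percentile(pvalues, 90):.3f}")
print(f"fraction of samples with p <= 0.05: {(pvalues <= 0.05).mean():.2f}")
```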

    #186069

    Darth
    Participant

    Mike, I agree that there is nothing wrong with waiting to decide what you can live with until after the p value is calculated. There is nothing sacred about your acceptable level of alpha risk. This is a practical selection, not a statistical one. In reality, what's the real difference between accepting at 5% or 7% or 10%? Of course, if the p value turns out to be .80 and in your mind you were looking for something around 10-15%, then changing your threshold to .80 would be silly.
    As for Beta and Power: if I fail to reject the null, it can be for one of two reasons… either there is no difference/change, or I don't have the Power to see it. Before getting too wrapped up in a high p value and failing to reject the null, you should double-check Power, which is defined as 1 minus your Beta risk. In reality, you should be selecting sample sizes with a sufficient and acceptable level of Power before you run your tests. The solution to low power is a greater sample size.
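    As a small illustration of the sample-size side of this (a two-sample t-test is used here only because it has a convenient power routine; the effect size and targets are assumptions, not numbers from the thread), the alpha/Power trade-off looks like this:

```python
# Illustrative only: relating alpha, Power (1 - Beta) and sample size
# for a two-sample t-test with an assumed effect size.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5   # assumed Cohen's d, purely for illustration

for alpha in (0.05, 0.10):
    n = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=0.80)
    print(f"alpha = {alpha:.2f}: about {n:.0f} observations per group "
          f"for 80% Power (Beta = 0.20)")
```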

    #186071

    howe
    Participant

    Thank you Darth.
    In Minitab, how do I determine an adequate sample size for a regression equation, assuming a given beta risk?

    0
    #186072

    Szentannai
    Member

    Hi Darth, I'm afraid that would be a mistake. It's like shooting first and drawing the target around the holes afterwards. The point is that the acceptable risk level must be determined by the project environment, the height of the stakes in the project, the psychology of the Belt, whatever, BUT not by the data set. There is nothing that speaks against fixing the risk level before the p calculation, as all the factors playing into it must be known beforehand. So the only reason to fix the alpha after the p gets calculated is to adapt the risk level to the measurement. That cannot be a healthy policy over the long term, IMHO.
    Regards, Sandor

    #186086

    Darth
    Participant

    Unfortunately, Minitab doesn't do Power and Sample Size calculations for regression or nonparametrics. I downloaded this program and it seems pretty good for doing all kinds of study calculations. Take a look: http://www.studysize.com/download.htm
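    If the software at hand does not cover regression, one workaround is a quick Monte Carlo sketch like the one below (every input, i.e. slope, noise, alpha and sample size, is an assumption the user supplies): simulate data under the effect you hope to detect and count how often the slope test comes out significant.

```python
# Simulation-based power estimate for simple linear regression.
# All inputs (slope, noise_sd, alpha) are user-supplied assumptions.
import numpy as np
from scipy import stats

def simulated_power(n, slope=0.5, noise_sd=2.0, alpha=0.05, reps=2000, seed=0):
    """Fraction of simulated datasets of size n whose slope test gives p <= alpha."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x = rng.uniform(0, 10, n)
        y = slope * x + rng.normal(0, noise_sd, n)
        if stats.linregress(x, y).pvalue <= alpha:
            hits += 1
    return hits / reps

for n in (10, 20, 40, 80):
    print(f"n = {n:3d}: estimated power ~ {simulated_power(n):.2f}")
```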

    #186091

    Severino
    Participant

    The alpha risk is his.
