Comparing Regression Coefficients within One Sample

This topic contains 10 replies, has 6 voices, and was last updated by McNabb 10 years, 1 month ago.


July 24, 2007 at 2:16 pm #130502
Hello:

I have two questions:

1. If you have two measures that theoretically measure the same construct (e.g., IQ) and you want to compare which is a more powerful predictor of an outcome (e.g., Academic Achievement), would you create two separate regression equations, or would you include both in one equation (multiple regression with two IVs)? My concern is that if you include them as a set, you will most likely have a multicollinearity problem.

2. Based on your answer to question one, how would you compare the regression coefficients? If you decided to use two separate equations, would you compare the standardized or unstandardized regression coefficients? All data come from one sample.

Any help would be greatly appreciated,

Ryan

July 24, 2007 at 4:46 pm #130511

Just a wag, but you are dealing with two continuous independent variables (the two IQ measures). Why not subgroup the data, run two separate simple linear regressions with the corresponding Pearson coefficients, and then compare?

Or use a variable Y (Academic Performance) and Discrete X (IQ01 and IQ02) to run a simple comparison of means test?

Both answer your question of which is a more effective method, no?

Good luck

July 24, 2007 at 7:48 pm #130519

Ryan,

1. The two regressions should be run separately. You would expect high multicollinearity because they tap into the same construct. This is desirable from a multitrait-multimethod point of view, but could be fatal for the establishment of predictive validity.

2. The preferred method is to compare the unstandardized regression coefficients. Reasons: the unstandardized coefficients are less affected by differences in the variances of the independent measures (the IQ tests; this should not be a problem if their construct validity has been established). They are also less affected by differences in measurement error in the dependent variable (the achievement test, which you would have to assess via the appropriate reliability measure: split-half, test-retest, etc.).
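The two-separate-regressions approach can be sketched in a few lines. This is a minimal illustration with invented toy data (the variable names and numbers are made up, not from the thread). Note the caveat: the naive z statistic below treats the two slope estimates as independent, which they are not when both regressions come from the same sample, so it should be read as a rough screen rather than a proper test:

```python
import math

def slope_and_se(x, y):
    """Unstandardized slope b = Sxy/Sxx and its standard error
    from a simple linear regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    # Residual mean square with n - 2 degrees of freedom
    mse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return b, math.sqrt(mse / sxx)

# Toy data: two IQ-type predictors (x1, x2) of one achievement score (y)
y  = [75, 82, 93, 68, 88, 79, 95, 71]
x1 = [100, 108, 118, 92, 112, 104, 121, 96]
x2 = [98, 110, 115, 95, 109, 101, 124, 93]

b1, se1 = slope_and_se(x1, y)
b2, se2 = slope_and_se(x2, y)

# Naive z for the difference in unstandardized slopes (assumes the two
# estimates are independent, which overstates precision for one sample)
z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
print(round(b1, 3), round(b2, 3), round(z, 3))
```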

I assume this is a question on a test in classical test theory. Good Luck!

Compare my suggestions with Nunnally (1978). I don’t have it in front of me and memory can be quite deceiving.

July 24, 2007 at 8:11 pm #130520

Ryan,

An alternative method that includes both measurement and structural relationships, and avoids the pitfalls of multicollinearity, is to run a structural equation model. The two IQ measures are treated as latent variables, and standardized regressions would go from the two IQ variables to the latent construct “achievement test”. This approach takes error variance, collinearity, and standardization of the coefficients into account, as well as discriminant and concurrent validity. At the same time, you could also assess the reliability, and the impact of a lack thereof on the size of the regression, via a follow-up correction for attenuation due to unreliability. This is a more stringent approach than the classical one that I assume underlies the way you phrased your questions.

July 25, 2007 at 2:15 am #130527

Thank you. Your response has been very helpful. I hope it is okay, but I have one more question! Do you know of any stats programs that have a built-in feature (not requiring knowledge of syntax) to test the difference between two unstandardized regression coefficients? Although I prefer SPSS, if necessary I am familiar with SAS, but would probably need syntax to run such an analysis.

Thanks again,

Ryan

July 25, 2007 at 4:59 am #130528

You can do this by hand. Look up Fisher’s z transformation. In essence, the estimates (r) are transformed into z scores, and a confidence bound is calculated to see whether the two values fall within it. If you have trouble finding the formula, let me know.
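The by-hand procedure described above can be sketched as follows. This is an illustration with invented numbers; it uses the standard independent-samples formula, and as noted later in the thread, when both correlations come from the same sample they are dependent, so this should be treated as an approximation:

```python
import math

def fisher_z(r):
    """Fisher's r-to-z transformation: z = atanh(r) = 0.5 * ln((1+r)/(1-r))."""
    return math.atanh(r)

def compare_correlations(r1, n1, r2, n2):
    """z statistic for the difference between two correlations,
    assuming independent samples of sizes n1 and n2."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Hypothetical example: r = .60 vs r = .40, each from n = 50
z = compare_correlations(0.60, 50, 0.40, 50)
print(round(z, 3))  # |z| > 1.96 would indicate a difference at the 5% level
```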

July 25, 2007 at 10:57 am #130534

Thanks for your continued thoughtful responses. I located SPSS syntax to conduct such an analysis. The problem is that this approach compares the standardized beta coefficients, or r.

July 25, 2007 at 12:20 pm #130536

Ryan,

If there is no significant difference in variance, which you can easily test for with a simple F-test (or a Levene test if necessary), use the SPSS procedure that you located (it’s probably a dummy-variable type of procedure). Cohen (1983) has the justification for using the standardized coefficients under the condition of equal variance, as does Baron & Kenny (1986).

July 25, 2007 at 8:56 pm #130562

Great. Makes sense. Thank you for all your help!
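The equal-variance check mentioned above can be sketched with a quick variance-ratio F statistic. This is a toy illustration (the data are invented); you would compare the ratio to a critical value from an F table with (n1 - 1, n2 - 1) degrees of freedom, or use a Levene test if normality is doubtful:

```python
import statistics

def variance_ratio(x1, x2):
    """F ratio of sample variances (larger over smaller), the statistic
    for a quick two-sided F-test of equal variances."""
    v1, v2 = statistics.variance(x1), statistics.variance(x2)
    return max(v1, v2) / min(v1, v2)

# Invented scores from two IQ-type measures on the same sample
iq_a = [100, 108, 118, 92, 112, 104, 121, 96]
iq_b = [98, 110, 115, 95, 109, 101, 124, 93]
print(round(variance_ratio(iq_a, iq_b), 3))
```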

July 25, 2007 at 11:04 pm #130569

Ryan,

Just for future reference, below are links to an article that outlines the procedure used to compare two correlations, and a table that lets you calculate the r-to-z transformation automatically. One more suggestion: when you compare the regression equations of the two IQ tests, make sure to also review the difference between the IQ tests in the intercept. Good luck.

http://www.uoregon.edu/~stevensj/MRA/correlat.pdf

http://faculty.vassar.edu/lowry/tabs.html

May 14, 2008 at 6:29 pm #145890

I have two variables that predict the same outcome (infarct size, 0-100%) and want to determine which is a more powerful predictor of infarct size: Q waves (0-10) or fQRS complexes (0-10). How do I compare the regression coefficients?

Specifically, the regression coefficients are 1.21 vs. 0.96, with the same R² of 0.07. All data come from one sample, and I would prefer to run the analysis in SPSS.

Thanks!

