
Comparing Regression Coefficients within One Sample


This topic contains 10 replies, has 6 voices, and was last updated by Mary McNabb 9 years ago.

Viewing 11 posts - 1 through 11 (of 11 total)

    I have two questions:
    1. If you have two measures that theoretically tap the same construct (e.g., IQ) and you want to compare which is the more powerful predictor of an outcome (e.g., academic achievement), would you create two separate regression equations, or would you include both in one equation (multiple regression with two IVs)? My concern is that if you include them as a set, you will most likely have a multicollinearity problem.
    2. Based on your answer to question one, how would you compare the regression coefficients? If you decided to use two separate equations, would you compare the standardized or unstandardized regression coefficients? All data come from one sample.
    Any help would be greatly appreciated,
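
    A quick way to see whether the multicollinearity worry is real is to check the variance inflation factor of the two predictors. A minimal Python sketch (not from the thread; variable names and data are illustrative):

    ```python
    # Sketch: checking multicollinearity between two predictors that tap the
    # same construct. With exactly two predictors, VIF = 1 / (1 - r^2),
    # where r is their correlation. Data below are synthetic placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    iq_a = rng.normal(100, 15, n)          # first IQ measure
    iq_b = iq_a + rng.normal(0, 5, n)      # second measure of the same construct

    r = np.corrcoef(iq_a, iq_b)[0, 1]
    vif = 1.0 / (1.0 - r**2)
    print(f"r = {r:.3f}, VIF = {vif:.1f}")
    ```

    A VIF well above 5–10 is the usual rule-of-thumb signal that the two predictors should not go into one equation together.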


    Just a WAG, but you are dealing with two continuous independent variables (the two IQ measures) and one continuous dependent variable (academic achievement). Why not subgroup the data, run two separate simple linear regressions with their corresponding Pearson coefficients, and then compare?
    Or use a continuous Y (academic performance) and a discrete X (IQ01 vs. IQ02) to run a simple comparison-of-means test?
    Both answer your question of which is the more effective predictor, no?
    Good luck
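
    The first suggestion above can be sketched in a few lines of Python; the data here are synthetic placeholders, not from the thread:

    ```python
    # Sketch: two separate simple linear regressions of the outcome on each
    # IQ measure, comparing the resulting Pearson correlations.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 150
    achievement = rng.normal(75, 10, n)
    iq1 = 0.8 * achievement + rng.normal(0, 6, n)   # built to predict more strongly
    iq2 = 0.5 * achievement + rng.normal(0, 9, n)   # built to predict more weakly

    fit1 = stats.linregress(iq1, achievement)
    fit2 = stats.linregress(iq2, achievement)
    print(f"IQ1: b = {fit1.slope:.2f}, r = {fit1.rvalue:.2f}")
    print(f"IQ2: b = {fit2.slope:.2f}, r = {fit2.rvalue:.2f}")
    ```

    Comparing the two r values (or slopes) still requires a formal test, which is where the Fisher z approach discussed later in the thread comes in.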


    1. The two regressions should be run separately. You would expect high multicollinearity because the measures tap into the same construct. This is desirable from a multitrait-multimethod point of view, but could be fatal for establishing predictive validity.
    2. The preferred method is to compare the unstandardized regression coefficients. Reasons: the unstandardized coefficients are less affected by differences in the variances of the independent measures (the IQ tests … this should not be a problem if their construct validity has been established). They are also less affected by differences in measurement error of the dependent variable (the achievement test … which you would have to assess via the appropriate reliability measure: split-half, test-retest, etc.).
    I assume this is a question on a test in classical test theory. Good Luck!
    Compare my suggestions with Nunnally (1978). I don’t have it in front of me, and memory can be quite deceiving.
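
    The variance sensitivity mentioned in point 2 follows from the algebraic link between the two kinds of coefficients: the standardized beta is just the unstandardized slope rescaled by the sample standard deviations, beta = b · (s_x / s_y). A small sketch with placeholder data:

    ```python
    # Illustration: standardized beta absorbs the sample variances,
    # which is why it shifts when the predictors' variances differ.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    x = rng.normal(100, 15, 100)
    y = 0.4 * x + rng.normal(0, 8, 100)

    fit = stats.linregress(x, y)
    beta = fit.slope * x.std(ddof=1) / y.std(ddof=1)

    # Same number you get by regressing z-scores on z-scores:
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    beta_z = stats.linregress(zx, zy).slope
    ```

    In simple regression this standardized slope equals the Pearson r, which is why the later posts treat "comparing betas" and "comparing correlations" interchangeably.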


    An alternative method that includes both measurement and structural relationships, and avoids the pitfalls of multicollinearity, is to run a structural equation model. The two IQ variables are treated as latent variables, and standardized regression paths run from the two IQ variables to the latent construct “achievement.” This approach takes error variance, collinearity, and standardization of the coefficients into account, as well as discriminant and concurrent validity. At the same time, you could also assess reliability, and the impact of any lack thereof on the size of the regression coefficients, via a follow-up correction for attenuation. This is a more stringent approach than the classical one that I assume underlies the way you phrased your questions.


    Thank you. Your response has been very helpful. I hope it is okay, but I have one more question! Do you know of any stats programs that have a built-in feature (not requiring knowledge of syntax) to test the difference between two unstandardized regression coefficients? I prefer SPSS, but I am also familiar with SAS if necessary; I would probably need syntax to run such an analysis.
    Thanks again,


    You can do this by hand. Look up Fisher’s z transformation. In essence, the estimates (r) are transformed into z scores, and a confidence bound is calculated to see whether the two values fall within it. If you have problems finding the formula, let me know.
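
    The "by hand" calculation is short enough to sketch. Note one caveat: the formula below is the standard independent-samples version; since the poster's correlations come from one sample, a dependent-correlations test (e.g., Steiger's) would strictly be needed, but the transformation itself is the same. The sample values are illustrative:

    ```python
    # Fisher's r-to-z by hand: transform each correlation, then compare
    # with a normal test (independent-samples version).
    import math
    from scipy.stats import norm

    def fisher_z(r):
        return 0.5 * math.log((1 + r) / (1 - r))   # arctanh(r)

    def compare_correlations(r1, n1, r2, n2):
        """Two-sided z-test for H0: rho1 == rho2 (independent samples)."""
        se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
        z = (fisher_z(r1) - fisher_z(r2)) / se
        p = 2 * (1 - norm.cdf(abs(z)))
        return z, p

    z, p = compare_correlations(0.60, 100, 0.40, 100)
    print(f"z = {z:.3f}, p = {p:.4f}")
    ```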


    Thanks for your continued thoughtful responses. I located SPSS syntax to conduct such an analysis. The problem is that this approach compares the standardized beta coefficients, or r.


    If there is no significant difference in variance, which you can easily test for by using a simple F-test or a Levene test if necessary, use the SPSS procedure that you located (it’s probably a dummy-variable type of procedure). Cohen (1983) gives the justification for using the standardized coefficients under the condition of equal variance, as does Baron & Kenny (1986).
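
    The variance check mentioned above takes one call in Python; a sketch with synthetic placeholder data:

    ```python
    # Levene's test for equal variances of the two predictors, as a
    # precondition for comparing their standardized coefficients.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    iq1 = rng.normal(100, 15, 120)
    iq2 = rng.normal(100, 15, 120)     # equal variances by construction

    stat, p = stats.levene(iq1, iq2)   # H0: equal variances
    print(f"Levene W = {stat:.2f}, p = {p:.3f}")
    ```

    A large p means the equal-variance assumption survives and the standardized comparison is defensible; a small p would argue for comparing the unstandardized coefficients instead, as recommended earlier in the thread.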


    Great. Makes sense. Thank you for all your help!

    Psychometrician helpful links

    Just for future reference, attached are links to an article that outlines the procedure used to compare two correlations, and a table that automatically calculates the r-to-z transformation. One more suggestion: when you compare the regression equations of the two IQ tests, make sure to also compare their intercepts. Good luck.



    I have two variables that measure the same construct and want to determine which is the more powerful predictor of infarct size (0–100%): Q waves (0–10) or fQRS complexes (0–10). How do I compare the regression coefficients?
    Specifically, the regression coefficients are 1.21 vs. 0.96, with the same R² of 0.07. All data come from one sample, and I would prefer to run the analysis in SPSS.


The forum ‘General’ is closed to new topics and replies.
