# Regression analysis


This topic contains 4 replies, has 3 voices, and was last updated by bbusa 9 years, 4 months ago.

- June 12, 2010 at 6:18 am #53484

van Helden (Participant, @joppie)

I am working on a manufacturing problem where we are producing off-spec material. Besides an MSA, I am also looking at the calibration curves of the analyser. The lab was working with a fairly old calibration curve. I worked with one of the analysts and prepared a new calibration curve over the full range of measurements (a similar range to the old one). Graphically I can see a small difference between the curves, but what statistical test can I use to compare the two curves?

Thanks

Joppie

- June 14, 2010 at 7:17 pm #190337

Paulonis (Participant, @paulonis)

If you have a parametric calibration (a specific linear or nonlinear equation with parameters), there are a few ways to proceed. A simple option that is better than guessing is described here:

http://www.psy.surrey.ac.uk/cfs/p5.htm

In order to use this method, you need to know the parameter values and standard errors of the parameter values for the old and new calibration.

If you have access to the old and new data that comprise the calibration sets, you can be more rigorous by performing a calibration regression with all the data and including dummy variables to test significance of the old and new data with respect to each of the calibration parameters.

See this example:

http://www.ats.ucla.edu/stat/spss/faq/compreg2.htm

In the example, the dummy variables were used to test male/female effects. In your case, the dummy variables would be old/new effects.

If you have a non-parametric calibration (e.g. a spectrometer calibrated at 50 wavelengths), I don’t think the above methods would be of value.

- June 15, 2010 at 10:53 am #190343

van Helden (Participant, @joppie)

Thanks for the link, Paulonis. Interesting article, but the issue I am facing is slightly different.

Let’s assume I have 2 linear calibration curves (Y = aX + b) and both have the same slope (a), but there is a shift in the data: the intercepts (b) are different. And assume there is some error (variation), so R^2 = 0.8. Which test should I use?

Now, in reality the best fit to the curves is a 2nd-order polynomial. Would that change the approach?

Thanks

Joppie

- June 15, 2010 at 12:14 pm #190344

Paulonis (Participant, @paulonis)

The issue you are facing is exactly the one the analysis methods in the links address.

The first method could be applied to either the slope or intercept parameters of the equation. To test the hypothesis that the intercepts are equal, compute the Z-statistic exactly as shown in the link (it even uses b in the equation, how coincidental). You will need both the intercept values and the standard errors of the intercept from both the old and new calibrations to do this. Look up the p-value corresponding to the Z-statistic in a Z table. If the p-value is sufficiently low (say below 0.05), you can reject the null hypothesis and conclude that the intercepts are not equal.

The fact that you have error in the calibrations (R^2 = 0.8) is the expected case, and the overall error of the fit is reflected in the standard errors of the parameters in the linear model. This variation information is used directly in the statistical test.
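If you fit the calibration lines yourself, those standard errors come straight out of the regression. A minimal sketch with NumPy on simulated data (in practice you would pass your own measured X and Y values): `np.polyfit` returns the parameter covariance matrix when `cov=True`, and the square roots of its diagonal are the standard errors used in the Z-test.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 25)
y = 1.2 + 2.0 * x + rng.normal(0, 0.9, 25)  # noisy line, R^2 well below 1

# Fit Y = a*X + b; cov is the 2x2 parameter covariance matrix.
(a, b), cov = np.polyfit(x, y, 1, cov=True)
se_a, se_b = np.sqrt(np.diag(cov))
print(f"slope {a:.3f} +/- {se_a:.3f}, intercept {b:.3f} +/- {se_b:.3f}")
```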

This method applies equally to the 2nd-order polynomial. However, as described in the link, it breaks down as the number of parameters increases, because the assumption that the other parameters are constant between old and new is more likely to be violated.

The second method works well for a linear or quadratic calibration. In your example, to check the intercept of a linear fit you would add a dummy variable to the regression, equal to 1 for new data and 0 for old data. If the dummy variable proves to be a significant effect in the regression, then the intercepts are statistically different, as represented by the p-value of the dummy variable. The mean difference in intercept between new and old is the coefficient of the dummy variable.
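A sketch of that pooled dummy-variable regression with plain NumPy/SciPy, on simulated data (the true intercept shift here is 1.0; in practice you would pool your actual old and new calibration sets):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 30

# Simulated calibrations: same slope, intercepts 1.0 (old) vs 2.0 (new).
x_old = np.linspace(0, 10, n)
y_old = 1.0 + 2.0 * x_old + rng.normal(0, 0.5, n)
x_new = np.linspace(0, 10, n)
y_new = 2.0 + 2.0 * x_new + rng.normal(0, 0.5, n)

# Pooled model  y = b0 + b1*x + b2*d,  with d = 0 for old, 1 for new data.
x = np.concatenate([x_old, x_new])
y = np.concatenate([y_old, y_new])
d = np.concatenate([np.zeros(n), np.ones(n)])
X = np.column_stack([np.ones_like(x), x, d])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = len(y) - X.shape[1]
cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
t_dummy = beta[2] / np.sqrt(cov[2, 2])      # t-test on the dummy coefficient
p_dummy = 2 * stats.t.sf(abs(t_dummy), dof)
print(f"intercept shift = {beta[2]:.3f}, p = {p_dummy:.4f}")
```

The coefficient on d estimates the old-to-new intercept shift, and its p-value answers the original question directly; adding an x*d interaction column would test the slopes the same way, and x^2 terms extend it to the quadratic case.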

- June 16, 2010 at 11:06 am #190348

One method is to compare the linearity of the old system and the new system. Linearity is a measure of the bias with respect to the operating range of the gage.
