Calculating Tolerance Interval After Johnson Transformation
This topic has 11 replies, 5 voices, and was last updated 4 years ago by John Noguera.
February 25, 2017 at 7:31 am #55601
Alan J (@alan.jacob):
I have a set of data that is not normal. Using the distribution identification option in Minitab, I found that the Johnson transformation brings the p-value to approximately 0.88.
When I apply the tolerance interval option in Minitab to this transformed data, I get the tolerance interval in terms of the transformed values.
I took the formula used for the Johnson transformation from Minitab and calculated its inverse to get the tolerance interval in terms of the original data.
But the original-scale tolerance interval I got is not equally spaced above and below the mean of the original data. Why is this? Is this methodology wrong? Am I missing something?
Transformed tolerance interval: -3.603 to 3.559
Inverse of transformed interval: 3.91623 to 10.5845
Original mean: 6.71460

February 25, 2017 at 8:57 am #200678
MBBinWI (@MBBinWI):
@mparet – do you want to tackle this?
February 25, 2017 at 10:10 pm #200680
Alan J (@alan.jacob):
Anybody, please help.
February 27, 2017 at 2:26 am #200692
Mike Bonnice (@mbonnice):
The transformation altered the data so that the higher values moved further from the mean in order to fit the tail of a normal distribution, while the lower values needed little alteration to fit. This means that, for equal probability of falling outside each limit, the upper limit must sit further from the mean than the lower limit does.
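Mike's point holds for any monotone nonlinear transform, not just Johnson's. A minimal sketch, using z = ln(x) as a stand-in for the fitted transform (all values illustrative, not the thread's fitted parameters):

```python
import math

# z = ln(x) as a stand-in for the fitted Johnson transform: both are
# monotone but nonlinear, which is all the effect requires.
def inverse(z):
    return math.exp(z)

mean_z = 0.0        # hypothetical mean in transformed units
half_width = 0.5    # symmetric half-width in transformed units

lower = inverse(mean_z - half_width)
center = inverse(mean_z)
upper = inverse(mean_z + half_width)

# Symmetric in z, asymmetric back on the original scale:
print(round(center - lower, 4))  # 0.3935
print(round(upper - center, 4))  # 0.6487
```

The interval is perfectly symmetric in transformed units, yet after back-transformation the upper limit sits noticeably further from the center than the lower one, just as in Alan's numbers.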
February 27, 2017 at 7:27 am #200717
GG (@Martin-Gibson):
How did you calculate the inverse function of the Johnson transformation? I'm curious.
Thanks,
GG

February 27, 2017 at 7:49 am #200718
Alan J (@alan.jacob):
@mbonnice
I understood the first part; can you explain the second part a little more, if you don't mind?
@Martin-Gibson
I used the transformation formula displayed in the transformation summary in Minitab, set it equal to the calculated (transformed) tolerance limit, and solved it on this site:
http://m.wolframalpha.com

February 27, 2017 at 8:26 am #200723
GG (@Martin-Gibson):
Think of powers and indices: if your transformation is to square (power 2), the inverse is the square root (power 0.5). As far as I am aware there is no inverse transformation to the Johnson transformation. By the way, I never use the Johnson transformation, because we need to know the engineering (or other) reasons for non-normality. Not all data is normal, and neither should it be.
February 27, 2017 at 10:29 am #200727
Alan J (@alan.jacob):
@Martin-Gibson
The transformation is done through a formula, right, like a Laplace transform (that's what I thought). In the attachment I added to my original question you can see a formula at the end. Yes, I agree with you, but:
Since most of the tools used assume normality, how do we use tools like capability analysis if you don't transform or fit another distribution? Also, in the original probability plot, did you notice that the plot goes out of normality due to a few extreme values? In such a case, if those extreme values can be concluded to be part of the inherent variation, can we treat it as an approximately normal distribution?
February 27, 2017 at 10:50 am #200731
Robert Butler (@rbutler):
@alan.jacob Your statement "Since most of the tools used are done assuming normality, then how do we use tools like capability analysis if you don't transform or fit another distribution" is the result of the kind of boilerplate training that is necessary if you only have a very short time to give someone some ability to deal with statistical issues. However, in the broader world of statistics it is in error.
It is true that the standard capability calculations require normality; however, there are alternate rules that allow you to compute capability when the data is non-normal, or even when it is attribute data. You should use inter-library loan and borrow a copy of Bothe's book Measuring Process Capability. Chapter 8, "Measuring Capability for Non-Normal Variable Data," has the answers you seek.
As for other tools such as t-tests, ANOVA, regression, control charts, etc., the issue of non-normality ... isn't. The t-test is very robust to non-normal data, as is ANOVA. With ANOVA one does have to worry about heteroscedasticity (unequal variability) between categories, but there are workarounds for that issue as well. In the case of regression, the assumption of approximate normality applies only to the residuals; it has nothing to do with the X's or the Y's. As for control charts, check page 76 of Understanding Statistical Process Control, 2nd Edition (Wheeler and Chambers), section 4.4, "Myths About Shewhart's Charts," Myth #1: "It has been said that the data must be normally distributed before they can be placed upon a control chart."
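The percentile idea behind the non-normal capability methods Robert mentions can be sketched briefly: replace the 6-sigma spread with the distance between the 0.135th and 99.865th percentiles of the data. The sample and spec limits below are illustrative, not from this thread:

```python
import random

# Percentile-based "equivalent" capability for non-normal data:
# spread = P99.865 - P0.135 stands in for 6 sigma.
def percentile(sorted_data, p):
    # Linear-interpolation percentile, p in [0, 100]
    k = (len(sorted_data) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(sorted_data) - 1)
    return sorted_data[f] + (sorted_data[c] - sorted_data[f]) * (k - f)

def equivalent_pp(data, lsl, usl):
    s = sorted(data)
    spread = percentile(s, 99.865) - percentile(s, 0.135)
    return (usl - lsl) / spread

random.seed(1)
# Skewed (lognormal) sample with hypothetical spec limits
data = [random.lognormvariate(0, 0.25) for _ in range(5000)]
print(round(equivalent_pp(data, lsl=0.4, usl=2.5), 2))
```

For normal data this reduces to the usual Pp, since the two percentiles then sit exactly 3 sigma either side of the mean; for skewed data it respects the actual tail behavior instead of a transform.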
February 27, 2017 at 11:07 am #200732
Alan J (@alan.jacob):
Thank you, @rbutler, for your guidance; I really appreciate it.
Please pardon my amateur doubts.

February 27, 2017 at 9:03 pm #200736
John Noguera (@jnoguera):
@Alan.jacob, the problem with using a Johnson transformation in a tolerance interval is that you have uncertainty in all four parameters. This uncertainty will not be accounted for in the 95% of the normal exact TI.
I recommend distribution fitting and Monte Carlo simulation to compute the TIs, or, if that is not feasible, use the VCOV percentile confidence intervals given in the distribution fit report. (Since you are using Minitab, that would be in Reliability/Survival > Distribution Analysis (Right Censoring).) The disadvantage of VCOV TIs is that you can only get one-sided intervals, whereas simulation will give you two-sided. See, for example:
Yuan, M., Hong, Y., Escobar, L.A., and Meeker, W.Q. (2016). "Two-sided tolerance intervals for members of the (log)-location-scale family of distributions," Quality Technology & Quantitative Management, DOI: 10.1080/16843703.2016.1226594.
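A heavily simplified parametric-bootstrap sketch of the simulation idea for a fitted lognormal model. This is not the exact calibration of Yuan et al. (2016), and the data and parameters are illustrative, not the thread's:

```python
import math
import random
import statistics

# Simplified parametric-bootstrap sketch for a two-sided tolerance
# interval under a fitted lognormal model. Only illustrates the shape
# of the simulation approach, not an exact-coverage method.
def lognormal_ti(data, n_boot=2000, seed=7):
    logs = [math.log(x) for x in data]
    mu = statistics.fmean(logs)        # fitted log-location
    sigma = statistics.stdev(logs)     # fitted log-scale
    n = len(data)
    rng = random.Random(seed)
    z = 2.5758                         # ~z-value for central 99% content
    lowers, uppers = [], []
    for _ in range(n_boot):
        # Simulate a sample from the fitted model and re-fit it
        sim = [rng.gauss(mu, sigma) for _ in range(n)]
        m, s = statistics.fmean(sim), statistics.stdev(sim)
        lowers.append(math.exp(m - z * s))
        uppers.append(math.exp(m + z * s))
    lowers.sort()
    uppers.sort()
    # Conservative envelope: low quantile of lowers, high of uppers
    return lowers[int(0.025 * n_boot)], uppers[int(0.975 * n_boot)]

random.seed(3)
data = [random.lognormvariate(1.9, 0.3) for _ in range(100)]
lo, hi = lognormal_ti(data)
print(lo < math.exp(1.9) < hi)
```

The resampling step is what captures the parameter uncertainty that a plain plug-in interval (or a Johnson transform treated as fixed) ignores.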
February 28, 2017 at 4:14 am #200737
John Noguera (@jnoguera):
I should add that you can estimate two-sided TIs using the VCOV percentile CI method with two one-sided intervals: alpha upper = 0.025, percentile upper = 99.95; alpha lower = 0.025, percentile lower = 0.05 (using your example). This will result in conservative TIs, but at least it gives you an estimate.
On the other hand, Yuan's simulation method will be Monte Carlo exact, so using 1e5 replications will give you very accurate tolerance intervals for the specified distribution.