iSixSigma

Johnson Transformation…..help


Viewing 10 posts - 1 through 10 (of 10 total)
  • #52651

    Sorour
    Participant

    Hi.
    I have a data set (n=15) which fails the Anderson-Darling normality test. As per our company procedures, we have to prove normality before we statistically analyse data, so we must transform it with either the Box-Cox or the Johnson transformation. Box-Cox fails to find a transformation, but the Johnson transformation succeeds. If the data could not be transformed to normality, I could instead fit a non-normal distribution.
    There is an outlier in the data set when a box plot is constructed, but all data points are in control (I think there is less variation than expected).
    Is the Johnson transformation a very severe transformation when outliers are present? Should we be using this transformation?
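    The workflow described above (Anderson-Darling check, then an attempted Box-Cox transform) can be sketched in Python with scipy in place of Minitab. This is a minimal illustration with made-up data, not the poster's actual data set; whether the test rejects depends on the sample.

```python
# Sketch of: test normality with Anderson-Darling, then try Box-Cox.
# The data below are simulated (skewed, n=15) purely for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=1.0, sigma=0.8, size=15)  # skewed sample, n=15

# Anderson-Darling test for normality: reject at the 5% level if the
# statistic exceeds the critical value listed for significance 5.0.
ad = stats.anderson(data, dist='norm')
crit_5pct = ad.critical_values[list(ad.significance_level).index(5.0)]
reject = ad.statistic > crit_5pct  # True means normality is rejected at 5%

# Box-Cox requires strictly positive data; it returns the transformed
# sample plus the fitted lambda.
transformed, lam = stats.boxcox(data)
ad_after = stats.anderson(transformed, dist='norm')
```

    If the post-transform Anderson-Darling statistic still exceeds its critical value, Box-Cox "failed to find a transformation" in the sense Paul describes, and a Johnson transformation would be the next step.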
    Thanks in advance
    Paul
     

    #185477

    Sigmordial
    Member

    The Johnson curves are a family of distributions based on the transformed normal.  They are quite flexible in that they can accommodate almost every combination of kurtosis and skewness.  However, view these curves as academic.  When fitting distributions, you are ‘listening’ to the first part of Voice of Process: Shape (with the rest being center, spread, and performance over time).  
     
    We should always balance what the best fit suggests with what makes sense in terms of the process and/or phenomenon that provides the data. In a rough sense, transforming the voice of the process is tantamount to asserting that you do not agree with what the data suggest, so you distort this voice into something that better fits the assumptions of your statistical analysis.
     
    Two of the three Johnson curves represent pretty dramatic transformations of the data. Another thing to consider about transformations: once you have transformed the data, can you readily ‘un’-transform them? The Su curves involve the inverse hyperbolic sine (arcsinh) and the Sb curves involve the natural log of Y/(1-Y) – pretty severe transformations.
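    For concreteness, the two "severe" Johnson transformations mentioned above can be written out as below. The parameter names (gamma, delta, xi, lam) follow the usual Johnson-system convention, and the values used are arbitrary placeholders, not fitted parameters.

```python
# The Johnson Su and Sb transformations, written out explicitly.
# gamma/delta are shape parameters; xi/lam are location/scale.
import numpy as np

def johnson_su(x, gamma, delta, xi, lam):
    """Su (unbounded): uses the inverse hyperbolic sine."""
    return gamma + delta * np.arcsinh((x - xi) / lam)

def johnson_sb(x, gamma, delta, xi, lam):
    """Sb (bounded): log of y/(1-y), with y rescaled into (0, 1)."""
    y = (x - xi) / lam
    return gamma + delta * np.log(y / (1.0 - y))

# Both map their domain onto the whole real line, which is what makes
# them flexible -- and also what makes them hard to interpret afterwards.
z_su = johnson_su(np.array([1.0, 2.0, 5.0]), 0.0, 1.0, 0.0, 1.0)
z_sb = johnson_sb(np.array([0.2, 0.5, 0.8]), 0.0, 1.0, 0.0, 1.0)
```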
     
    Rather than jumping straight to a transformation, you should ask why we are seeing this unusual skewness or kurtosis in the data: should we expect it from the process, or is there some assignable cause? Heavy tails may indicate excessive out-of-control points, which you should detect by evaluating process performance over time using control charts. Platykurtic behavior may be due to sampling from several different processes/populations, resulting in a distribution that appears rather flat.

    #185479

    Lee
    Participant

    1.  Good basic approach you use: checking normality before generating meaningless statistics.
    2.  I do not consider myself adequate to respond to the Johnson transformation aspect of your query, but I can respond this way: it is likely that you can find some sort of standard transformation that makes the data look normal, but the burning question is what is inherent in the process that would cause the data to fit the transformation selected. In other words, if the data are not normal and a log transformation makes them normal, but there is no process fundamental reason why the log should be present, then beware that the transformation is likely not valid. Just a self-imposed rule I use, and it forces me to think about the process and the measurements.
    3.  You mention outliers … have you considered recording errors and special causes?   Perhaps there is another independent variable manifesting itself.
    4.  I presume you have weighed the costs/time in getting a larger data set.

    #185482

    Ken Feldman
    Participant

    Paul,
    I think the other posters pretty much captured the intent.
    1. The rule of being normal before doing analysis is STUPID and not based on any statistical consideration. In fact it shows ignorance, and whoever is in charge of this in your company should be forced to spend a week being lectured by Stan as punishment.
    2. 15 data points is hardly enough to do anything meaningful and the concept of normality is DUMB.
    3. Transforming data is the LAST thing you do. First is to better understand why it might not be normal. Things like time cannot be normally distributed by the very nature of there being a boundary of zero. If there is an outlier, deal with that. If you have mixed data then possibly some stratification will give you insight. In any case understand your data. In your case, 15 gives you nothing to do serious analytics with.
    4. You might also try nonparametric analysis if appropriate.
    5. If all else fails and you have tried the first two steps described above, then you can transform. But few people will actually understand what they are looking at, since the data are now in units which are quite foreign compared to the original data. Protect yourself and don’t get forced into doing something stupid, which seems to be the SOP for your company.

    #185490

    Brar
    Participant

    Much of the time, data will not follow the normal distribution. You can do a Johnson transformation on the data, but the transformation is complex, and interpreting it afterwards is very difficult. My suggestion would be to go to Minitab, identify the distribution the data belong to, and then do the corresponding analysis using that distribution.
    Also do a root-cause analysis on the outlier. If you find a truly special cause (one that will not be repeated), remove that point and do the analysis. If not, then you cannot remove the data point. Also collect a few more data points to see if you get more outliers. If you do, it means that your process is not stable.
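    The "identify the distribution" step Brar describes (Minitab's Individual Distribution Identification) can be approximated in scipy by fitting several candidate distributions and comparing a goodness-of-fit statistic. The candidate list and the simulated data below are illustrative assumptions, not part of the original post.

```python
# Rough scipy analogue of Minitab's Individual Distribution
# Identification: fit candidates, compare Kolmogorov-Smirnov statistics
# (smaller = better fit). Data are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.weibull(1.5, size=50) * 10  # made-up skewed, positive sample

candidates = {
    'norm': stats.norm,
    'lognorm': stats.lognorm,
    'weibull_min': stats.weibull_min,
    'gamma': stats.gamma,
}
results = {}
for name, dist in candidates.items():
    params = dist.fit(data)                       # maximum-likelihood fit
    results[name] = stats.kstest(data, name, args=params).statistic

best = min(results, key=results.get)  # distribution with the best fit
```

    Once a distribution fits well, a non-normal capability analysis against that distribution avoids transforming the data at all.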

    #185499

    Stanley110
    Member

    Hi Paul:
    Go to http://www.qualitydigest.com and look for these three very recent articles and read them in order.
    D. Wheeler, Do you have Leptokurtophobia?
    F. Breyfogle, Non-normal data: To Transform or Not to Transform.
    D. Wheeler, Transforming the data can be fatal to your analysis.
    F. Breyfogle, NOT Transforming the Data Can be Fatal to Your Analysis.
    Regards,
    Stanley110

    #185502

    Mikel
    Member

    They kind of make Forrest look like a whining baby, don’t they? Kind of reminds me of the old Taguchi / Classical DOE battles. Those who engaged in debate did nothing. Those that followed either method went and did the work, and learned. I wonder if there is a message there somewhere?

    #185529

    MM
    Participant

    Hi,
    I think Darth mentioned the correct approach, as did someone else who asked you to explore the definition of floppiness.
    No advice from me on this…
    But…I do have a question on transformation of data … why do we transform data OR in which situations should we transform data?
    If I should post this query in another thread…pls reply to this post of mine and I will immediately act on it.
    Thanks in advance.

    #185533

    Ken Feldman
    Participant

    MM,
    I find few applications where I MUST transform data to do the proper analysis with the data.  Stratification and understanding the data usually solves the problem.  If not, then nonparametrics often do the trick.  Box Cox works a lot of times if I really need to transform.  So, here is an example whereby I would need to use a transformation, especially a Johnson.
    I am evaluating the purchase of a new machine. I have collected data on the cycle time it takes to produce quality parts. Since it is time, the data will be skewed. I look for outliers and explore the data, and it still remains significantly skewed.
    The manufacturer has provided specs on the average cycle time I can expect, but they fail to provide a median cycle time. Since the data are so skewed, comparing my skewed data against the expected average makes no sense, so the use of a nonparametric one-sample sign test is excluded. Assuming the mean and median to be the same won’t fly.
    I try transforming the data using Box-Cox so that I can do a one-sample t test to see if the machine is performing as promised. Unfortunately, Box-Cox doesn’t work. If your Box-Cox doesn’t work, then you must use your Johnson. Assuming that transforms the data, I can now carry out my hypothesis test. Remember to also transform the spec using the same transformation formula.
    I could likewise do some process capability analysis on the transformed data if I didn’t want to identify the distribution and then do a non-normal process capability. I would likely not share any of the above outputs with anyone, since the units of measurement are strange values; I would only describe my conclusions. At the end, if I had to, I could back-transform things into their original units. Hope this helps.
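    A minimal sketch of that sequence, assuming scipy: the cycle-time data and the claimed average of 12.0 are made up, and scipy's `johnsonsu` stands in for Minitab's Johnson transformation (Minitab actually selects among the Su/Sb/Sl families). The key points from the post are transforming the spec with the same fitted parameters and back-transforming at the end.

```python
# Sketch of: Box-Cox, then Johnson Su, one-sample t test against the
# transformed spec, then back-transform. Data are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
cycle_times = rng.lognormal(mean=2.4, sigma=0.5, size=40)
claimed_mean = 12.0  # manufacturer's claimed average cycle time (made up)

# Step 1: try Box-Cox first (requires strictly positive data).
bc_data, lam = stats.boxcox(cycle_times)

# Step 2: if Box-Cox had failed, fit a Johnson Su and transform both
# the data AND the spec with the same fitted parameters.
a, b, loc, scale = stats.johnsonsu.fit(cycle_times)
z_data = a + b * np.arcsinh((cycle_times - loc) / scale)
z_spec = a + b * np.arcsinh((claimed_mean - loc) / scale)

# Step 3: one-sample t test in the transformed units.
t, p = stats.ttest_1samp(z_data, z_spec)

# Step 4: back-transform results into original units for reporting.
back = loc + scale * np.sinh((z_spec - a) / b)  # recovers claimed_mean
```

    Note that the t test's conclusion lives in the transformed units; as Darth says, only the conclusion (and any back-transformed quantities) should be reported.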

    #185534

    MM
    Participant

    Thanks Darth. It was a great example.
    Have a gr8 weekend.


The forum ‘General’ is closed to new topics and replies.