Using Parts per Trillion Data as Continuous?
 This topic has 6 replies, 5 voices, and was last updated 1 year, 10 months ago by MBBinWI.


November 20, 2020 at 9:27 am #250955
Derek Kozlowski (Participant, @derekkoz)
Hey folks, I am wrestling with an issue. We do a lot of measurements where our primary measure is ppb or ppt. I understand that this is actually a percentage, since our test devices report the data as a non-integer (8.312, 10.480, etc.). However, showing this as a percentage makes the data almost impossible to decipher, so it is generally accepted to round it up or down and treat it as count data.
The problem comes with trying to analyze this data for normality, capability, etc. Can any reasonable assumption be made to treat the data as continuous? I am almost sure the answer is no, but am having a hard time making the right decisions in our work sphere.
This is the semiconductor business, and samples must be made by deposition, then the impurities measured. It is extremely difficult to obtain samples to test, as they must be grown through deposition, which is a very slow and expensive process. I am trying to find a way to use this data without gathering the number of data points most attribute tools require. 2030 data points is a considerable investment in both time and reactor use. We generally can’t get this in our experiments.
What can you offer me to help with this?
Thanks
November 20, 2020 at 10:29 am #250959
Robert Butler (Participant, @rbutler)
The short answer to your question is: your concern is really of no concern.
1. From Agresti, Categorical Data Analysis, 2nd Edition, page 3:
“Variables are classified as continuous or discrete, according to the number of values they can take. Actual measurements of all variables occur in a discrete manner, due to precision limitations in measuring instruments. The continuous-discrete classification, in practice, distinguishes between variables that take lots of values and variables that take few values. For instance, statisticians often treat discrete interval variables having a large number of values (such as test scores) as continuous, using them in methods for continuous responses.”
…so, go ahead and treat your measures as continuous.
2. Standard calculations for capability do require the data to be approximately normally distributed. Where that assumption fails, you need the methods for calculating capability with non-normal data. Chapter 8, “Measuring Capability for Non-Normal Variable Data,” in Bothe’s book Measuring Process Capability has the details.
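As a rough illustration of that workflow (this is a hedged sketch, not Bothe's exact method: the sample data, the seed, and the 15 ppt upper spec limit are all made up, and one common approach, a Box-Cox transform followed by an ordinary Cpu, stands in for the full non-normal capability machinery), you might do something like:

```python
# Sketch: check normality of impurity readings (ppt) and, if they fail,
# transform before computing capability. All numbers here are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
ppt = rng.lognormal(mean=2.0, sigma=0.3, size=50)  # skewed, like many impurity data

# Shapiro-Wilk: a small p-value suggests the data are not normal
stat, p = stats.shapiro(ppt)
print(f"Shapiro-Wilk p = {p:.4f}")

usl = 15.0  # hypothetical upper spec limit, in ppt
if p < 0.05:
    # Box-Cox transform toward normality; the spec limit must be
    # transformed the same way before computing capability
    transformed, lam = stats.boxcox(ppt)
    usl_t = (usl**lam - 1) / lam if lam != 0 else np.log(usl)
    cpu = (usl_t - transformed.mean()) / (3 * transformed.std(ddof=1))
else:
    cpu = (usl - ppt.mean()) / (3 * ppt.std(ddof=1))
print(f"Cpu = {cpu:.2f}")
```

The point is only the shape of the procedure: test the distributional assumption first, and only then pick the capability calculation that matches what you find.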
November 23, 2020 at 7:09 am #250990
Derek Kozlowski (Participant, @derekkoz)
Thank you. This was my general feeling as well, but I was unsure enough to seek greater wisdom! As always, it takes a village…
November 23, 2020 at 8:56 am #250992
Robert Butler (Participant, @rbutler)
…one additional thought. With measurements in parts per billion/trillion you are sure to run up against round-off error in whatever analysis program you are using. This will be true even if you have (as most programs do today) double precision. I would recommend you express the measures in scientific notation, drop the power of ten, and run the analysis on what is left. At the end you can convert everything back to ppb or ppt.
November 25, 2020 at 11:02 am #251032
Chris Seider (Participant, @cseider)
Sure you could.
Just look at your data: if it’s reported/recorded to the nearest 5 or 10 or 100, then you potentially won’t have the precision to use it as continuous.
Don’t forget to do an MSA!
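A quick way to eyeball the resolution concern is to count the distinct values your readings actually take; MSA guidance commonly wants at least five distinct categories before treating a measure as effectively continuous. A sketch with invented readings rounded to the nearest 5:

```python
# Sketch: if readings cluster on a handful of rounded values, there is
# too little resolution to treat them as continuous. Data are made up.
readings = [10.0, 10.0, 15.0, 10.0, 20.0, 15.0, 15.0, 10.0]

distinct = sorted(set(readings))
print(len(distinct), distinct)  # only a few distinct values: effectively discrete
```

Derek's instruments, reporting three decimal places, would likely show many distinct values per sample set and pass this check easily.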
November 26, 2020 at 9:37 am #251044
MBBinWI (Participant, @MBBinWI)
@derekkoz – as the learned @rbutler identifies, once you limit the number of decimal places you are going to use, you have created a discrete measure. The key is whether the number of decimals is sufficient to provide the resolution needed to answer the question being investigated. Now, don’t get me wrong, there are certainly absolute discrete measures, but even continuous measures are functionally discrete once you limit the number of decimals. The real question for you is what level of precision you need in order to answer the issues you are looking to understand.
I don’t know if there is any proof of this, but generally you should measure to one decimal place more than the precision you are trying to analyze. So, if you are trying to answer a question where the precision is to the 5th decimal place, and you are able to accurately and precisely measure to the 6th decimal, then you should be fine. If not, then you are going to have issues.
This progression of precision is something I have seen in industry. At the outset of an improvement effort, a rather gross measurement scale is sufficient because the issues are rather large. As those are reduced, the measurements need to become more precise in order to discern the differences and make further improvements.
Good Luck.