
MBBinWI

  • a typical yield is defined as [1 – (defective/(total produced))]*100%

    therefore defective % is (# defective/total produced)*100, or alternatively (defective quantity/(defective quantity + good quantity))*100

    Be careful about which yield you use…if there's a rework loop you can get a mismeasure of good quantity. This is a first pass yield…[Read more]
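
A minimal Python sketch of the yield arithmetic in the comment above; the function names and the example quantities (1,000 produced, 20 defective, 50 reworked) are invented for illustration.

```python
def yield_pct(defective, total_produced):
    """Yield % = [1 - (defective / total produced)] * 100."""
    return (1 - defective / total_produced) * 100

def defective_pct(defective, total_produced):
    """Defective % = (defective / total produced) * 100."""
    return defective / total_produced * 100

def first_pass_yield_pct(good_first_time, total_started):
    """First pass yield counts only units that were good without rework."""
    return good_first_time / total_started * 100

total, defects, reworked = 1000, 20, 50        # made-up quantities
print(yield_pct(defects, total))                                # 98.0
print(defective_pct(defects, total))                            # 2.0
print(first_pass_yield_pct(total - defects - reworked, total))  # 93.0
```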

  • PPM stands for parts per million. 1%, for example, is 10,000 defective per 1 million produced.

    PPM and % are equivalent. Maybe I'm missing something.
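
A tiny sketch of the % ↔ PPM conversion being discussed; the function names and example values are illustrative only.

```python
def pct_to_ppm(pct):
    """1% of a million parts is 10,000 parts, so multiply by 10,000."""
    return pct * 10_000

def ppm_to_pct(ppm):
    return ppm / 10_000

print(pct_to_ppm(1.0))   # 10000.0 defective per million at a 1% defect rate
print(ppm_to_pct(3.4))   # 0.00034 (%), i.e. the classic 3.4-per-million figure
```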

  • Fascinating perspective in this article.

    Thanks for taking the time to write this.

  • Ummmm, why not address the top defect? Don't complicate it.

    My two cents.

  • You'll have to provide more information about time and the nanofluids – are we talking about multiple kinds of fluids whose levels can be changed independently of one another, multiple levels of a single fluid, or something else?

    As for time, are we talking about taking measurements across time, or are we talking about running something…[Read more]

  • Looks like a measure of nonhomogeneity? FYI, it's screaming that an MSA must be done if you're trying to conclude there are differences among samples.

  • Ok, so all you really had were 9 independent measures of your process and all you actually have for lots 9326893 and 9300991 is a single independent measure each.

    How were these samples taken?

    a. Did you take samples over time as the product moved out of the reactor vessel (in which case you would at least have a measure of top-to-bottom…[Read more]

  • It is not an issue of bothering. The issue was the perception that all you cared about was Cpk regardless of process issues.

    I guess everyone has 20-20 hindsight – I wish you had posted your process details earlier because it would have saved a lot of time.

    First – “cherry picking” is the act of choosing samples in order to favor a…[Read more]

  • I'm not trying to be mean-spirited, but you have given me the impression that you are not paying attention to any of the responses you have received in this thread and in the thread you started on 28 January titled "Cpk and Normality". In that thread you were looking at a 30-sample draw for 9300991 which exhibited most of the features you…[Read more]

  • Basically, this emphasizes "data-driven" project selection and sustaining. Well written.

  • The 13.9 is the LSL. The screen output shows Minitab determined lambda = 5, so the transformed LSL is, as posted earlier, 13.9^5 (or 13.9**5, or 13.9 to the 5th power…all the same).
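
A quick sketch of the arithmetic behind the transformed spec limit, assuming (as the thread indicates) that the limit is simply raised to the Box-Cox lambda; the raw readings in the example are invented.

```python
import numpy as np

lsl = 13.9   # lower spec limit from the thread
lam = 5      # Box-Cox lambda reported by Minitab

transformed_lsl = lsl ** lam
print(transformed_lsl)        # ~518888.4, matching the 518888 quoted later in the thread

# The same power transform applied to some made-up raw readings:
raw = np.array([14.1, 14.3, 14.0, 14.6, 14.2])
print(raw ** lam)
```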

  • If the opportunities are truly the value-added ones, then DPMO is a fantastic tool to measure across services, product lines, etc. And since sigma level is well known, it's great.

    The ONLY problem is the question that has to be asked: did they shift the number or do a straight read from the Z table? And…some tables are already shifted, "being helpful".
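
A sketch of the shifted vs. straight-read question, assuming the common convention that the reported sigma level adds a 1.5-sigma shift to the Z value obtained from the DPMO; the function name is mine.

```python
from scipy.stats import norm

def sigma_level(dpmo, shift=1.5):
    """Convert DPMO to a sigma level; shift=0 gives a straight Z-table read."""
    z_long_term = norm.ppf(1 - dpmo / 1_000_000)
    return z_long_term + shift

dpmo = 3.4
print(round(sigma_level(dpmo, shift=0), 2))    # ~4.5  (straight read, no shift)
print(round(sigma_level(dpmo), 2))             # ~6.0  (with the 1.5-sigma shift)
```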

  • So, what you are saying is that your measurements can only be taken to the first decimal place. So,
    Given that the limitation on measurement precision is real,
    Given that the data is really representative of the process, and
    Given that you are not looking at lot selection,

    then what the normal probability plot is telling you is that you have a…[Read more]
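
A simulated illustration (not the poster's data) of how readings limited to one decimal place show up on a normal probability plot; the mean and spread used are arbitrary.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Measurements recorded only to the first decimal place stack into repeated values,
# which produces stair-steps on a normal probability plot even when the underlying
# process is normal.
rng = np.random.default_rng(0)
true_values = rng.normal(loc=14.2, scale=0.15, size=100)   # assumed process values
recorded = np.round(true_values, 1)   # gage resolution limited to one decimal place

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
stats.probplot(true_values, dist="norm", plot=axes[0])
axes[0].set_title("Full-precision readings")
stats.probplot(recorded, dist="norm", plot=axes[1])
axes[1].set_title("Rounded to one decimal")
plt.tight_layout()
plt.show()
```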

  • hmmm

    13.9**5 is about 518,888, which is on the graph – just an example of the lambda being 5 for your data.

    Anyway, process capability isn't a simple matter. You should know whether you have a good gage (we've identified that your precision may not be great) and do basic things like run/SPC charts to check for control, trends, etc.

    This has morphed into a…[Read more]
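
A sketch of the basic run/SPC check suggested above, using an individuals chart with limits at the mean ± 2.66 times the average moving range; the data are simulated stand-ins.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = rng.normal(loc=14.2, scale=0.15, size=30)   # made-up individual readings

mr = np.abs(np.diff(x))                 # moving ranges between consecutive points
center = x.mean()
ucl = center + 2.66 * mr.mean()         # standard individuals-chart limits
lcl = center - 2.66 * mr.mean()

plt.plot(x, marker="o")
for level, style in [(center, "-"), (ucl, "--"), (lcl, "--")]:
    plt.axhline(level, linestyle=style, color="gray")
plt.title("Individuals chart (illustrative data)")
plt.show()
```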

  • I bet that if you recorded to another decimal place, and had the precision in the gage to do so, the Anderson-Darling normality test would pass – limited measurement resolution is a common reason for failure of this test.

    Some of the other normality tests may indicate that you can consider the distribution normal.

    Your graphs clearly show the transformation that's done to the specs and the raw…[Read more]
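
A simulated illustration of the point about measurement resolution and the Anderson-Darling test; the data and spread are invented, so the exact statistics will vary from run to run.

```python
import numpy as np
from scipy import stats

# The same normal process can fail an Anderson-Darling normality test once readings
# are rounded to the gage's resolution, because only a few distinct values remain.
rng = np.random.default_rng(2)
true_values = rng.normal(loc=14.2, scale=0.05, size=200)
recorded = np.round(true_values, 1)   # only a handful of distinct recorded values

for label, data in [("full precision", true_values), ("rounded", recorded)]:
    result = stats.anderson(data, dist="norm")
    print(label,
          "A-D statistic:", round(result.statistic, 3),
          "5% critical value:", result.critical_values[2])
```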

  • @rbutler Interdependence is quite the complicated "qualification." Heck, many processes have some semblance of a relationship between consecutive samples, but I'd hate to say one shouldn't use SPC just because, for example, the sampling of something in a bloodstream has been "impacted" by previous events. I know you're not saying such a thing.

  • Capability after a transform gets to be REEEAAAAL interesting and you don't want to go there – I'm with @cseider – it may have failed a normality test, but I would forget that and focus on moving the process to the left. Again, I would question the lack of data near the USL (sorry, I called it the LCL last time – rented fingers, you know) because it…[Read more]

  • Well, you should consider sharing the data – I have a suspicion. Did you run the Capability Sixpack tool in Minitab, or at minimum a histogram and a run/SPC chart of the individual readings?

  • The data failed a normality test?

    It seems we're being too "tool-oriented" and the process needs shifting to the left.

  • You said you ran a Box-Cox transform and a Johnson transform; that would suggest you need to show three graphs – before, Box-Cox, and Johnson.

    The raw plot – the second graph – gives the impression that you have not been given all of the data from the process. You have an LCL and your process is slammed right up against it. Unless the LCL is some…[Read more]
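
A rough sketch of the three views asked for above (before, Box-Cox, Johnson) on simulated skewed data rather than the poster's data; note that Minitab's Johnson transformation selects among the SB/SL/SU families, whereas this sketch only fits the SU form as an illustration.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
raw = rng.lognormal(mean=2.7, sigma=0.1, size=100)   # stand-in for the raw readings

boxcox_data, lam = stats.boxcox(raw)                 # scipy estimates lambda itself

# Fit a Johnson SU distribution, then map the data toward a standard normal variate
# via z = a + b * arcsinh((x - loc) / scale).
a, b, loc, scale = stats.johnsonsu.fit(raw)
johnson_data = a + b * np.arcsinh((raw - loc) / scale)

fig, axes = plt.subplots(1, 3, figsize=(10, 3))
for ax, data, title in zip(axes, [raw, boxcox_data, johnson_data],
                           ["Before", f"Box-Cox (lambda={lam:.2f})", "Johnson SU"]):
    ax.hist(data, bins=15)
    ax.set_title(title)
plt.tight_layout()
plt.show()
```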
