Pass/Fail Reliability Problem
- October 30, 2017 at 6:10 am #55852
Here is my problem and I would appreciate any advice/help.
My company makes electronic widgets. Over a 3-month period, 3,500 units were produced and 28 of these failed a particular end-of-line test (attribute data). These were put in a reject pile for further investigation.
I wasn’t afforded time by my company to complete a full Gage R&R on the test. However:
• Took 5 brand-new first-pass units from production. Locked down all the variables: same test person, same test coins, same number of coins entered. Repeated 3 times per unit. Every unit passed each test.
• Took 5 failed units from the bone pile. Locked down all the variables: same test person, same test coins, same number of coins entered. Repeated 3 times per unit. Every unit passed each test bar 1.
I also retested the 28 units that had failed: 24 passed on retest and 4 failed again.
My problem is that Engineering does not have the resources or the inclination to investigate the repeat failures, and Operations wants to send the retest-passed units out to the customer. The question from Quality is: how do we know a pass stays a pass and a fail stays a fail?
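As a rough sanity check on the numbers above (a sketch only, assuming independent outcomes and treating the retest result as the truth; the Wilson interval function below is a standard textbook formula, not anything from the original test setup):

```python
import math

produced = 3500      # units built over the 3-month period
first_fail = 28      # units failing the end-of-line test first time
retest_pass = 24     # failed units that then passed on retest
retest_fail = 4      # failed units that failed again

first_fail_rate = first_fail / produced    # 28/3500 = 0.8% apparent fail rate
repeat_fail_rate = retest_fail / produced  # ~0.11% repeatable-failure rate
# If the 24 retest-passes were false rejects, that is the test's
# apparent false-reject rate across production:
false_reject_rate = retest_pass / produced  # ~0.69%

def wilson_interval(k, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_interval(retest_fail, produced)
print(f"repeatable-failure rate: {repeat_fail_rate:.4%} "
      f"(95% CI {lo:.4%} to {hi:.4%})")
```

The point of the interval: with only 4 repeat failures, the true repeatable-failure rate is only known to within roughly a factor of several, which is worth keeping in mind before anyone ships the retest-passed units.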
Can anybody give me some pointers on reliability studies or the statistical risks of passed and failed units?
- October 30, 2017 at 7:23 am #201919
You’ve got a real life problem and I don’t think pointers are necessarily what you need right now.
1. Do that Gauge R&R
2. What’s the consequence of a customer receiving a failing unit?
3. Can you sell the retest-pass units as “reconditioned” and offer a replacement if they do fail?
How vital is this widget?
If your electronic widget keeps a plane in the sky, a failure may well be catastrophic. If your electronic widget keeps a toy beating a small drum, not so much.
4. Don’t panic
5. Think about it.
- October 30, 2017 at 7:31 am #201922
@andy-parr your comments are spot on….
To the original poster: you didn’t note whether the results are a true pass/fail attribute, actual continuous measurements, or continuous measurements converted to a pass/fail without recording what the Y values actually were.
Your findings are interesting, and you might remind them that if we don’t know what is good or bad, then you may well be shipping bad parts as good, which is just as serious an error as rejecting good parts as bad. Andy Parr’s advice to do a gage R&R is sound. Also consider that just because ONE person said the units are “good,” the gage R&R results might show that person is the less precise evaluator.
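For an attribute (pass/fail) test, the gage R&R takes the form of an attribute agreement analysis. A minimal sketch of the within-appraiser piece, using made-up example data shaped like the 3-repeats-per-unit trials described earlier (the unit names and reads are hypothetical):

```python
# Hypothetical data: one appraiser, 3 repeated pass/fail reads per unit.
trials = {
    "good_1":   ["pass", "pass", "pass"],
    "good_2":   ["pass", "pass", "pass"],
    "reject_1": ["fail", "fail", "fail"],
    "reject_2": ["pass", "fail", "pass"],  # inconsistent reads
    "reject_3": ["pass", "pass", "pass"],  # original fail not reproduced
}

# A unit is "consistent" if all repeated reads agree with each other.
consistent = sum(1 for reads in trials.values() if len(set(reads)) == 1)
agreement = consistent / len(trials)
print(f"within-appraiser agreement: {agreement:.0%}")  # prints "within-appraiser agreement: 80%"
```

A full attribute agreement study would also compare appraisers to each other and to a known reference (the true good/bad status of each unit), which is exactly what would tell you whether a retest “pass” can be trusted.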