FMEA – Detectability number is confusing
This topic has 24 replies, 10 voices, and was last updated 14 years, 7 months ago by Long.
November 6, 2007 at 2:03 pm #48604
Sarma Pisapati (Member)
Recently, I was on a project that needed an FMEA for datacenter high availability. Many of the participants were not comfortable with the "Detectability" definition, where a higher number is worse. I understand that it is important to represent the worst case with the highest number, so that the RPN (S*O*D) comes out higher for the worst case. However, many participants make mistakes when entering the information. I think it would be better to find a more suitable term, such as "un-detectability" or "non-detectability".
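For what it's worth, the RPN arithmetic Sarma describes is simple to sketch. A minimal example (the function name and range check are my own illustration, not part of any FMEA standard):

```python
# Risk Priority Number: each factor is scored 1-10, and on all three
# scales a HIGHER score means HIGHER risk. For Detection, a 10 means
# the failure is essentially undetectable -- the source of the
# confusion discussed in this thread.
def rpn(severity: int, occurrence: int, detection: int) -> int:
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores must be between 1 and 10")
    return severity * occurrence * detection

# A severe, occasional, hard-to-detect failure outranks a severe,
# frequent, but easily detected one:
print(rpn(9, 4, 10))  # 360
print(rpn(9, 8, 1))   # 72
```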
November 6, 2007 at 2:12 pm #164451
Whatever works for your group and reduces confusion. You can even give them alpha notations (A, B, C, …). Your final S*O*D terms will look like 15A or 35B, but you'll still be able to sort and rank them. This might work best if your detection definition is a bit fuzzy or nonspecific.
November 6, 2007 at 3:44 pm #164452
If the team is struggling with the concept that the worse the detectability, the lower the score, then don't depend on them to assign the values. Let them assign the detectability using the subjective approach (more than once per day to once every 5–100 years) or the probability approach (>20% at the worst end), and convert it to the 1–10 scale for them.
November 6, 2007 at 4:52 pm #164453
Sarma Pisapati (Member)
This may be confused with "Occurrence". I am working on an FMEA for software and hardware components, where systems-management monitoring tools help detect failures. The goal is to detect any component failure well before the damage occurs, or before the end user reports the problem. I know that for some components this may not be possible, and we may have to use redundant components.

November 6, 2007 at 6:00 pm #164455
My bad. I was multi-tasking during the response and addressed the wrong scale. For your team, you could just ask them to determine the detectability from the following scale:
10. Defect undetectable
9. Units sporadically checked for defects
8. Units systematically sampled and inspected
7. All units manually inspected
6. Mistake-proofing modifications after inspection
5. SPC and manual inspection
4. SPC with immediate reaction to out-of-control conditions
3. SPC with 100% inspection surrounding out-of-control conditions
2. 100% automatic inspection
1. Defect is obvious and can be kept from reaching the customer

Let them agree on the characteristic on the right, and do not even show them the number associated with it. Let them review the RPN when you are done, and it should make sense to them.

November 6, 2007 at 6:15 pm #164457
Sarma: I've worked with hundreds of teams, and this is always the one scale on an FMEA that gives a team trouble. It seems logically backwards. I acknowledge this up front to make sure everyone understands why the scale is the way it is. I usually repeat this for each failure mode as we work through the FMEA.
Cheers, Alastair
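The "agree on the wording, hide the number" approach above can be made concrete by keeping the wording-to-score mapping in one place, so the facilitator rather than the team supplies the number. A rough sketch (the dictionary below just transcribes the scale above; the function name is mine):

```python
# Detection scale: the team picks a description, the facilitator looks
# up the score. A higher score means worse detectability.
DETECTION_SCALE = {
    "Defect undetectable": 10,
    "Units sporadically checked for defects": 9,
    "Units systematically sampled and inspected": 8,
    "All units manually inspected": 7,
    "Mistake-proofing modifications after inspection": 6,
    "SPC and manual inspection": 5,
    "SPC with immediate reaction to out-of-control conditions": 4,
    "SPC with 100% inspection surrounding out-of-control conditions": 3,
    "100% automatic inspection": 2,
    "Defect is obvious and can be kept from reaching the customer": 1,
}

def detection_score(description: str) -> int:
    """Return the 1-10 detection ranking for an agreed description."""
    return DETECTION_SCALE[description]

print(detection_score("All units manually inspected"))  # 7
```

The team only ever sees the left-hand descriptions; the numbers surface later, in the RPN review.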
November 6, 2007 at 6:33 pm #164458
I concur with Alastair. I would rather the team try to understand the logic of the higher number for lower detectability.
November 6, 2007 at 6:43 pm #164459
Taylor (Participant)
This has always worked for me.

Detection
Criteria: Likelihood the existence of a defect will be detected by process controls before the next or subsequent process, or before the product leaves the processing location.

10. Almost Impossible: No known control(s) available to detect failure mode.
9. Very Remote: Very remote likelihood current control(s) will detect failure mode.
8. Remote: Remote likelihood current control(s) will detect failure mode.
7. Very Low: Very low likelihood current control(s) will detect failure mode.
6. Low: Low likelihood current control(s) will detect failure mode.
5. Moderate: Moderate likelihood current control(s) will detect failure mode.
4. Moderately High: Moderately high likelihood current control(s) will detect failure mode.
3. High: High likelihood current control(s) will detect failure mode.
2. Very High: Very high likelihood current control(s) will detect failure mode.
1. Almost Certain: Current control(s) almost certain to detect failure mode. Reliable detection controls are known with similar processes.

November 6, 2007 at 6:47 pm #164460
Taylor (Participant)
I meant to add all the criteria:
Effect
Criteria: Severity of Effect

10. Hazardous — without warning: May endanger consumer. Very high severity ranking when a potential failure mode affects fitness for use and efficacy of the product and/or noncompliance with government regulation. Failure will occur without warning.
9. Hazardous — with warning: May endanger consumer. Very high severity ranking when a potential failure mode affects fitness for use and efficacy of the product and/or noncompliance with government regulation. Failure will occur with warning.
8. Very High: Major impact upon product flow and/or probability of acceptance. 100% of product may have to be scrapped. Product is nonfunctional. Consumer very dissatisfied.
7. High: Minor impact upon product flow and/or probability of acceptance. A portion of the product (<100%) may have to be scrapped. Product is functional, but at a reduced level (e.g., cosmetic defects). Consumer dissatisfied.
6. Moderate: Minor impact upon product flow and/or probability of acceptance. A portion of the product (<100%) may have to be scrapped. Product is functional, but at a reduced level (e.g., cosmetic defects). Consumer experiences discomfort.
5. Low: Minor impact upon product flow. 100% of the product may have to be reworked. Product is functional, but at a reduced level. Consumer experiences some dissatisfaction.
4. Very Low: Minor impact upon product flow. The product may have to be sorted and a portion (<100%) reworked. Cosmetic defects noticed by most consumers.
3. Minor: Minor impact upon product flow. A portion of the product (<100%) may have to be reworked. Cosmetic defects noticed by consumers.
2. Very Minor: Minor impact upon product flow. A portion of the product (<100%) may have to be reworked. Cosmetic defects noticed by discriminating consumers.
1. None: No effect.

Occurrence
Criteria: Probability of Failure (possible failure rates, with corresponding Ppk)

10. Very High: > 1 in 2 (Ppk < 0.33)
9. Very High: 1 in 3 (Ppk > 0.33)
8. High: 1 in 8 (Ppk > 0.51)
7. High: 1 in 20 (Ppk > 0.67)
6. Moderate: 1 in 80 (Ppk > 0.83)
5. Moderate: 1 in 400 (Ppk > 1.00)
4. Moderate: 1 in 2,000 (Ppk > 1.17)
3. Low: 1 in 15,000 (Ppk > 1.33)
2. Very Low: 1 in 150,000 (Ppk > 1.50)
1. Remote: < 1 in 1,500,000 (Ppk > 1.67)

Detection
Criteria: Likelihood the existence of a defect will be detected by process controls before the next or subsequent process, or before the product leaves the processing location.

10. Almost Impossible: No known control(s) available to detect failure mode.
9. Very Remote: Very remote likelihood current control(s) will detect failure mode.
8. Remote: Remote likelihood current control(s) will detect failure mode.
7. Very Low: Very low likelihood current control(s) will detect failure mode.
6. Low: Low likelihood current control(s) will detect failure mode.
5. Moderate: Moderate likelihood current control(s) will detect failure mode.
4. Moderately High: Moderately high likelihood current control(s) will detect failure mode.
3. High: High likelihood current control(s) will detect failure mode.
2. Very High: Very high likelihood current control(s) will detect failure mode.
1. Almost Certain: Current control(s) almost certain to detect failure mode. Reliable detection controls are known with similar processes.
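The occurrence column can be applied mechanically: given an observed failure rate, walk down the thresholds until one matches. A sketch, with the thresholds transcribed from Taylor's table (the function name and the >= boundary treatment are my own choices):

```python
# Occurrence thresholds from the AIAG-style table above, as
# (failure-rate lower bound, ranking) pairs, worst first.
OCCURRENCE_THRESHOLDS = [
    (1 / 2,       10),
    (1 / 3,        9),
    (1 / 8,        8),
    (1 / 20,       7),
    (1 / 80,       6),
    (1 / 400,      5),
    (1 / 2_000,    4),
    (1 / 15_000,   3),
    (1 / 150_000,  2),
]

def occurrence_score(failure_rate: float) -> int:
    """Map an observed failure rate (failures per opportunity) to 1-10."""
    for bound, rank in OCCURRENCE_THRESHOLDS:
        if failure_rate >= bound:
            return rank
    return 1  # rarer than 1 in 1,500,000 -> Remote

print(occurrence_score(0.6))      # 10: worse than 1 in 2
print(occurrence_score(1 / 100))  # 5: between 1 in 80 and 1 in 400
```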
November 6, 2007 at 7:08 pm #164461
Sarma Pisapati (Member)
I don't know who started this confusing terminology. It will be a real pain when thousands of items are to be evaluated by various groups. I think the scale and terminology should be consistent, so that generations down the road are comfortable providing the input. To have a low number mean low and a high number mean high throughout the analysis, I would recommend two options: a) with DETECTABILITY terminology, RPN = S*O/D; b) with UNDETECTABILITY terminology, RPN = S*O*D.
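Sarma's two options do at least agree on direction, which is easy to check. A sketch, assuming the two scales are mirror images on 1–10 (the relation d_un = 11 - d_det is my assumption, not stated in the post):

```python
# Option (a): "detectability", where 10 = easily detected (good),
#             and RPN = S * O / D.
# Option (b): "undetectability", where 10 = undetectable (bad),
#             and RPN = S * O * D.
# Assumption: the two scales mirror each other, d_un = 11 - d_det.

def rpn_a(s, o, d_det):
    return s * o / d_det

def rpn_b(s, o, d_det):
    return s * o * (11 - d_det)

# Both agree that a harder-to-detect failure (lower d_det) carries
# more risk, even though the numeric values differ:
for d_det in (10, 5, 1):
    print(d_det, rpn_a(7, 3, d_det), rpn_b(7, 3, d_det))
```

Note that across failure modes with different S*O values the two formulas do not produce identical rankings, so a team would need to pick one convention and stick with it.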
November 6, 2007 at 7:13 pm #164462
Chad: It's always strange to ask a team to find out how many defects are NOT detected. It's like asking how many people have successfully embezzled money from the company. Your list is a nice scale for detectability. We usually modify the scale for the particular project, using the worst case (10) and the best case (1) to get a good range. We can sometimes define the rating using scenarios like: "If you processed 1,000 claims each week and you somehow knew 100 had missing information, how many times would someone pick them up at this process step? Would it be 1, 3, 10, 30, 100?" We can refine some of the numbers by comparing two similar ratings and asking, "Of these two failure modes, which is likely to be detected better (or worse)?" You can sometimes get some information from the number of defects reported (detected) at various process steps.
Cheers, Alastair
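Alastair's claims scenario can also be run as a quick calculation: if reviewers catch some fraction of the known-bad claims, the escape rate is just one minus the catch rate. The numbers below come from his hypothetical; the function name is my own:

```python
# Alastair's thought experiment: 1,000 claims per week, of which 100
# are known to have missing information. If reviewers flag `caught`
# of them, the catch and escape rates fall out directly.
def detection_rates(defects_present: int, defects_caught: int):
    catch_rate = defects_caught / defects_present
    escape_rate = 1 - catch_rate
    return catch_rate, escape_rate

# "Would it be 1, 3, 10, 30, 100?" -- each answer implies a rate:
for caught in (1, 3, 10, 30, 100):
    catch, escape = detection_rates(100, caught)
    print(f"caught {caught:3d}/100 -> catch {catch:.0%}, escape {escape:.0%}")
```

The escape rate is what would then be mapped onto the 1–10 detection scale, with the mapping agreed by the team.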
November 6, 2007 at 7:45 pm #164463
In an area where extensive process and customer data is available, the rating is an escape rate based on data. It's not always available, but any company with extensive warranty exposure should have the data. You may have to dig it out of the accounting records.

November 6, 2007 at 7:49 pm #164464
Wow, a lot of talk about something simple. The rating represents risk: the higher the rating, the greater the risk.
November 6, 2007 at 8:00 pm #164466
I agree. It really is pretty simple. I get concerned when we feel it is necessary to "dumb it down". The team members might hurt for a little while as learning occurs, but that's not a bad thing. "Every day is an opportunity to learn something new."

November 6, 2007 at 9:07 pm #164468
Sarma Pisapati (Member)
Yes, it is simple for you and me. But with several groups in a real-world situation, it is prone to errors. The debate is about terminology. The term "Detectability" and the number assigned on the chart lead to confusion.
November 6, 2007 at 9:21 pm #164470
Yea, I understand. That term "risk" is a tough one to get one's head around.

November 6, 2007 at 9:25 pm #164471
Taylor (Participant)
Alastair, I put together these detection criteria based on several sources and molded them into one that "works for me". I agree with what you say, but I had to determine what the team I was dealing with regularly could comprehend. Having a reference to look at not only cleared up confusion about the rating, but also created thought around the number that was assigned, and I found it very helpful.
November 6, 2007 at 9:29 pm #164472
Plegiarized? (Participant)
You developed those? Looks like almost a straight rip-off of AIAG.

November 6, 2007 at 10:17 pm #164473
Taylor (Participant)
I didn't say I developed these. What I said was, "I put together these detection criteria based on several sources and molded them into one that 'works for me'". Several sources being key. Plegiarized, I guess, or maybe just resourceful. Yes, some came from AIAG, some from Six Sigma Academy (2002 GOAL/QPC), some from Lean Six Sigma by Michael L. George (pp. 190-191), and the rest from training documents confidential and proprietary to Tyco International Ltd for week #1 GB.
Good Day

November 7, 2007 at 12:44 am #164474
Dr. Scott (Participant)
Sarma,
Just use some sort of metaphor to explain it to your people. Something like one of those cop shows on TV: what is the likelihood we can catch (detect) a thief (defect) given the investigative process and resources the police force has (inspection)? The more doubt we have that we can identify the thief, the higher the risk that he or she will commit another crime. Therefore the risk number (RPN) should go up.
Try something like that; maybe it will work.
Good Luck,
Dr. Scott

November 7, 2007 at 3:14 am #164476
Plegiarized? (Participant)
You did a lot of work to come up with something that was already in AIAG. Your detection scale is no better and no more helpful than theirs.

November 7, 2007 at 4:27 am #164477
Brandon (Participant)
Someone may want to plagiarize a dictionary and correctly spell the word.
November 7, 2007 at 4:35 pm #164489
I was so resisting the urge to post that same response.
November 7, 2007 at 4:52 pm #164491
Brandon (Participant)
Pete – my will power is less than yours. Oh well.
November 11, 2007 at 5:09 pm #164626
It seems to me many of the posts in this forum have unstated assumptions about the type of FMEA.
I'm a design engineer for implantable medical devices (high risk). We use FMEA on every product. We have team members from engineering, marketing, clinical/nursing, and regulatory, and the team gets confused about this topic almost every time. I have been working to define Detectability based on the type of FMEA:
Design FMEA – how detectable (think preventable) is the Cause of this failure (line on the FMEA table)? That is, if the cause is a poor weld, and welding is a new process for us, what will we do (mitigate) during design and development to prevent this?
Process FMEA – how detectable (think preventable) is this Manufacturing Cause of this failure (line on the FMEA table)? That is, if two parts are bonded during assembly with the wrong alignment to each other, what can be done during assembly to mitigate this? Mitigations could be creating an alignment fixture, or requesting a design change so that the parts cannot be assembled in the wrong orientation.
Application FMEA (User FMEA) – how detectable is an in-use problem by the user in his work environment, such that he can recognize the problem before it happens (or the equipment self-detects and gives the user warning with enough time to correct the situation)? For us, this is good for evaluating risk when planning to allow use of our existing products in new user environments. Example: using a general IV pump on newborn babies. The maximum pumping rate can be so high that a day's worth of drug is administered in one hour, killing the patient. A mitigation might be a designed-in warning: the pump feeds back the settings and requests user confirmation because of the high pumping-rate request. A product acceptable in one user environment can be risky when used in another.
The forum ‘General’ is closed to new topics and replies.