iSixSigma

FMEA – Detectability number is confusing


Viewing 25 posts - 1 through 25 (of 25 total)
  • Author
    Posts
  • #48604

    Sarma Pisapati
    Member

    Recently, I was on a project that needed FMEA analysis for datacenter high availability. Many of the participants are not comfortable with the definition of “Detectability”, where a higher number is worse. I understand that it is important to represent the worst case with the highest number so as to make the RPN (S*O*D) higher for the worst case. However, many participants are making mistakes in feeding in the information. I think it would be better to find another, more suitable term, such as un-detectability or non-detectability, etc.
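    For readers who think in code, the convention under discussion can be sketched in a few lines. This is an illustrative sketch, not anything from the thread; the function name and the range check are my own.

    ```python
    # Minimal sketch of how the three FMEA ratings combine into a
    # Risk Priority Number. All names here are illustrative.

    def rpn(severity: int, occurrence: int, detection: int) -> int:
        """RPN = S * O * D, each rated 1-10.

        Note the convention that trips teams up: 'detection' here is
        really a lack-of-detection rating. 10 means the failure is
        almost impossible to detect, 1 means it is almost certain to
        be caught. Higher always means riskier, so a high RPN flags
        the failure modes to work on first.
        """
        for rating in (severity, occurrence, detection):
            if not 1 <= rating <= 10:
                raise ValueError("each rating must be between 1 and 10")
        return severity * occurrence * detection

    # A severe, frequent, hard-to-detect failure dominates the ranking:
    print(rpn(9, 7, 8))   # 504
    print(rpn(9, 7, 2))   # 126: the same failure with good monitoring
    ```

    The inverted detection scale exists precisely so that all three factors push the RPN in the same direction.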

    0
    #164451

    Putnam
    Participant

    Whatever works for your group and reduces confusion.  You can even give them alpha notations (A, B, C, …).  Your final S*O*D terms will look like 15A or 35B, but you’ll still be able to sort and rank them.  This might work best if your detection definition is a bit fuzzy or nonspecific.

    0
    #164452

    Ward
    Participant

    If the team is struggling with the concept that the worse the detectability, the lower the score, then don’t depend on them to assign the values. Let them assign the detectability using the subjective approach (more than once per day to once every 5-100 years) or the probability (>20% to …).

    0
    #164453

    Sarma Pisapati
    Member

    This might be confused with “Occurrence”. I am working on an FMEA related to software and hardware components, where systems-management monitoring tools help to detect the failure.
    The goal is to detect any component failure well before the damage occurs, or before the end user reports the problem.  I know that for some components this may not be possible, and we may have to use redundant components instead.

    0
    #164455

    Ward
    Participant

    My bad. I was multi-tasking during the response and addressed the wrong scale. For your team you could just ask them to determine the detectability from the following scale:
    10. Defect undetectable
    9. Units sporadically checked for defects
    8. Units systematically sampled and inspected
    7. All units manually inspected
    6. Mistake-proofing modifications after inspection
    5. SPC and manual inspection
    4. SPC w/ immediate reaction to out-of-control conditions
    3. SPC w/ 100% inspection surrounding out-of-control conditions
    2. 100% automatic inspections
    1. Defect is obvious and can be kept from reaching the customer.
    Let them agree to the characteristic on the right and do not even show them the number associated with it. Let them review the RPN when you are done, and it should make sense to them.
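    Ward’s “show the words, hide the numbers” approach can be sketched as a simple lookup: the team only ever sees and picks the descriptions, and the numeric rating is applied behind the scenes. A hypothetical sketch; the dictionary simply encodes the scale above.

    ```python
    # Hypothetical sketch of Ward's suggestion: the team picks a
    # description; the facilitator's tool supplies the hidden number.

    DETECTION_SCALE = {
        "Defect undetectable": 10,
        "Units sporadically checked for defects": 9,
        "Units systematically sampled and inspected": 8,
        "All units manually inspected": 7,
        "Mistake-proofing modifications after inspection": 6,
        "SPC and manual inspection": 5,
        "SPC w/ immediate reaction to out-of-control conditions": 4,
        "SPC w/ 100% inspection surrounding out-of-control conditions": 3,
        "100% automatic inspections": 2,
        "Defect is obvious and can be kept from reaching customer": 1,
    }

    def detection_rating(team_choice: str) -> int:
        """Translate the team's chosen wording into the hidden 1-10 rating."""
        return DETECTION_SCALE[team_choice]

    print(detection_rating("100% automatic inspections"))  # 2
    ```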

    0
    #164457

    BTDT
    Participant

    Sarma: I’ve worked with hundreds of teams and this is always the one scale on an FMEA that gives the team trouble. It seems logically backwards. I acknowledge this up front to make sure everyone understands why the scale is the way it is. I usually repeat this for each failure mode as we are working through the FMEA.
    Cheers, Alastair

    0
    #164458

    Ward
    Participant

    I concur with Alastair. I would rather the team try to understand the logic of the higher number for the lower detectability.

    0
    #164459

    Taylor
    Participant

    This has always worked for me:

    Detection
    Criteria: likelihood that the existence of a defect will be detected by process controls before the next or subsequent process, or before the product leaves the processing location.

    10  Almost Impossible: no known control(s) available to detect the failure mode.
     9  Very Remote: very remote likelihood current control(s) will detect the failure mode.
     8  Remote: remote likelihood current control(s) will detect the failure mode.
     7  Very Low: very low likelihood current control(s) will detect the failure mode.
     6  Low: low likelihood current control(s) will detect the failure mode.
     5  Moderate: moderate likelihood current control(s) will detect the failure mode.
     4  Moderately High: moderately high likelihood current control(s) will detect the failure mode.
     3  High: high likelihood current control(s) will detect the failure mode.
     2  Very High: very high likelihood current control(s) will detect the failure mode.
     1  Almost Certain: current control(s) almost certain to detect the failure mode. Reliable detection controls are known with similar processes.

    0
    #164460

    Taylor
    Participant

    I meant to add all the criteria:

    Severity
    Criteria: severity of the effect.

    10  Hazardous, without warning: may endanger the consumer. Very high severity ranking when a potential failure mode affects fitness for use and efficacy of the product and/or noncompliance with government regulation. Failure will occur without warning.
     9  Hazardous, with warning: may endanger the consumer. Very high severity ranking when a potential failure mode affects fitness for use and efficacy of the product and/or noncompliance with government regulation. Failure will occur with warning.
     8  Very High: major impact upon product flow and/or probability of acceptance. 100% of product may have to be scrapped. Product is nonfunctional. Consumer very dissatisfied.
     7  High: minor impact upon product flow and/or probability of acceptance. A portion of the product (<100%) may have to be scrapped. Product is functional, but at a reduced level (e.g., cosmetic defects). Consumer dissatisfied.
     6  Moderate: minor impact upon product flow and/or probability of acceptance. A portion of the product (<100%) may have to be scrapped. Product is functional, but at a reduced level (e.g., cosmetic defects). Consumer experiences discomfort.
     5  Low: minor impact upon product flow. 100% of the product may have to be reworked. Product is functional, but at a reduced level. Consumer experiences some dissatisfaction.
     4  Very Low: minor impact upon product flow. The product may have to be sorted and a portion (<100%) reworked. Cosmetic defects noticed by most consumers.
     3  Minor: minor impact upon product flow. A portion of the product (<100%) may have to be reworked. Cosmetic defects noticed by consumers.
     2  Very Minor: minor impact upon product flow. A portion of the product (<100%) may have to be reworked. Cosmetic defects noticed by discriminating consumers.
     1  None: no effect.

    Occurrence
    Criteria: probability of failure (possible failure rates), with the corresponding Ppk.

    10  Very High: > 1 in 2 (Ppk < 0.33)
     9  Very High: 1 in 3 (Ppk > 0.33)
     8  High: 1 in 8 (Ppk > 0.51)
     7  High: 1 in 20 (Ppk > 0.67)
     6  Moderate: 1 in 80 (Ppk > 0.83)
     5  Moderate: 1 in 400 (Ppk > 1.00)
     4  Moderate: 1 in 2,000 (Ppk > 1.17)
     3  Low: 1 in 15,000 (Ppk > 1.33)
     2  Very Low: 1 in 150,000 (Ppk > 1.50)
     1  Remote: < 1 in 1,500,000 (Ppk > 1.67)
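    The occurrence scale lends itself to a lookup from an observed failure rate to a 1-10 ranking. The band edges below are one reading of the table (a rate at or above an anchor takes that anchor’s rank); treat that as my assumption, not part of the original scale.

    ```python
    # Sketch of the occurrence scale as a rate-to-ranking lookup.
    # Band-edge behaviour (>= takes the anchor's rank) is an assumption.

    OCCURRENCE_ANCHORS = [          # (failure rate, ranking), descending
        (1 / 2,       10),
        (1 / 3,        9),
        (1 / 8,        8),
        (1 / 20,       7),
        (1 / 80,       6),
        (1 / 400,      5),
        (1 / 2_000,    4),
        (1 / 15_000,   3),
        (1 / 150_000,  2),
    ]

    def occurrence_ranking(rate: float) -> int:
        """Return the occurrence ranking for a failure rate (failures per unit)."""
        for anchor, ranking in OCCURRENCE_ANCHORS:
            if rate >= anchor:
                return ranking
        return 1   # < 1 in 1,500,000: Remote

    print(occurrence_ranking(0.6))      # 10: worse than 1 in 2
    print(occurrence_ranking(1 / 400))  # 5
    print(occurrence_ranking(1e-7))     # 1: remote
    ```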


    0
    #164461

    Sarma Pisapati
    Member

    I don’t know who started this confusing terminology. It will be a real pain when thousands of items are to be evaluated by various groups. I think the scale and terminology should be consistent, so that generations down the road are comfortable providing the input. To keep a low number for low and a high number for high throughout the analysis, I would recommend two options: a) with DETECTABILITY terminology, RPN = S*O/D; b) with UNDETECTABILITY terminology, RPN = S*O*D.
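    A caution on the two options: they are not just relabelings of each other. Assuming undetectability is mapped as U = 11 - D (my mapping; the thread does not define one), the two formulas can rank the same pair of failure modes in opposite order. A quick sketch:

    ```python
    # Sketch comparing the two proposed formulas. The mapping U = 11 - d
    # for turning detectability d (10 = best detection) into an
    # undetectability rating is my assumption, not from the thread.

    def rpn_divide(s: int, o: int, d: int) -> float:
        """Option a: detectability in the denominator."""
        return s * o / d

    def rpn_undetect(s: int, o: int, d: int) -> int:
        """Option b: multiply by undetectability U = 11 - d."""
        return s * o * (11 - d)

    x = (8, 8, 8)   # severe, frequent, well detected
    y = (3, 3, 1)   # mild, rare, almost never detected

    # Option a ranks y above x; option b ranks x above y.
    print(rpn_divide(*x), rpn_divide(*y))      # 8.0 9.0
    print(rpn_undetect(*x), rpn_undetect(*y))  # 192 90
    ```

    So the choice between a) and b) changes the priority order itself, not just the labels on the scale.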

    0
    #164462

    BTDT
    Participant

    Chad: It’s always strange to ask a team to find out how many defects are NOT detected. It’s like asking how many people have successfully embezzled money from the company.
    Your list is a nice scale for detectability. We usually modify the scale for the particular project using the worst case (10) and the best case (1) to get a good range. We can sometimes define the rating using scenarios like: “If you processed 1,000 claims each week and you somehow knew 100 had missing information, how many times would someone pick them up at this process step? Would it be 1, 3, 10, 30, 100?”
    We can refine some of the numbers by comparing two similar ratings and asking, “Of these two failure modes, which is likely to be detected better (or worse)?” You can sometimes get some information using the number of defects reported (detected) at various process steps.
    Cheers, Alastair

    0
    #164463

    Mikel
    Member

    In an area where extensive process and customer data is available, the rating is an escape rate based on data. It’s not always available, but any company with extensive warranty exposure should have the data. You may have to dig it out of the accounting records.
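    One way to turn that kind of warranty or field data into a detection rating is to compute an escape rate (defects that reached the customer divided by all defects produced) and band it onto the 1-10 scale. The band edges below are purely illustrative assumptions, not from any standard:

    ```python
    # Illustrative sketch: derive a detection rating from an escape rate.
    # escaped  = defects that reached the customer (e.g. warranty claims)
    # produced = total defects found anywhere in the process, plus escapes
    # The percentage bands are made-up assumptions for this sketch.

    def detection_from_escape_rate(escaped: int, produced: int) -> int:
        if produced == 0:
            raise ValueError("no defect data to rate against")
        rate = escaped / produced
        bands = [          # (max escape rate, detection rating)
            (0.0001, 1),   # essentially nothing escapes
            (0.001,  2),
            (0.01,   4),
            (0.05,   6),
            (0.20,   8),
        ]
        for upper, rating in bands:
            if rate <= upper:
                return rating
        return 10          # most defects escape: effectively undetectable

    # 12 warranty escapes out of 4,000 defects, about a 0.3% escape rate
    print(detection_from_escape_rate(12, 4000))  # 4
    ```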

    0
    #164464

    Mikel
    Member

    Wow, a lot of talk about something simple. The rating represents risk: the higher the rating, the greater the risk.

    0
    #164466

    Ward
    Participant

    I agree. It really is pretty simple. I get concerned when we feel it is necessary to “dumb it down”. The team members might hurt a little while the learning is occurring, but that’s not a bad thing.
    “Every day is an opportunity to learn something new.”

    0
    #164468

    Sarma Pisapati
    Member

    Yes, it is simple for you and me. But with several groups in a real-world situation, it is prone to errors. The debate is about terminology. The term “Detectability” and the number assigned to it on the chart are leading to confusion.

    0
    #164470

    Mikel
    Member

    Yeah, I understand.
    That term “risk” is a tough one to get one’s head around.

    0
    #164471

    Taylor
    Participant

    Alastair, I put together these detection criteria based on several sources and molded them into one that “works for me”. I agree with what you say, but I had to determine what the team I was dealing with regularly could comprehend.
    Having a reference to look at not only cleared up confusion about the rating, but also created thought around the number that was assigned, and I found it very helpful.

    0
    #164472

    Plegiarized?
    Participant

    You developed those?
    Looks like almost a straight rip-off of AIAG.

    0
    #164473

    Taylor
    Participant

    I didn’t say I developed these. What I said was: “I put together these detection criteria based on several sources and molded them into one that ‘works for me’.”
    Several sources being key. Plegiarized? I guess. Or maybe just resourceful… Yes, some came from AIAG,
    some came from Six Sigma Academy (2002 GOAL/QPC),
    some came from Lean Six Sigma by Michael L. George (pp. 190-191),
    and other confidential and proprietary Tyco International Ltd. training documents for week #1 GB.
    Good day.

    0
    #164474

    Dr. Scott
    Participant

    Sarma,
    Just use some sort of metaphor to explain it to your people. Something like one of those cop shows on TV: what is the likelihood we can catch (detect) a thief (defect) based on the investigative process and resources we have in the police force (inspection)? The more in doubt we are that we can identify the thief, the higher the risk that another crime will be committed by him/her. Therefore the risk number (RPN) should go up.
    Try something like that, maybe it will work.
    Good Luck,
    Dr. Scott

    0
    #164476

    Plegiarized?
    Participant

    You did a lot of work to come up with something that was already in AIAG. Your detection is no better and no more helpful than theirs.

    0
    #164477

    Brandon
    Participant

    Someone may want to plagiarize a dictionary and correctly spell the word.

    0
    #164489

    Ward
    Participant

    I was so resisting the urge to post that same response.

    0
    #164491

    Brandon
    Participant

    Pete – my will power is less than yours. Oh well.

    0
    #164626

    Long
    Member

    It seems to me many of the posts in this forum have unstated assumptions about the type of FMEA.
    I’m a design engineer for implantable medical devices (high risk).  We use FMEA on every product.  We have team members from engineering, marketing, clinical/nursing and regulatory, and the team gets confused about this topic almost every time.  I have been working to define detectability based on the type of FMEA:
    Design FMEA – how detectable (think preventable) is the cause of this failure (line on the FMEA table)?  That is, if the cause is a poor weld, and welding is a new process for us, what will we do (mitigate) during design and development to prevent this?
    Process FMEA – how detectable (think preventable) is this manufacturing cause of this failure (line on the FMEA table)?  That is, if two parts are bonded during assembly with the wrong alignment to each other, what can be done during assembly to mitigate this?  Mitigation could be creating an alignment fixture, or requesting a design change so that the parts cannot be assembled in the wrong orientation.
    Application FMEA (User FMEA) – how detectable is an in-use problem by the user in his work environment, such that he can recognize the problem before it happens (or the equipment self-detects and gives the user warning with enough time to correct the situation)?  For us, this is a good way of evaluating risk when planning to allow use of our existing products in new user environments.  Example: using a general IV pump on newborn babies.  The maximum pumping rate can be so high that a day’s worth of drug is administered in 1 hour, killing the patient.  A mitigation might be a designed-in warning: the pump feeds back the settings and requests user confirmation because of the high pumping-rate request.  A product acceptable in one user environment can be risky when used in another environment.

    0
