Objective Recall Decisions

Six Sigma – iSixSigma Forums Old Forums General Objective Recall Decisions

Viewing 10 posts - 1 through 10 (of 10 total)
  • Author
  • #52361


    I am struggling with the current manner in which my company determines when and when not to issue a product recall.  The process involves performing a one-proportion test comparing (reported defectives) / (population size) to the AQL for the noted defect.  If the test rejects the null hypothesis that (% defective = AQL), we do not issue a recall.  If the null hypothesis cannot be rejected, we issue a recall.  I see several problems with this practice:
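    For concreteness, the decision rule can be sketched like this (a minimal sketch only; the exact one-sided binomial test, the alpha = 0.05, and the function names are my own illustrative assumptions, not our actual procedure):

    ```python
    from math import comb

    def lower_tail_pvalue(defectives, population, aql):
        """P(X <= defectives) when X ~ Binomial(population, aql)."""
        return sum(comb(population, k) * aql ** k * (1 - aql) ** (population - k)
                   for k in range(defectives + 1))

    def recall_decision(defectives, population, aql, alpha=0.05):
        """Reject H0 (% defective = AQL) in favor of % defective < AQL -> no recall;
        fail to reject -> recall."""
        p_value = lower_tail_pvalue(defectives, population, aql)
        return "no recall" if p_value < alpha else "recall"
    ```

    For example, 2 reported defectives in a lot of 1,000 against a major-defect AQL of 1.0% gives a lower-tail p-value well under 0.05, so the rule says "no recall"; 8 defectives against the same AQL does not, so the rule says "recall".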

    The denominator we are using is inconsistent.  In some cases we utilize an individual lot size and in others we use the total volume of product shipped.  Using the total volume of product shipped has a tendency to minimize the proportion defective and masks lots which have a higher than average % defective.
    Utilizing the entire population size represents a best case for my company because we have no idea how much product each customer has used.  This has a tendency to deflate the proportion defective.
    I am not sure that the AQL is an appropriate point of comparison, since it represents a critical point for Type I sampling error and in this case may be misused as some form of process target.
    The test is only performed if our complaints department detects a trend for a particular item or lot and will be recommending it for CAPA.  While it may be true that lots with a detected trend have a higher likelihood of satisfying the criteria for recall, this has never been thoroughly studied.  If this one-proportion test is our criterion for recall, I would think it would be better to recalculate it every time we receive a complaint against a lot (at least until we have proven that the CAPA trend criterion is correlated with failures to reject the null hypothesis).
    I am not sure how effective proportion tests are when you are dealing with defect levels on the order of 10^-4 or 10^-5.
    I am not sure that it is appropriate to use a sample statistic when you are dealing with data from a population.  On one hand, we may be being harder on ourselves, because applying a confidence interval increases our likelihood of a recall (as opposed to a direct comparison of the reported % defective against the AQL).  On the other, I am not sure that it is statistically valid in this case.
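    To make the denominator concern concrete, here is an illustrative comparison (hypothetical numbers, not our actual data) of the same 20 complaints tested against the critical AQL of 0.065%, first against total shipped volume and then against the single lot they came from:

    ```python
    from math import comb

    def lower_tail_pvalue(defectives, n, aql):
        """P(X <= defectives) when X ~ Binomial(n, aql)."""
        return sum(comb(n, k) * aql ** k * (1 - aql) ** (n - k)
                   for k in range(defectives + 1))

    # 20 complaints pooled over 1,000,000 units shipped: the AQL would predict
    # about 650 defectives, so observing only 20 looks very "good" ...
    pooled = lower_tail_pvalue(20, 1_000_000, 0.00065)

    # ... but if all 20 came from one lot of 5,000 (about 3.25 expected at the
    # AQL), that lot is clearly worse than the AQL and the test cannot reject.
    per_lot = lower_tail_pvalue(20, 5_000, 0.00065)

    print(pooled < 0.05)   # True  -> "no recall" under the current rule
    print(per_lot < 0.05)  # False -> "recall"
    ```

    The same complaints give opposite decisions depending only on which denominator is chosen, which is exactly the masking I am worried about.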
    So my question to the forum is twofold:

    What are your opinions on the process I have outlined above and would you have any recommendations for improvement?
    What are some of the “world class” recall criteria that you may have seen and would be willing to share?
    I appreciate your time and thank you in advance for any feedback you may have.



    You’re pissing away a lot of money looking at when to recall a product (and pissing off a lot of customers).  Why aren’t you looking at controlling the inputs to the process so that you ensure you have an acceptable output?



    MBBinWI –
    I appreciate your concern and can assure you that we are working towards improvement of our processes every day, and that part of those initiatives is to move the control of our processes as far upstream as possible.



    Ordinarily I don’t like to do this, but I am going to give this post one bump on the outside chance I may be able to get some constructive feedback.  I apologize to the forum for the inconvenience.






    ha ha ha ha it is to laugh.



    Jsev607,
    This has got to follow the format of an FMEA.  There are two reasons for a recall:
    1) The severity is where you are at risk of injuring or killing your customers, or of causing your customers to run the risk of penalties and fines.  Where you can identify that your product is at fault is reason for a recall.  Two confirmed failure analyses should be adequate to trigger a recall.  Forget the AQL nonsense for this one.
    2) The costs, either in direct warranty or in loss of confidence and/or market share, are greater not to do a recall than they are to do the recall.  This should be a matter of identifying the failure and what assumptions were made when setting the warranty and financial reserves.  If you exceed the threshold over an agreed-to period of time, the data should be elevated to the head of your business and to the head financial person for your business.  This should be written into a procedure as part of your Quality System.
    Both scenarios call for your FMEA to be a living document, updated with field information and reviewed at least monthly.



    Please let us know your product line, so I can be sure I don’t buy any.
    A product recall is a function of the product failure mode criticality and also the likelihood of occurrence.
    The decision to recall can never be a statistical “coin toss” for reputable companies.
    We base product recalls on the nature of the defect, the potential failure mode, and the likelihood of occurrence.  Obviously, if it is a safety issue the defect would always be recalled.



    Gary –
    In a manner of sorts it does stem from the FMEA.  The FMEA defines the defect severity.  The AQL which is applied stems from that severity as follows:

    Critical:  0.065%
    Major: 1.0%
    Minor: 2.5%
    Note:  That is not an attempt to defend the process, just explain how it currently works.
    Regarding point 1:  What you have proposed here is a recipe to put every company in the world out of business.  By the logic you have created, every Six Sigma process would need to have all of its product recalled, since 3.4 defects per million (assuming you’ve shipped a million) would be more than 2.  Every car would need to be taken off the road, since more than 2 people die in car accidents a year.  Every shred of food that someone may have an allergic reaction to should be recalled, since more than 2 people die from food allergies a year.
    The reality is that the size of the population needs to be taken into account.  If it’s 2 out of 10, I agree.  If it’s 2 out of 50 billion, I do not.
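    To show the scale effect with a quick Poisson approximation (hypothetical numbers and function names, purely for illustration):

    ```python
    from math import exp

    def poisson_lower_tail(k_obs, lam):
        """P(X <= k_obs) when X ~ Poisson(lam); approximates
        Binomial(n, p) with lam = n * p."""
        term = exp(-lam)          # P(X = 0); underflows harmlessly to 0.0 for huge lam
        total = term
        for k in range(1, k_obs + 1):
            term *= lam / k       # P(X = k) built from P(X = k - 1)
            total += term
        return total

    # 2 defectives out of 10 against a 1.0% AQL: only 0.1 expected,
    # so observing 2 gives a lower-tail p-value near 1 -> cannot reject -> recall.
    print(poisson_lower_tail(2, 10 * 0.01))

    # 2 defectives out of 50 billion: 500 million expected at the same AQL,
    # so observing 2 gives a p-value of essentially 0 -> reject -> no recall.
    print(poisson_lower_tail(2, 50_000_000_000 * 0.01))
    ```

    The same count of 2 defectives flips the decision entirely once the population size is accounted for.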
    Regarding point 2:  Cost does not factor into this equation directly.  We work in a regulated environment and as such the regulatory body doesn’t give a hoot what the cost of an issue is.



    Ron –
    At first read, it does seem a little astounding, and I appreciate your concern.  I share some of those concerns, which is why I created the post.  While I agree that the decision to recall should not be based on a coin toss, I can assure you that the decision definitely would include a form of analysis that is statistical in nature.  Your own method includes an assessment of the “likelihood of occurrence,” but that is a probabilistic term, and probability and statistics are intimately linked.
    Applying the severity x occurrence criteria is typically done to determine when the risk is unacceptable and corrective action is required.  Does this, however, mean that every time corrective action is required, a recall is required?  Are these always the same point?  How do you distinguish between “fix it for next time” and “get it back now”?


The forum ‘General’ is closed to new topics and replies.