Pareto Analysis

#31916

    Tom M
    Member

    Hello,
    I am currently working on a defect-reduction Six Sigma project in an injection molding cell. I am in the process of collecting scrap data from the operators on an ongoing basis for Pareto analysis. I am hoping this will show me what the “biggest hitters” are among the reasons for scrap, which mold is producing the most scrap, which operator, shift, etc. This should start to point me in the right direction.
    My question is how much of this data should I analyze at a time for the Pareto analysis? As I said, I am collecting this on an ongoing basis every day. Where is a good point to break it off for the Pareto analysis? Should I analyze it by day, week, or month? So far I have about three weeks’ worth of this data.
    Any assistance and feedback would be greatly appreciated. Thanks very much.

    #84617

    Loehr
    Member

    Hi Tom,
    Before creating the Pareto diagram, you should check to make sure that your process is in a reasonably good state of control. It sounds like you are working with attribute data, so you have the choice of either a c, u, np, or p chart. In addition, this chart will reveal the presence of any time-to-time variation in your process.
    Once the chart indicates that the process is in a good state of control, you can use the data from this chart to create the Pareto diagram.  Don’t include the data from any out-of-control subgroups as you want your team to focus on the reasons for common-cause variation, not assignable-cause variation.
    Hope this helps.

    #84633

    Tom M
    Member

    How much data would you recommend that I analyze for these types of control charts? Should I analyze all of the data that I have collected so far? Also, wouldn’t you recommend that I look at both the assignable cause variation and the special cause variation together? Thanks for the feedback. 

    #84634

    abasu
    Participant

    For the Pareto analysis, you can lump special and common causes of defects together.
    As far as the amount of data to collect – make sure you have captured as many sources of variation as possible in your analysis, such as different material lots, operators, day and night shifts, mold-to-mold, cavity-to-cavity, etc.

    #84680

    Loehr
    Member

    Hi Tom,
    To create a p chart with statistically valid control limits, you should be averaging at least 3 nonconforming units per subgroup. Thus, if your estimate of p is 1% (.01), then you should be taking a minimum subgroup size of 300 (.01 x 300 = 3).
    n > 3 / p = 3 / .01 = 300
    Then, you should have at least 25 subgroups to calculate the control limits. Once you get this process in a good state of control, the data from the in-control subgroups will be sufficient for your Pareto analysis.
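    As a rough illustration of these calculations, here is a minimal sketch in Python; the subgroup counts are invented for illustration (they average just over 3 nonconforming units per subgroup, consistent with the rule of thumb above), not data from this project:

```python
import math

# Invented example: 25 subgroups of n = 300 parts each, with the number
# of nonconforming parts found in each subgroup (illustrative data only).
subgroup_size = 300
defectives = [4, 2, 3, 5, 1, 3, 4, 2, 6, 3, 2, 4, 3, 5, 2,
              3, 4, 1, 3, 2, 5, 3, 4, 2, 3]

# Overall fraction nonconforming across all subgroups
p_bar = sum(defectives) / (subgroup_size * len(defectives))

# Classical 3-sigma p-chart limits for a constant subgroup size n
sigma_p = math.sqrt(p_bar * (1 - p_bar) / subgroup_size)
ucl = p_bar + 3 * sigma_p
lcl = max(0.0, p_bar - 3 * sigma_p)  # the fraction cannot go below zero

print(f"p-bar = {p_bar:.4f}, UCL = {ucl:.4f}, LCL = {lcl:.4f}")

# Flag out-of-control subgroups; per the advice above, their data would be
# excluded from the Pareto analysis
for i, d in enumerate(defectives, start=1):
    p_i = d / subgroup_size
    if not (lcl <= p_i <= ucl):
        print(f"subgroup {i} out of control: p = {p_i:.4f}")
```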
    If you have good confidence in how the historical data were collected (in rational subgroups, well annotated, etc.), then it would be OK to start your chart with these measurements. However, it’s better to start with new data since you know exactly how (and when) they were gathered.
    I define special-cause and assignable-cause as being the same, that is, their causes arise from things outside of the regular process.  Common-cause variation originates from sources of variation inherent to the process, usually coming from one of the five Ms (machine, manpower, etc.). 
    The control chart will help separate these two types, as out-of-control points are typically the result of assignable causes of variation appearing unexpectedly in the process. The root causes of these process upsets are most often resolved by people close to the process (the operator, the departmental supervisor) in a relatively short period of time (minutes or hours).
    By definition, common-cause variation is there all the time. Its sources are harder to uncover and resolve. These are almost always the ones that Six Sigma project teams address, because these problems require more problem-solving expertise and time (three to four months) than the local people have available. In many cases, their solution involves permanently changing some aspect of the process (a new gage, an overhauled machine, a switch of suppliers). Most operators don’t have the authority to make these kinds of process changes.
    This is why you should not combine both assignable and common on the same Pareto chart.  They are different types of problems requiring different problem-solving approaches.
    Hope this helps.

    #84721

    Tom M
    Member

    Ross,
    Thank you very much for the great feedback. I am still a little bit confused with your reply (Sorry, I’m still learning!). Could you expound a little bit on what you mean by subgroups? Also, you stated that if you estimate that p = 1% then you should have a subgroup of 300. When you say “p=1%” do you mean the defective percentage? My last question is regarding your mention of needing at least 25 subgroups to calculate the control limits. Could you talk about that a little more as well? Sorry for all the questions, but I really appreciate everyone’s help.
    Thanks

    #84731

    krishnadas
    Participant

    Six Sigma is characterized by producing early results. The analysis may be done on the basis of data collected over a period of 3 days. Suitable preventive actions may be taken while the data collection continues. A Pareto chart may be drawn every 3 days, and the cycle should continue. This can lead to a significant defect reduction on the way to Six Sigma.
    S.Krishnadas

    #84737

    Fontanilla
    Participant

    Tom, I think you have enough data. It sounds like you’re just starting this project; if this is true, I would hold off on the freaking control charts and make the assumption that the process is out of control.
    Now, about the data: I hope you had the opportunity to get as many stratifications (i.e., machine, shift, operator, etc.) as possible. This will help you drill into your sources of defects; otherwise, guess what? You’re back to gathering data!

    #84742

    Tuell
    Participant

    How much scrap history do you have for your analysis? Hopefully it’s in a database somewhere, but if it isn’t, you’ll only be able to determine some of the scrap issues for the snapshot in time that you are reviewing. I’ve found that the manufacturing associates where I work and I sometimes describe the same problem in different ways, so be wary.

    #84745

    Tom M
    Member

    I have a lot of scrap history for this analysis. However, I question the validity of the data collection process. That is why I created a new data collection plan and just began collecting the new data about three weeks ago. I’m just wondering if I should look at the data a week at a time, a month at a time, etc., for the Pareto analysis?

    #84747

    Nathan Lester
    Participant

    Tom,
    One of the most useful tools for collecting and verifying continuous data is a Paynter chart. The Paynter chart will show you a trend over time along with containment and corrective actions. It is a very beneficial, but often underutilized, tool to assist with data collection and problem tracking.
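    For what it’s worth, a Paynter chart is essentially a matrix of defect counts by category and time period. A minimal sketch in Python with pandas, using made-up defect names and weeks (illustrative data only, not from this project):

```python
import pandas as pd

# Hypothetical scrap log: one row per scrapped part (invented data)
log = pd.DataFrame({
    "week":   ["W1", "W1", "W1", "W2", "W2", "W3", "W3", "W3"],
    "defect": ["short shot", "flash", "short shot", "flash",
               "sink marks", "short shot", "flash", "flash"],
})

# Paynter-style matrix: defect categories down the side, time periods
# across, so each category's trend is visible before and after any
# containment or corrective action
paynter = pd.crosstab(log["defect"], log["week"], margins=True)
print(paynter)
```

    Laying the data out this way makes it easy to see whether a category’s counts actually drop in the periods after a corrective action is put in place.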

    #84749

    Tom M
    Member

    Nathan,
    The data that I am collecting is attribute data. Good vs. Not Good. I will keep that in mind for continuous data though. Thanks.

    #84756

    Mike Nellis
    Participant

    Hello Tom!
    Hopefully by now you have seen where the largest amount of scrap is coming from. Hopefully your data is pointing to a particular shift or operator, because it is easier and cheaper to change behavior if there is someone not following procedures or tweaking the machine when they shouldn’t. If your data points to a particular machine, you can start focusing on things like mold temperature variation, etc. Good luck, and let us know where the scrap is coming from.
    -Mike

    #84763

    Hersey
    Participant

    Tom,
    A couple of notes from a guy who is currently a Dir. of Quality and has been in charge of Engineering, Quality, and Operations at a very large custom molding plant… some of the other guys are on, and some a bit off, yet with good intentions.
    A) You probably have enough data to Pareto and see what defects you need to eliminate.
    B) You probably don’t need to get too wrapped up in attribute control charts. The only acceptable outcome is zero defects, right? So what if you’re in control for a process that is creating defects? Most likely you’ll actually find what’s truly causing the process to be out of control with good investigation and problem solving. Variables SPC may help; analysis of cavity pressure will truly tell you what’s going on, however (http://www.rjg.com). In control or not, your job is to eliminate defects. Your Pareto will be more powerful than an attribute control chart.
    C) The real answer… ideally you want a system that will capture the defect info forever, so it can constantly be monitored and reacted to, creating Paretos periodically as needed, and ultimately transferring this responsibility to an operations team, not a black belt. You’ll need some people with good process, tooling, and often material knowledge to eliminate the issues.
    Good luck.  

    #84768

    Choudhari
    Participant

    Has the precise reason for each scrapped component been recorded?
    Approximately how many readings are available in a day of 3 shifts?

    #84769

    Tom M
    Member

    Yes, the precise reason for each scrapped part has been recorded. I have developed a data collection form specific to this job for this project.
    I’m not sure I understand your second question. We currently run approximately 400 – 500 parts per shift in this cell. We are running around 14% scrap in the cell right now. That’s about 55-70 scrapped parts on average per shift. I hope this answered your question.
    Thanks for all of the feedback everybody. It really helps!!!!

    #84774

    Reinaldo Ramirez
    Participant

    “My question is how much of this data should I analyze at a time for the Pareto analysis? As I said, I am collecting this on an ongoing basis every day. Where is a good point to break it off for the Pareto analysis? Should I analyze it by day, week, or month?”
    In addition to the other responses, you should do a root cause analysis (a fishbone, as it’s the simplest) in order to learn whether there is a possible correlation between some causes. First fishbone the process, and then do a correlation analysis. Maybe these two “old” techniques will simplify your work (less data, but key data).
    You could use the fishbone each day to update the situation, associating it with the control chart used.
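    Since scrap reasons and stratification factors here are categorical rather than continuous, one concrete stand-in for a formal correlation analysis is a chi-square test of independence on a contingency table, for example defect type versus shift. A minimal sketch in Python (scipy is assumed to be available; all counts are invented for illustration):

```python
from scipy.stats import chi2_contingency

# Hypothetical contingency table: scrap counts by defect type (rows)
# and shift (columns) - invented numbers, not real project data
table = [
    [30, 12],   # flash:      day shift, night shift
    [10, 25],   # short shot: day shift, night shift
    [15, 14],   # sink marks: day shift, night shift
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Defect mix appears to depend on shift - worth drilling into.")
```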

    #84777

    James M. Hollingsworth, MHA
    Participant

    I think weekly summaries should work.  Also, pivot tables are a great way to present your analysis in a manner that decision makers can easily understand.
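    A minimal sketch of such a pivot table in Python with pandas; the column names and values are invented for illustration:

```python
import pandas as pd

# Hypothetical scrap records (invented columns and values)
scrap = pd.DataFrame({
    "week":     ["W1", "W1", "W2", "W2", "W3", "W3"],
    "shift":    [1, 2, 1, 2, 1, 2],
    "reason":   ["flash", "short shot", "flash", "flash",
                 "sink marks", "short shot"],
    "quantity": [12, 8, 15, 9, 4, 11],
})

# Weekly summary: scrap quantity by reason, broken out by week and shift,
# much like an Excel pivot table
summary = scrap.pivot_table(index="reason", columns=["week", "shift"],
                            values="quantity", aggfunc="sum", fill_value=0)
print(summary)
```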

    #84797

    Choudhari
    Participant

    Tom,
    Taking an avg. of 63 defects/shift implies >180 defects/day,
    and about 900 precise defect recordings/week.
    You have written that about 3 weeks of data is available. I feel all this data is adequate for conducting the Pareto.
    bye
    datta

    #84801

    R Duquette
    Participant

    Suggest you perform an MSA with at least 30 units. After the MSA is passed, use the data to determine sample size from the tolerance you set and the standard deviation of the MSA data.
    Hope this helps.

    #84811

    Guru Prasad.S
    Participant

    Hello, friend.
    Hey, I’m also working on Pareto analysis; I think we can share a lot.
    If you can give me your e-mail ID, I can reply to you from my office mail, so that you can reply to me at my office directly.
    My Yahoo ID is above; mail me at that ID.
    Hoping that you will reply,
    Take care

    #84813

    Tom M
    Member

    That sounds great! I would be happy to exchange information. My e-mail address is [email protected]
    Thanks

    #84829

    Et Al
    Participant

    Depending on the level of production and interest in improvement in a given area, it’s sometimes helpful to pick from one or all of the following three-pronged approaches, keeping it short and sweet but accurate and complete. For more impact, try Paretos based on the cost of defects or defective units, but don’t go crazy with financial analysis: an approximate per-unit cost works well for normalization purposes (see the sketch after this list).
    1) Daily analysis by line personnel (operators, techs, supervisor, engineer) each morning, to include a very quick run-down of…
    a. Defect rates by day for the past month, identifying both improving and declining trends
    b. High drivers (top 20%) from a Pareto chart of yesterday’s data, drilling down as much as possible, generating Paretos of possible causes in machine, shift, cavity, material, operator, etc.
    c. Review of actions taken/implemented in the past month or on the parts processed in the past month, and actions initiated but still ‘open’: their status and, most importantly, their effectiveness (given the above)
    2) Weekly analysis by management (supervisors, engineers, managers) each Monday morning, to include a review of the above to identify possible resource and systemic issues or the need for ‘special’ projects…
    a. Defect rates by week for the past month, identifying both improving and declining trends
    b. High drivers (top 20%) from a Pareto chart of last week’s data, drilling down as much as possible, generating Paretos of possible causes in machine, shift, cavity, material, operator, etc.
    c. Review of actions taken/implemented in the past month or on the parts processed in the past month, and actions initiated but still ‘open’: their status and, most importantly, their effectiveness (given the above)
    3) Monthly analysis by management AND line personnel (operators, techs, supervisors, engineers, managers) during the 1st week of each month, to include a review of the above to identify progress and develop goals for the next several weeks…
    a. Defect rates by month for the past year, identifying both improving and declining trends
    b. High drivers (top 20%) from a Pareto chart of last month’s data, drilling down as much as possible, generating Paretos of possible causes in machine, shift, cavity, material, operator, etc.
    c. Review of actions taken/implemented in the past month or on the parts processed in the past month, and actions initiated but still ‘open’: their status and, most importantly, their effectiveness (given the above)
    Have also seen teams add reviews three times a day on the line, as well as quarterly and yearly reviews as ‘senior’ management (ISO) reviews.
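    Here is the cost-weighted Pareto sketch mentioned above, in Python with pandas and matplotlib; the defect names, counts, and per-unit costs are all invented for illustration:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical defect counts and approximate per-unit costs (invented data)
df = pd.DataFrame({
    "defect":    ["flash", "short shot", "sink marks", "splay", "burns"],
    "count":     [120, 90, 40, 25, 10],
    "unit_cost": [0.50, 2.00, 1.25, 0.75, 3.00],  # rough cost per scrapped part
})

# Normalize by cost, then order biggest hitter first
df["total_cost"] = df["count"] * df["unit_cost"]
df = df.sort_values("total_cost", ascending=False).reset_index(drop=True)
df["cum_pct"] = df["total_cost"].cumsum() / df["total_cost"].sum() * 100

# Classic Pareto chart: cost bars plus a cumulative-percent line
fig, ax1 = plt.subplots()
ax1.bar(df["defect"], df["total_cost"])
ax1.set_ylabel("scrap cost")
ax2 = ax1.twinx()
ax2.plot(df["defect"], df["cum_pct"], marker="o", color="red")
ax2.set_ylabel("cumulative %")
ax2.set_ylim(0, 110)
plt.title("Cost-weighted Pareto of scrap defects")
plt.tight_layout()
plt.show()
```

    Note in this made-up data that the ranking by cost (short shot first) differs from the ranking by raw count (flash first), which is exactly why cost normalization can change where you focus.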

    #84835

    John M
    Participant

    What’s your opinion about man’s role (one of the 4 Ms) in this Pareto concept?
    What’s the relationship between the Pareto concept and the terms “customer driven” or “product driven”?
    And which “quality paradigm” is used in this context?

    #85050

    mcintosh
    Participant

    Hey man, thank you very much for your reply.
    I was out of town, so I could not mail; sorry for that. From now on I will be mailing you from my office.
    Thank you. Let’s go into the topic in detail in the next mail. Bye.
    Take care,
    Guru

    #85193

    Philip Whateley
    Participant

    The key with Pareto analysis is to be sure that the system of causes is stable over time. If it’s not, then you risk getting a different Pareto sequence each time you do the analysis, and you could then find, for example, that if you select the top problem based on month one’s data, to work on in month two, it may no longer be the most significant problem! This often shows up when you aggregate the Pareto over a number of months, when the chart will appear to be very “flat”. This is why it is important to check that the Pareto does approximate to the 80/20 rule. Take a look at some of the books by Don Wheeler of SPC Press (Understanding Statistical Process Control is good)
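    One simple way to check this kind of stability is to rank the defect categories within each time period and see whether the ordering holds. A minimal sketch in Python with pandas, using invented monthly counts:

```python
import pandas as pd

# Hypothetical monthly defect counts (categories and numbers are made up)
counts = pd.DataFrame({
    "month1": {"flash": 40, "short shot": 35, "sink marks": 30, "splay": 28},
    "month2": {"flash": 25, "short shot": 45, "sink marks": 20, "splay": 38},
})

# Rank categories within each month (1 = biggest hitter)
ranks = counts.rank(ascending=False)
print(ranks)

# If the ordering jumps around from month to month, the system of causes is
# not stable, and a single aggregated Pareto will look flat and misleading.
if (ranks["month1"] != ranks["month2"]).any():
    print("Pareto ordering changed between months - check process stability first.")
```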

    #85281

    Manoj Bhardwaj
    Participant

    Dear Friend, 
    The cut-off period for data analysis should be decided based on the frequency and severity of the defect rate and the desired specifications for the acceptance of defects.
    If the “defects” are occurring, say, once a week, you may collect data for, say, 30 weeks. If the frequency of “objectionable” defects is, say, once a day, you may need to collect data for a minimum period of 35 days.
    I request you and other friends to give feedback at [email protected]
    Manoj Bhardwaj

    #86015

    Et Al
    Participant

    Ahh BARB, the science of vocology

    #86028

    Wagh
    Participant

    Hi Tom,
    1. We did check for statistical control and found that the defect % was outside the 3-sigma limits on one day only.
    2. Pivot tables are great. We have used them for our Pareto of attribute data.
    3. 500 products produced per day (all products checked each day, I presume) with a 14% scrap rate, and you have 3 weeks of data. Get another week’s data and then do the Pareto analysis.
    4. Try and see if the findings are similar for another cell (a group of machines, I presume).
    We did find, though, that for the same product the big hitters were different for different sewing lines (cells, in your case). It is an assembly line, and so the task was to establish which operation generates the maximum defects and what the nature of this defect is in each line.
    We ranked the top 7 big hitters from 6 different lines and took up the ones that appeared in all the lines.
    Having done that, we want to do a C&E matrix and an FMEA for that particular assembly operation to reduce the defects.
    Hope the above was useful.

