Six sigma associated to the efficiency of a production line
 This topic has 9 replies, 4 voices, and was last updated 19 years, 4 months ago by Wagh.


March 27, 2002 at 3:42 pm #29113
Marc Fernandez
Hello,
I’ll be very pleased if someone can help me to calculate the sigma level of a production line.
I have the efficiency data by shift and the minutes the line has been working each shift, so I know the number of minutes in each efficiency interval, e.g. 3,000 min between 60% and 65% efficiency, 2,500 min between 65% and 70%, and so on.
I want to know whether I should define a lower efficiency limit and treat the data as variable (so defects are the minutes under the limit), or treat the data as attribute (so defects are the minutes we need to reach 100%; e.g. an efficiency of 65% means 35% of the total minutes are defects).
Kind regards.
Thank you very much.
March 27, 2002 at 4:53 pm #73724
Mike Carnell
Marc,
Are the “number of minutes available in each efficiency interval” equal?
March 27, 2002 at 5:22 pm #73727
Marc Fernandez
Thank you for your interest, Mike.
The analysis I’ve made is to allocate the number of minutes with a given efficiency, 63.9% for example, into the respective interval (60%-65% in the example). These minutes come from the shifts: shift 1, 480 minutes at 53% efficiency; shift 2, 350 minutes at 76% efficiency; and so on. I then calculated the sum of minutes of all the shifts in each interval, e.g. 350 min in the 75%-80% interval.
Maybe I’ve made my problem clear to you; if not, excuse me.
Regards.
Marc (Barcelona, Spain)
March 27, 2002 at 6:36 pm #73729
Mike Carnell
Marc,
Barcelona is one of my favorite cities in Europe. The beach is great and the Picasso museum is great. The subway made it really easy to get around. Miguel Hernandez (from Mexicali) and I taught the Analyze and Control week there in the Hilton Hotel. We used simultaneous translation; that was an experience.
The reason I asked about the fixed size was that if first shift was always 480 minutes and second was always 350, we could set up the analysis using minutes. Since we don’t have fixed intervals, we will need to use the percentages (leave them in decimal form). The spec is 1.0. Try running the data that way.
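A minimal sketch of what “leave them in decimal form” looks like in Python; the two shifts are the ones Marc quoted, and the dictionary layout is just one illustrative way to hold the data:

```python
# Each shift contributes one observation: its efficiency as a decimal.
# Shift lengths vary, so we analyze the ratios, not the raw minutes.
shifts = [
    {"minutes": 480, "efficiency_pct": 53},  # shift 1 from Marc's post
    {"minutes": 350, "efficiency_pct": 76},  # shift 2 from Marc's post
]

# Leave the efficiencies in decimal form; the spec (target) is 1.0.
observations = [s["efficiency_pct"] / 100 for s in shifts]
print(observations)  # [0.53, 0.76]
```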
Did that make sense?
March 28, 2002 at 5:05 am #73744
Mike Carnell
Marc,
I may have you headed in the wrong direction. Using the decimal equivalents is the correct thing to do, since the number of minutes available is moving around. Part of the problem in stating line capability in terms of sigma is going to be misinformation: if you are at 6 sigma, it is going to be a bad thing, because the smaller the number, the better your performance.
Where I wanted to go was an index we used to use called Cpt, capability to a target. It is basically the Taguchi loss function: it states your capability in terms of your ability to hit a target, which is what we want. The problem is that the target and the upper limit are the same number. You have to have a USL, because the numerator calculation is USL - LSL, and we are bounded by 1.0.
If we turn it upside down and use downtime to a target of 0, it is the same problem.
I will get you an answer by tomorrow. I need to do a little research.
March 28, 2002 at 8:14 am #73751
Mike Carnell
Marc,
I got some numbers to run on Minitab. I think the file might help you get started or at least understand what we are trying to do.
In a normal calculation, if the mean sits on the spec, the Cpk will be 0. In this case you want the mean to be on the target and the standard deviation to be 0 as well. If we use the normal Cpk calculation, as you improve your process the capability number will appear to get worse. If we use Cpm (Cpt), as the mean approaches the target it gets larger, and as the variance decreases it gets larger. The metric reflects what you need it to reflect. I am not sure how you will explain it to anyone with a stats background when you tell them your project, in traditional terms, is to take the process to zero sigma.
If you will send me an email I will send you the file. It is a Minitab file. If you do not have Minitab you can download it from the web (free 30-day trial).
March 28, 2002 at 10:01 am #73754
James A.
Good morning/afternoon/evening Mike,
Not being above displaying an unerring ability to ask some really dumb questions, having read this thread I can’t help wondering if this ‘problem’ is in fact similar to, say, a process with a zero-based tolerance, e.g. ovality, and calculating process capability.
In that example, it would be normal to subtract the process mean from the UTL and divide the result by 3 SD. Can’t Marc use the same approach for his problem?
Just a thought. Thanks for all the positive contributions.
Regards
James A.
March 28, 2002 at 8:15 pm #73802
Mike Carnell
James,
Thank you for the compliment.
Here is the problem I see with that approach, and it may be that I can’t see the forest for the trees any longer. We want to drive the mean to 1.0 and the std dev to zero. If I leave it in those terms and try to increase my value of sigma, I go the wrong direction. Wouldn’t the method you suggested cause me the same problem? I am not real clear any longer. Cpm is like the inverse of that number. This is one of those sneaky questions: it looks innocent enough, so you jump in, and then you find out you’re in the deep end of the pool.
I used Minitab to generate data (30 data points) with a mean of .68 and a std dev of .05. My Cpk is 2 and change; the Cpm is .5. As I get closer to the target the Cpm goes up, which follows the logic behind Taguchi. The reason the numbers do not seem to have any real reference to each other is that in a Cpm calculation the mean is ignored and the deviation from the expected (target) value is used. That is why it seems to be the correct metric: it is not really optimized until every point is on the target.
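For anyone following along, here is a sketch of the two calculations with the summary numbers quoted above (mean 0.68, std dev 0.05, target and USL of 1.0, LSL of 0). These are the textbook formulas, not anything specific to Minitab:

```python
import math

mean, s = 0.68, 0.05            # summary stats from the generated data
usl, lsl, target = 1.0, 0.0, 1.0

# Cpk: distance from the mean to the nearest spec, in units of 3 sigma.
cpk = min((usl - mean) / (3 * s), (mean - lsl) / (3 * s))

# Cpm: penalizes deviation from the target as well as spread;
# tau combines the variance with the squared offset from target.
tau = math.sqrt(s**2 + (mean - target)**2)
cpm = (usl - lsl) / (6 * tau)

print(round(cpk, 2))  # 2.13 -- "2 and change"
print(round(cpm, 2))  # 0.51 -- roughly the .5 reported above
```

Note how Cpk looks excellent while Cpm stays low: Cpk only measures distance to the spec, while Cpm is dragged down by the 0.32 offset from the target of 1.0.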
We have had a lot of people discussing it this morning, and nobody really agrees on how to explain the measurement. We don’t want Marc going off to some review and have some management type get wrapped around the axle and leave Marc with no definite explanation.
The other possibility is we are wrong.
How’s that for attribute data.
If you see something here jump in and help us out.
Thanks,
Mike
March 30, 2002 at 10:41 pm #73845
Mike Carnell
Marc,
Sorry about the delay in the answer. It probably would have come much quicker, but I was stricken with a terminal case of tunnel vision. I can credit my recovery, and a huge amount of help, to a group of people who have tolerated my petulance since this started. Just to recognize the group that helped: Minitab, who were fighting downtime of their own with no power (Jeff Ozarski, Michelle Shimo, Andy __?__, and Keith Bower), Dr. Shree Nanguneri, Scot Shank, and James A. (from a previous post, who was a lot closer to the correct answer than I was).
Let’s start with what you need: a sigma value. Unfortunately it is used to drive you a “safe” distance from a spec. You don’t want to be backed off the spec (target of 100%); you want to be dead on it with a std dev of zero. We can use the data the way we previously discussed, leaving the percent in decimal form (make sure your sample size stays over 30, since you are approximating the normal distribution). You want to drive the mean up from a lower specification. When you did your Annual Operating Plan (AOP), they probably used an OEE-type number to set staffing, capacity, etc. Hopefully it is a number where you are profitable. This is the lower spec. Leave the upper spec off and it will only calculate to the Lower Spec Limit (LSL). You can’t, in your case, get too far above it. The other danger is that if you are at six std dev above it, someone might decide the job is done, and depending on the plant Pareto it might be. Do your sigma calculation from the LSL.
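A sketch of “do your sigma calculation from the LSL.” All the numbers here are hypothetical; in particular the 55% break-even figure is an invented stand-in for whatever the AOP actually says:

```python
# Hypothetical numbers, for illustration only: a line averaging 68%
# efficiency with a 5% std dev, measured against an assumed AOP
# break-even (lower spec) of 55% efficiency.
mean, s = 0.68, 0.05
lsl = 0.55  # assumed break-even efficiency from the operating plan

# One-sided Z: how many standard deviations the mean sits above the LSL.
z_lsl = (mean - lsl) / s
print(round(z_lsl, 1))  # 2.6 -- the "sigma level" against the LSL
```

Because there is no upper spec, the calculation is one-sided: only the distance above the LSL matters, which is exactly the reporting number described above.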
This number is not going to make any difference in how you solve the downtime issue. It is simply a reporting number. This is what we call “feeding the gorilla.” Maybe guerrilla, depending on where you work.
Cpm is still the right metric for what you are doing, but it is not going to give you a sigma value. It is like the Taguchi loss function, paraphrased: “the further you depart from your target, the greater the loss imparted to society,” which is exactly what downtime is. This is the same metric we used when we were sputtering gold on sensors: you plate to a target which is as thin as possible without putting your customer at risk. Your spec is the same issue, except you are maximizing, not minimizing. Doug Montgomery has it written up in his book “Introduction to Statistical Quality Control,” page 366 (the reference is from Keith Bower at Minitab). If you use it in a report-out, wear Kevlar: this is a departure from the expected, and the Type A’s go ballistic rather than try to understand. If you want to use it, break it to them slowly.
Good luck.
June 8, 2003 at 5:15 pm #86751
Good day Mike,
Please help me … I have a similar problem.
It’s a bit long, but please bear with me.
Let me explain with a little background info:
a) An assembly line consists of, say, 23 operations (steps). The standard minute value (SMV) for each step (at 100% efficiency) is between 1 and 2.5 minutes. A complement of 50 operators is provided for a line.
b) Each operation is carried out as a set of 40 pieces, i.e. an operator completes 40 pieces before passing the set to the next operation in the sequence. The efficiency of each operator is captured vis-à-vis the SMV on a shift (8 hours) basis.
c) Each line has an ‘available time’ of 7.5 * 60 * 50 minutes, i.e. 22,500 minutes (leaving out 30 min for a break).
d) Every operator, on completion of a set of 40 pieces, submits a coupon. All coupons submitted for a particular shift by a particular line get added up to give the ‘efficiency minutes’ of the line.
e) The downtime (DT) is docked for every machine and expressed as a % of the 22,500 min.
f) The loss of ‘available time’ due to any absentees in the line is also worked out, to be deducted as well.
g) Available time minus minutes lost to DT minus minutes lost to absentees is calculated every day.
h) Line efficiency is calculated as 100 * efficiency minutes (point d) / available minutes (point g).
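The points above can be sketched as a small daily calculation. Everything except the 22,500 available minutes is an assumed example figure:

```python
# Daily line-efficiency calculation following points a) through h).
operators = 50
available = 7.5 * 60 * operators           # 22,500 min per shift (point c)

downtime_min = 900                         # assumed machine downtime (point e)
absentee_min = 3 * 7.5 * 60                # assumed: 3 absentees (point f)
net_available = available - downtime_min - absentee_min  # point g

# Sum of coupon minutes (SMV * pieces over all coupons) for the shift,
# i.e. the 'efficiency minutes' of point d -- an assumed total here.
eff_minutes = 12_375

line_efficiency = 100 * eff_minutes / net_available      # point h
print(round(line_efficiency, 1))  # 61.1
```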
We set off on a project to improve our line efficiencies using Six Sigma tools. This is a pilot project, meant to help us understand Six Sigma and then implement it in other processes.
1. Efficiencies do not have an LSL or USL; the higher the better. Some lines are running at 105% and some at 50%. Can we use a sigma level to differentiate these two lines’ performance, or some other parameter?
2. We have efficiency as the big Y and will attack the small y’s to improve the big Y. Is that the right approach?
3. We had a brainstorming session and did a fishbone diagram.
4. Several things, both outside and inside the line, impact efficiencies. We are interested in first attacking the common-cause variations (small y’s) inside the production line, assuming all else are special causes. Are we right in doing this?
5. We did an attribute gage R&R and are working toward improving it.
6. KPIVs for line efficiency are: absenteeism % of operators doing the difficult operations, attrition level of operators doing difficult operations, operator skill as measured by rework %, multi-skilling of operators, supervisory skill, raw-material quality, on-time availability of RM, training imparted, product (style) change, and frequency of product (style) changes.
Are our KPIVs OK? Some of them seem to be interrelated. How do I drill down to the core KPIVs?
7. Using a C&E matrix, we arrived at the top 5 KPIVs based on the team’s experience.
8. We filled these top 5 KPIVs into an FMEA. This didn’t seem to get us far, as we did not have a numerical measure for some of them.
9. Our RTY varied from 15% to 50% across lines. We did a Pareto to arrive at the big-hitting operations with a high rework %, and then another Pareto to find which defects occur most in each of those operations. We are now working on these.
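Rolled throughput yield is simply the product of the per-operation first-pass yields, which is why one or two bad operations dominate the Pareto. The yields below are invented for illustration:

```python
import math

# First-pass yield per operation; five invented values for illustration
# (a real line here has ~23 operations).
yields = [0.95, 0.90, 0.70, 0.92, 0.85]

# RTY = product of the per-operation first-pass yields.
rty = math.prod(yields)
print(round(100 * rty, 1))  # 46.8 -- inside the 15-50% range reported
```

Note how the single 70% operation drags the whole line’s RTY below 50% even though the other four steps are at 85% or better.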
Awaiting some answers to the questions above. Thanks for your time.
Regards,
prasad
The forum ‘General’ is closed to new topics and replies.