# Arend

## Forum Replies Created

Viewing 23 posts - 1 through 23 (of 23 total)
#127214

Arend
Participant

Dear Vidya,
Assuming that this is a study for process improvement and not for spying on colleagues, my best answer is to tell you how I would approach this myself. I would certainly not deal with this question from a statistical point of view only. I would start by going to the workplace, just observing the area and the operators, and chatting with one or two of them. Then it is time to ask yourself a few questions:
1) Can you assume that the mix of percentages between the different tasks is stable? For instance, is the operator working in a stable process or in an ever-changing environment? In the latter case, sampling over a longer time does not necessarily make the result better. Also, do the operators determine what the process does, or does the process determine what the operators do? If the operators determine the process, there can be more differences between operators than in the other case.
2) How many different tasks do you define? Almost every task can be divided into numerous subtasks, but the more tasks you define, the longer it will take before you are finished. It is a good idea to discuss this carefully with whoever ‘required’ you to do this work.
3) What resolution do you need for the proportion of time? Is your client interested in portions of 1%, or 10%, or ...? Without answering this question you will not know when enough is enough. Again, discuss this with the client.
The next step is to start collecting data during a short period, calculate percentages, collect new data in a second short period, recalculate the percentages, and so on. I would collect at least ten samples. Then make a plot with the sample number on the x-axis and the cumulative percentages on the y-axis.
Just look at the graph and check whether the percentages still show important changes with each new sample, “important” being judged against the question of how much resolution you need.
-If they don’t, it is now a good time to do formal calculations of the percentages, with their confidence intervals and such. If these are not too large, you can start reporting.
-If the changes have a random appearance and are large compared to your desired resolution, there is still too much noise in the data and more samples are needed.
-If the changes show a trend, the percentages of time spent are not stable, which is also useful information that you can discuss with your client.
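The iterative sampling procedure above can be sketched in a few lines. All task names, counts and the resolution value below are hypothetical, just to illustrate the mechanics of recalculating cumulative percentages after every sampling period:

```python
# Sketch of the work-sampling procedure described above (hypothetical counts).
# Each short period records how often each task was observed; after every
# period the cumulative percentages are recalculated and the change between
# successive estimates is compared with the resolution the client asked for.

periods = [  # observations per short period: {task: count}
    {"setup": 12, "run": 55, "wait": 33},
    {"setup": 15, "run": 60, "wait": 25},
    {"setup": 10, "run": 58, "wait": 32},
    {"setup": 14, "run": 52, "wait": 34},
]
resolution = 5.0  # client cares about differences of 5 percentage points

totals = {}
history = []
for period in periods:
    for task, n in period.items():
        totals[task] = totals.get(task, 0) + n
    grand = sum(totals.values())
    history.append({t: 100.0 * n / grand for t, n in totals.items()})

# Largest change in any task's cumulative percentage in the last step:
last, prev = history[-1], history[-2]
max_change = max(abs(last[t] - prev[t]) for t in last)
print(f"max change in last step: {max_change:.1f} percentage points")
print("stable enough to report" if max_change < resolution else "keep sampling")
```

In a real study you would plot `history` against the sample number, as described above, rather than only looking at the last step.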
I hope this is helpful for you!

#94718

Arend
Participant

Dear Pascal,
To me, your problem sounds very similar to: “I should make a widget, and my customer says I should use a hammer at all costs”. 6Sigma is a tool, and you should select the tool that is best suited for the job you need to do. If indeed your solution is already clear, don’t start pretending as if it isn’t (so I disagree with Divdar on this). Your customer already consulted you at SAP, which means he already wants to have a certain kind of solution.
An assumption from my side: your customer probably means that he wants to have things done at the same level as a 6Sigma project, meaning: just as systematic and just as data-based. I expect that you will end up using many of the 6Sigma tools, like QFD, counting defects, measuring the improvement, ensuring proper implementation of the solution, and so on. But as you describe it, the solution is already quite fixed.

#94089

Arend
Participant

Dear Herodotus,
Let’s see if we can go a step further than disguising 6Sigma under a different name. You mention something very interesting, namely that you are in a pharmaceutical company that is quite heavily bound by dictated cGMPs. This gives us some things to think about. In most other industries we are much less limited in our way of working, other than by general safety issues and the limitations of physics, chemistry and so on. It puts the pharmaceutical industry in a special position. And that gives a good opportunity to make a system that works especially well in your industry.
It could be useful to make a clear distinction between the tools, the framework and the cult of 6Sigma. I tend to think of 6Sigma as a framework more than a toolbox. The majority of the tools already existed long before 6Sigma, and a 6Sigma project is still a 6Sigma project if you use the DMAIC / DMADV / DIDOV / ... frameworks with different tools. On top of that, 6Sigma also has its cult aspects, like the belt status. It is important to have these, but the exact shape of these cult issues is irrelevant to the content of the work.
I think the key point of interest to address is the framework of 6Sigma. I would try to think of a new framework that takes the limitations of the cGMPs much better into account than the standard DMAIC/DMADV approach. For example, an FMEA could have extra columns for compliance with cGMPs. Also, the control / validation steps might need to be beefed up, because an improved product / process would have to go through a much heavier validation process than what is foreseen in the standard approach. Just some control charts won’t do the trick, I think. You’re the expert in this case, so I am sure you can think of many more opportunities yourself.
You don’t seem to be able to get over the ego-inflation part of it yet, am I right? I think it really, really, really is a blessing if you know how to use it. Step 1: stop praising 6Sigma, because it will do you and 6Sigma more harm than good. Step 2: recognize that 6Sigma is good but was not invented with your industry in mind, and that therefore 6Sigma itself is also open for improvement. Step 3: improve it to fit your industry’s needs and circumstances better. The more you manage to tailor it to your industry, the more good reasons you have for giving it a different name. Also, you could change the cult issues to something more specific to your industry. Step 4: collect the reward by being allowed to bask in the sun of your CEO’s ego (just a joke).
kind regards,
Arend

#94020

Arend
Participant

Dear Don,
Maybe there are even more shortcomings than the ones I mention here, but I can think of at least these:

Gage R&R is not about validating just the measurement tool, but about validating a complete measurement system, which is more involved. In a good R&R study you should use parts that are within spec, just around spec and outside spec, to achieve results that are valid over the whole range of interest of your measurement.
With only one part in the study it is not possible to weigh the measurement spread against the part-to-part spread. This is relevant for checking whether the gage can be used for selection between parts.
This doesn’t mean that repeated measurements of one part have no value at all. You can be sure that if the results are not good on only one part, they will probably be even less usable for the whole range of parts you will encounter. But good results on one part do not mean the measurement is usable on a wider scale.
I think other forum participants can think of even more shortcomings than what I mention here.
kind regards,
Arend

#94019

Arend
Participant

Dear Herodotus,
After giving it some thought, I think your situation is really not a problem but a unique opportunity. What you have on hand is a CEO who wants to implement a quality program and identifies with it very strongly. Clearly he needs it to be a big success. His key interest will not be to create a legacy that becomes a hopeless failure. This is a great opportunity for you to come in with your knowledge, as long as you manage to put aside your dislike of the possible ego-inflation part of it. More often than not, implementing a quality program is an uphill battle, but in your case it is not. Even better, you might just be the guy to realize his ‘ultimate personal goal’, which puts you in a perfect position to also realize yours if you are smart. I couldn’t think of a better possible win-win situation.
I agree with you that it might not be such a good idea to come up with a shiny new approach that nobody ever thought of before. Apart from it being unrealistically hard to achieve, you would probably bump into acceptance problems, maybe even with the same CEO, because of a total lack of previous successes.
My advice would be to check carefully what the CEO’s dislikes about existing quality programs really are, so you can tailor the “new” program to overcome these objections while keeping all the good stuff from 6Sigma and other approaches. This goes much deeper than it might seem at first. A tailor-made program might just achieve more buy-in and support in the company than a standard program.
Even if it really is just ego inflation, so be it. As long as it helps in getting a good quality program implemented that the CEO actively supports, you are also a winner. Remember that a 6Sigma program will never ever work without top-level support.
kind regards,
Arend

#93972

Arend
Participant

Dear Donald,
I think the solution for how to do a baseline with non-normal data depends entirely on what sort of non-normality you are talking about. Is it a matter of many outliers ‘on top of’ an otherwise normal distribution, or is the distribution essentially non-normal, like for instance lognormal, exponential or even bimodal?
It could also depend on what aspect of the process you are interested in. For instance, I recently dealt with a process that gave many outliers on top of a distribution that was normal (and very narrow and stable). I decided to leave the normal distribution for what it was, because my interest was in solving the outliers. The baselining was done on the frequency of occurrence of outliers.
If your process is continuous but essentially non-normal, a transformation might work very well to make it normal and do baselining in the conventional way.
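As a minimal sketch of the transformation route (the data and the spec limit below are made up): a lognormal-looking metric can be pulled to the log scale, where the conventional normal-based baseline calculations apply.

```python
import math
import statistics

# Sketch: baselining a lognormal-looking metric by transforming to the log
# scale, where normal-based tools apply (hypothetical data and spec limit).
data = [1.2, 0.8, 2.5, 1.9, 0.6, 3.1, 1.4, 0.9, 2.2, 1.7]
usl = 5.0  # upper spec limit on the original scale

logs = [math.log(x) for x in data]
mu, sigma = statistics.mean(logs), statistics.stdev(logs)
log_usl = math.log(usl)  # transform the spec limit the same way as the data

# One-sided capability on the transformed scale (conventional formula):
cpu = (log_usl - mu) / (3 * sigma)
print(f"Cpu on log scale: {cpu:.2f}")
```

The key point is that the spec limit must be transformed together with the data, so the capability index keeps its meaning.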
kind regards,
Arend

#93866

Arend
Participant

3 (related) companies in Korea:
LG Electronics
LG.Philips Displays
LG.Philips LCD
All three have a wide-scale implementation that is an integral part of their business.

kind regards,
Arend

#93748

Arend
Participant

Dear Eliseo,
Both are ‘real’ values, and both can be important:
-the % Manufacturing Process Variation is a measure of the repeatability and reproducibility, judged against the variation in the process.
-the % Tolerance Variation is a measure of the repeatability and reproducibility, judged against the tolerance (USL-LSL).
The first one is important if you want to use the measurement to differentiate between products. Typically this would be for process improvement work, laboratory tests and so on.
The second is important if you want to judge the products against the specs. This is typically what you do in production testing.
The data that you give show that the measurement is not suitable for testing differences between your products, but it is suitable for pass/fail decisions in production testing. So which of the two outcomes is important for you depends on the purpose of the measurement.
I hope this was helpful!
kind regards,
Arend

#93673

Arend
Participant

Dear AberF,
Actually, Paul also explained that between different process runs the setters are taken out of the cart and put back in the same column, but not in the same row or depth as before. So only the column is constant, and the setters change positions within their own column every production run. Obviously I would have taken it into account if it had been otherwise. As you can see in my message, the Chi-square test has already been done per column and shows that only column 8 is different from the rest.
kind regards,
Arend

#93669

Arend
Participant

Dear Stathem,
I would like to bring one (often overlooked) point forward regarding the discussion of process performance versus required gage R&R. This point is that it really depends on the purpose of the measurement:
-when a measurement is used to compare performance against specification limits (tolerance), the R&R performance needs to be judged against these limits, not against the process. This is called the tolerance % in the R&R output results (at least in Minitab, that is). In this case, there is no link between the required R&R and the process performance.
-when a measurement is used to detect differences between products (being calls), the R&R needs to be judged against the actual process spread. This is called the study % in the R&R output results. In this case, there is a clear link between the process performance and the required R&R. This requirement is usually more severe; at least you may hope that your process spread is narrower than the tolerance ;-).
I think it would be very good if Mr. G explained a bit more about the purpose of the measurement for which he is checking the R&R, and why he wants to do benchmarking. That would help in better answering the original question.
kind regards,
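In formula form, the two ratios compare the same measurement-system spread against two different yardsticks. The sigma values and spec limits below are hypothetical, purely to show the arithmetic:

```python
# Sketch of the two R&R ratios described above (hypothetical values).
# sigma_rr    : measurement system (repeatability + reproducibility) std dev
# sigma_total : total observed std dev (parts + measurement)
sigma_rr, sigma_total = 0.8, 3.0
usl, lsl = 110.0, 90.0

# Study %: R&R judged against the actual process spread
pct_study = 100.0 * sigma_rr / sigma_total

# Tolerance %: R&R spread (6-sigma convention; some tools use 5.15) judged
# against the tolerance width
pct_tol = 100.0 * 6 * sigma_rr / (usl - lsl)

print(f"study% = {pct_study:.1f}%, tolerance% = {pct_tol:.1f}%")
```

With these numbers the gage looks similar on both yardsticks, but a process much narrower than the tolerance would make the study % requirement the harder one to meet, as noted above.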
Arend

#93668

Arend
Participant

Dear Paul,
It took a while before I really figured it out, using all the information you gave. I’ll discuss in detail what I think the data are really saying.
You have observed that there is a difference in the contact between setters and kiln car, so you distinguish two groups in the columns:
group A: kiln car columns 1, 3, 4, 6, 8, 9, 11, 13, 16
group B: kiln car columns 2, 5, 7, 10, 12, 15
For each column you observed 1000 setters and counted the defects, and you want to know if these two groups are significantly different. For this kind of testing there are a few very useful techniques: difference testing for proportions, and Chi-square testing, which is difference testing for many proportions. I have applied these two techniques in the analysis. For your work I think it is very useful to study them, or at least to know how to do these analyses in Minitab or another program.
I am not in favour of the BoB versus WoW approach in the way that you propose to do it, for reasons that I will explain first. Testing the difference between these two groups is best done by comparing the total defect rates:
1) Compared to picking one column from each group, it increases the number of observations and thus reduces the confidence interval width in your test. Said differently: the test becomes more sensitive for detecting differences.
2) By comparing the best column of what you assume is the good group (BoB) with the worst column of what you assume is the bad group (WoW), you are biasing the conclusion very much. In this way of working, you run a very big risk of concluding that there is a difference while in fact there is none.
If you want to use the BoB versus WoW approach, I would use statistical testing first to confirm the idea that there really is a best and a worst group. If this is confirmed, BoB versus WoW can be useful for collecting signals (candidate X’s) which you would then investigate further.
Some analysis of the data you gave: in the ‘A’ group there are 364 defects in 10,000 setters, while in the ‘B’ group there are 166 defects in 6000 setters. Doing a test on two proportions, you’ll find that the difference is significant (p-value 0.046). However, closer inspection of the data shows that this difference comes almost entirely from column 8. If the analysis is done again without the data of this column, the significance entirely disappears (the p-value increases to 0.223), while within its own group column 8 is likely to be an outlier (p-value 0.064 in a Chi-square test). Also, if the data of all columns except column 8 are put together, a Chi-square test shows that there is no significant difference between any of these columns (p-value 0.145).
In conclusion I would say that column 8 is indeed a Worst-of-Worst column, if you want to use the expression, but there is no Best-of-Best column in your data. In your improvement project you will get some improvement by improving column 8, but in the bigger picture that will help you only so much. Where it could help is that it might identify some critical issues for cracked setters that might be useful for the overall improvement that seems needed. And that is what you really wanted, isn’t it?
I hope that this is useful for you, and that you are successful in your further search for the important factors!
Arend
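For readers without Minitab, the mechanics of a two-proportion test can be sketched with a pooled normal-approximation z-test. The defect counts below are hypothetical, and Minitab's implementation may use a different method, so its p-values can differ from this approximation:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled, normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided p-value
    return z, p_value

# Hypothetical defect counts for two groups of kiln car columns:
z, p = two_proportion_z(50, 1000, 30, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

The same pooled-versus-observed comparison is what a Chi-square test generalizes to more than two proportions.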

#93594

Arend
Participant

Dear Paul,
I am not sure that I completely understand your process description, the data you have and the questions you are asking of these data.

There are 45000 setters in the process, as you mention, distributed evenly over the columns. That means each column has about 2812 setters? Do you have a more precise count of the total number of setters per column?
And of all the setters in a column, 20 are active in the process (the 20 rows you mention?). Are these 20 different processes? Or are these 20 positions with the same process?
I get from your description that you are testing for a significant difference in the number of defect setters between the columns.

What do you mean with ‘I have found a difference in the support stands for the setters in columns 2,5,7,10,12 and 15 compared to the other columns.’
What test did you use to determine the significant difference you mention? Did you do a one-by-one comparison of the columns, or a Chi-square test?
I think I can help you better when you give some more details. I am familiar with the sort of statistical test that you are doing, but I know that without sufficient understanding it is easy to give you a wrong answer.
Arend

#93576

Arend
Participant

You already got a lot of good replies to your question, but from my own experience I would like to add one more: 6Sigma doesn’t work well if the rest of the ‘world’ doesn’t understand the approach you use. I think that management understanding of and support for 6Sigma, especially, is essential. The last part of a project is usually the hardest to complete in such cases. Once the solution comes in sight, it takes much discipline to continue working structurally and to close off with a completed control phase. In non-6Sigma environments you run a big risk that the support for your work quickly diminishes after finding an improvement, sometimes even before verifying it. There are many other risks, but I think this is a major one.
kind regards,
Arend

#93571

Arend
Participant

Hello Dave,
This project is not taking place in my own working environment, but in an overseas factory in China where I am doing problem solving. There is progress in convincing the management members in question, by being polite and patient but sticking firmly to my standpoint. I also use the persuasive powers of other managers who share my view. The thing is that in China you can never expect a manager to say that he has changed his mind, so for me it is good enough that I got a message that work has started on making the improvements.
Cultural issues aren’t often recognized in Six Sigma but may play a very important role in your work. For instance, I experienced that in China there is a lot of resistance against fact-finding. The reason is that if a deviation is found in an area, the people responsible for that area feel they get blamed. And often they do in fact get blamed, and this is the worst thing that can happen in Chinese business culture. This is something to take into account very seriously when doing a Six Sigma project.
On the other hand, the Korean people and culture are very, very homogeneous (I am a Dutchman working in Korea, and I’ve already been on television here three times just because I stand out). All wear the same clothes and drive the same cars in the same colors: white, black or grey. And 90% of the people share the same family names (Kim, Park or Lee). You’d be surprised how easy it is in Korea to minimize variation in a factory, given the natural allergy of Koreans to variation!
Arend

#93554

Arend
Participant

Gabriel,
You’re right. That should teach me not to post late-night replies without reading them twice ;-). Of course it is Control Limits. The real message, however, is that the fact that the operator is involved in the process doesn’t matter at all for the usability of the control chart.
Thanks for pointing out the blooper; let’s continue discussing Aidan’s real question instead!
Arend

#93521

Arend
Participant

Dear Aidan,
The fact that there is an operator performing the process doesn’t matter for the usability of the control chart. Your control chart will show how well the result of the process is in control compared to the specs. The operator is an intrinsic part of the process that you are studying.
Obviously, the operator could be one of the factors that play a role when the process turns out not to be in control, based on your control chart. If there is a signal for an out-of-control process, you could check whether operator influence is an important factor.
What might be of concern for the usability of the control chart is the quality of the measurement that you use to collect your data. Have you remembered to validate this measurement’s calibration, resolution and R&R performance? If there is, for instance, too large an operator dependence in your measurement, that will make your control chart unusable, at least until you improve the measurement first.
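For an individuals chart, the control limits are derived from the process data itself via the average moving range. A minimal sketch, with hypothetical measurements:

```python
# Sketch of individuals-chart control limits (hypothetical measurements).
# The limits come from the process data via the average moving range,
# using the standard constant 2.66 (= 3 / d2, with d2 = 1.128 for n = 2).
data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.4, 10.2, 9.7, 10.0, 10.1]

xbar = sum(data) / len(data)
# Average of absolute differences between consecutive points:
mrbar = sum(abs(a - b) for a, b in zip(data[1:], data)) / (len(data) - 1)
ucl, lcl = xbar + 2.66 * mrbar, xbar - 2.66 * mrbar

print(f"centre = {xbar:.2f}, UCL = {ucl:.2f}, LCL = {lcl:.2f}")
out_of_control = [x for x in data if not (lcl <= x <= ucl)]
print("in control" if not out_of_control else f"signals: {out_of_control}")
```

Note that the operator never appears in this calculation: whatever the operator contributes is simply part of the variation the chart measures, which is the point made above.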
Arend
Senior Development Engineer

#93464

Arend
Participant

Dear Brij,
I also have a problem with management buy-in about quick improvements, but interestingly it is just the other way around. I am dealing with a product robustness problem of which the root cause(s) are not yet known. This work takes place in an environment with a very low working discipline. I am trying to get rid of some obvious errors in operator discipline (like throwing vulnerable components around!) that are clearly related to the subject. Apart from giving instantaneous improvements, this helps the actual root cause finding by getting rid of a lot of ‘noise’ and limiting the number of possible causes. Much to my surprise and frustration, management says that since I haven’t found the root cause yet, they’re not going to support the easy improvements (not even the parts mishandling!).
So much for my own anecdote. Coming back to your issue: it looks like there is a tension between structural working and quick improvements, but I think there really isn’t. There is the concept of ‘low-hanging fruit’ that can be picked quickly, and this improves the environment for the hard stuff. So my opinion about your problem is: do the obvious improvements quickly if possible. Of course, you should at least collect enough before and after data to show the improvements you made. And if by then the target still isn’t met, start using the finer tools to tackle the hard parts. It will create more enthusiasm with your stakeholders, too!

#93395

Arend
Participant

Dear Jackey,
I guess from your description that there is one type of testing equipment that is used for in-line measuring, and the others are instruments for other purposes like laboratory measurements. Is that correct? In that case, the requirements for these testers probably differ:
-a tester that is used for comparing products against specifications must have a good R&R compared to the specification tolerances.
-a tester that is used in a laboratory is probably used for comparing products against each other. If that is the case, the R&R must be good against the spread between the products. This is a stricter requirement.
Your first concern is the production measurement, and the question whether it can be used for rejecting and accepting products. I would focus on this measurement first.
You mention that the final test result is a good/bad judgment. It is possible to do an R&R study in such cases, but it doesn’t have the same strength as an R&R study on a continuous parameter. In many cases, the good/bad judgment is generated by a computer after measuring the part and comparing it to the spec. Is that also the case here? If so, my advice would be to ‘forget’ about the computer judgment and focus on the actual measurement in the R&R study.
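If only the good/bad judgment is available, the R&R study reduces to agreement counting between and within appraisers. A rough sketch of that idea, with hypothetical calls (1 = good, 0 = bad, two trials per part per appraiser):

```python
# Attribute (good/bad) agreement sketch: two appraisers judge the same six
# parts twice each. All calls below are hypothetical.
appraiser_a = [[1, 1], [0, 0], [1, 1], [1, 0], [0, 0], [1, 1]]
appraiser_b = [[1, 1], [0, 0], [1, 1], [1, 1], [0, 1], [1, 1]]

def within(calls):
    """Fraction of parts where an appraiser agrees with themselves."""
    return sum(t[0] == t[1] for t in calls) / len(calls)

def between(a, b):
    """Fraction of parts where both appraisers give identical call pairs."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

print(f"within A: {within(appraiser_a):.0%}, within B: {within(appraiser_b):.0%}")
print(f"A vs B:   {between(appraiser_a, appraiser_b):.0%}")
```

This is much coarser than a continuous R&R study, which is exactly why the advice above is to go after the underlying continuous measurement when the computer's pass/fail verdict is derived from one.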

#93360

Arend
Participant

Your first concern may not be the ‘true value’, but which measurement is the better one. Calibrated equipment does not imply that the measurement result is reliable per se. Calibration is a property of the equipment, but not of the measurement. The measurement also involves the operator (if it isn’t automated), the parts, environmental conditions and many more issues that are not covered by calibration.
You might want to do an R&R study to check whether the results are sufficiently reproducible and repeatable. There is some good literature about this, so I assume you’ll find your way. If not, just let me know.
kind regards,
Arend
Senior Development Engineer

#93355

Arend
Participant

Please also include:
-LG Electronics
-LG.Philips Displays
In South Korea, 6Sigma is really big. Some brands profile themselves as 6Sigma companies even to the end user in retail shops. Apparently it is widely recognized, otherwise it wouldn’t work as a marketing strategy.

#93247

Arend
Participant

Patrick,
Just send your ‘first attempt’ to [email protected] quickly and I’ll send back any comments I have.
Arend

#93201

Arend
Participant

Dear Patrick,
Thanks for your much clearer explanation of what it really is you want to do; I did misinterpret you. I take back some of my comments to you about business leadership, so just consider them as general remarks. I think that as an airline pilot, you probably won’t encounter control charts as often as you will encounter turbulence. I’d step into an airplane anytime, even if I knew the pilot doesn’t know how to calculate LSLs and USLs. But as an airline pilot and as an investor, you will make data-based decisions on a routine basis. So I would still try to pay more attention to mathematics.
And no, you are not on your own. I already said I think you showed the right instinct for teamwork, so if Mike doesn’t help, I will. But you’ll have to do it like this: you compose the answer you think is right, with some explanations, and I or someone else will explain any mistakes you made but won’t give the right answer. It won’t be different than if you had a brother who is good at statistics, and a forum in which 6Sigma is promoted shouldn’t let someone like you down. That would really show that statistics guys are the wrong type of experts, the ones that don’t help when asked. How about it? I see you have to file the answer by December 10, so we’ll manage.
Arend

#93195

Arend
Participant

Dear Patrick,
I see you really want to be a business leader, but I think you are overlooking some things on your path to success.
Nowadays, doing business by definition means teamwork. By working in a team, each member’s weak points get compensated in the overall picture. You show at least some of the right instincts for teamwork, because you seek out experts for help in an area where you are not good, and you don’t feel ashamed to say you’re not good at it. You chose this over toiling endlessly on a topic that isn’t yours, and that is in my opinion a good and effective approach.
Where I think you are missing the point is that you seem to expect that any company will offer you the CEO job because you are a great guy with ambitions who thinks he ‘belongs there’. We all think that! Your message even implies that you’re heading for the top job, and we’re not. In reality, more and more businesses nowadays give better career opportunities to people with at least a green belt. Revenge of the nerds? Yes! ;-). Moreover, good leaders show appreciation for the people in their team, all the more so if they asked for help in the first place.
Another point for consideration: probably, these classes are made a prerequisite because at the actual business college you’ll get much more of the same. There must be a good reason that statistics / mathematics is a prerequisite, and swimming or even golfing isn’t. Just make sure to check what is in the curriculum of whichever school you have ambitions for. You say in several postings that you want to do this class and be done with it. I think that’s not what you’ll get.
lots of success!
Arend van Dam
Senior Development Engineer
