iSixSigma

Eileen

Forum Replies Created

Viewing 61 posts - 1 through 61 (of 61 total)
  • Author
    Posts
  • #139472

    Eileen
    Participant

    Jack,
    You can check out my consulting/training by googling “Quality Disciplines.”
    Most training I do is centered on specific problems, providing the appropriate methods as needed. The basis of 8D problem solving is statistical thinking and methodology. Hence, it is assumed that you have implemented SPC, measurement analysis, etc. This strategy was established based on a certain level of competence in statistical methods. In addition, it is imperative for management to be involved. What I run into lately is mostly companies with specific product problems that they just want solved, usually ASAP. Oftentimes management is not part of the effort. Unfortunately, problem solving has to focus on the processes that allowed the problems to develop and then address the management systems that need to be improved. The problem is frequently just a symptom of a much greater concern.
    If you send me your email, I can send something your way without continuing to post on the forum.
    Thanks for your questions. Eileen
     
     
     

    0
    #139385

    Eileen
    Participant

    Jack,
    The 8D manual is copyrighted by Ford Motor Company. Copies are usually distributed in training classes, so you might look for a problem solving class from a training company. I believe AIAG offers a problem solving manual, without the training, that should include some of the 8D material. Eileen
     

    0
    #92194

    Eileen
    Participant

    Wow! You are totally confused. Control charts are used for both enumerative and analytic studies (see the ASQ Statistics Division newsletter). The distinction between these studies has nothing to do with this discussion. And you are wrong about the quotes: they are not limited to an analytic situation. Oh, and by the way, an out-of-control chart has a special cause, and this has nothing to do with enumerative vs. analytic studies.
    There are no alpha or beta errors or probabilities associated with statistical process control charts. Reread the quotes and the books by Dr. Shewhart/Dr. Deming.
    Your statements alone do not compare to the references by Dr. Shewhart and Dr. Deming.
     

    0
    #92167

    Eileen
    Participant

    It is sad that so many people do not understand the basis of statistical process control. Perhaps, people really don’t care and just view the details as unnecessary. But, I feel that only with a thorough understanding of the foundation of control charts will one be able to apply them correctly.
    I have included an excerpt from an article I wrote on this topic as well as a quote from Dr. Deming.
    There are no individuals with a greater understanding of the theory and development of statistical process control than Dr. Shewhart and Dr. Deming. For the few that are interested in learning, here are their thoughts on this topic.
    Dr. Shewhart’s definition of an assignable cause of variation is:
    “The principal function of the chart is to detect the presence of assignable causes. Let us try to get clear on just what this means from a practical and experimental viewpoint. We shall start with the phrase “assignable causes.” An assignable cause of variation as this term is used in quality control work is one that can be found by experiment without costing more than it is worth to find it. As thus defined, an assignable cause today might not be one tomorrow, because of a change in the economic factors of cost and value of finding the cause. Likewise, a criterion that would indicate an assignable cause when used for one production process is not necessarily a satisfactory criterion for some other processes.
    Obviously, there is no a priori, formal, and mathematical method of setting up a criterion that will indicate an assignable cause in any given case. Instead, the only way one can justify the use of any criterion is through extensive experience. The fact that the use of a given criterion must be justified on empirical grounds is emphasized here in order to avoid the confusion of such a criterion with a test of statistical significance.”4
    Control charts are not a mathematical test of hypothesis. Many authors and instructors of SPC approach a control chart as a statistical test of hypotheses. This is very misleading. Dr. Shewhart wrote: “As a background for the development of the operation of statistical control, the formal mathematical theory of testing a statistical hypothesis is of outstanding importance, but it would seem that we must continually keep in mind the fundamental difference between the formal theory of testing a statistical hypothesis and the empirical testing of hypotheses employed in the operation of statistical control.”5 Dr. Deming said, in his usual succinct manner, “Rules for detection of special causes and for action on them are not tests of hypothesis that the system is in a stable state.”6 Remember, there is not one mathematical model used in process control. There can be no assigned probability associated with the plotted points. Consequently, without any probability distributions, there can be no statistical tests of hypotheses.
    References:
    4. Shewhart, W. A., Statistical Method from the Viewpoint of Quality Control (Washington, DC: The Graduate School, Department of Agriculture, 1939), p. 30.
    5. Shewhart, W. A., Statistical Method from the Viewpoint of Quality Control (Washington, DC: The Graduate School, Department of Agriculture, 1939), p. 40.
    6. Deming, W. Edwards, Out of the Crisis (Cambridge, MA: MIT Center for Advanced Engineering Study, 1986), p. 335.

    In addition, Dr. Deming stated in his book “Out of the Crisis,” page 334:
    “Control limits do not set probabilities. The calculations that show where to place the control limits on a chart have their basis in the theory of probability. It would nevertheless be wrong to attach any particular figure to the probability that a statistical signal for detection of a special cause could be wrong, or that the chart could fail to send a signal when a special cause exists. The reason is that no process, except in artificial demonstrations by use of random numbers, is steady, unwavering.”
    For those who are still reading, I strongly recommend you read Dr. Shewhart’s book and Dr. Deming’s book.
    Eileen

    0
    #92049

    Eileen
    Participant

    Alstats,
    Just to reiterate – there are no defined probabilities associated with points on a control chart. Relying on a calculated probability for an out-of-control signal is wrong.
    Whether a shift in the process occurs depends on the process you are monitoring. What is the economic impact of a potential shift? How much does it cost to investigate a potential shift? How much would it impact your business if you missed the shift? Probability or mathematical equations do not answer your question.
    Remember, Dr. Shewhart defined a signal of an out-of-control condition as a nonrandom pattern on the control chart. Are you really looking at a process, or just looking for something to put in teaching material? Eileen

    0
    #91871

    Eileen
    Participant

    Alstats,
    You need to consider a few things before defining what a special cause signal is on your control chart.
    1. Control charts do not have any definite probabilities associated with them.
    2. You need to consider the economics associated with the potential of a special cause and the identification of the cause.
    3. Dr. Shewhart (the developer of Economic Control) stated that a process is considered out of control when the chart exhibits a non-random pattern. In addition, consider the economics associated with it.
    For example, in some processes, a run of six points, seven points, or even the magic 8 or 9 points is easily checked for a change in the process with little to no economic impact incurred. A slight shift in the mean above or below the average is of little consequence. It is easily detected and corrected. So, whether you investigated at 8 points or 12 points, no big deal. However, in some cases, the economics are considerable. I’ve worked in companies where a run of 5 was a real red flag. The consequences of a potential shift in the mean were serious enough to begin investigating the process. It was worth the time and money, even if no special cause was found.
    These rules of out-of-control signals based on a defined number of points or areas of probability on the control chart are very dangerous and are not based on the extensive theory and work of Dr. Shewhart. Most of them came from individuals looking for a cookbook formula with little to no understanding of the fundamentals of control charting.
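    The run checks described above are easy to automate. Here is a minimal sketch (Python, purely as an illustration; the run length worth investigating is a parameter you would choose from the economics of your process, not a universal constant):

```python
def longest_run_one_side(values, center):
    """Length of the longest run of consecutive points strictly above or
    strictly below the center line (a point on the line breaks the run)."""
    best = run = 0
    prev_side = 0
    for v in values:
        side = (v > center) - (v < center)   # +1 above, -1 below, 0 on the line
        run = run + 1 if side != 0 and side == prev_side else (1 if side else 0)
        prev_side = side
        best = max(best, run)
    return best

# Invented data: seven points above the center line, then one below.
data = [10.1, 10.2, 10.3, 10.1, 10.4, 10.2, 10.6, 9.8]
print(longest_run_one_side(data, center=10.0))
```

    Whether a run of that length is worth investigating is then an economic judgment, exactly as the post argues.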
    Eileen

    0
    #91715

    Eileen
    Participant

    Tarek,
    A very good introductory book is “Quality Improvement through Planned Experimentation” by Ron Moen, Nolan, and Lloyd Provost.
    Classics which have proven themselves over time:
    “Statistics for Experimenters” by Box, Hunter, and Hunter.
    “Statistical Methods” by Snedecor and Cochran.
    Eileen

    0
    #91712

    Eileen
    Participant

    May,
    What is your definition of “World class manufacturing” ? Depending on how you define it, Six Sigma may or may not be a factor.
    If you consider Motorola, Allied-Signal (Honeywell), or GE “World-Class”, perhaps Six Sigma can be a strategy to achieve it. If you strive to achieve the product level of defects of these companies, then Six Sigma may be the reason.
    Eileen

    0
    #91652

    Eileen
    Participant

    Brett,
    You can determine an estimate for the % defective. This is the same as defining capability for continuous data (Cp and Cpk).
    If you have a p-chart to establish process control, the best estimate of capability is 1 - p-bar (converted to a percentage). For example, if the control chart gives a p-bar of .035, then the process capability is 1 - .035 = .965, or 96.5% conforming.
    Another strategy was based on various sampling plans with a certain level of defectives. A good rule of thumb was to run 350 parts. If no defects were found, then the capability was assumed to be “acceptable.” If you use this strategy, you need to define what you believe is the appropriate capability and then develop the appropriate sampling. Of course, this says nothing about the stability of the process and should be viewed as only a preliminary capability assessment.
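    The p-chart arithmetic above can be written out as a small sketch (Python as an illustration; the .035 is the example p-bar from this post):

```python
def attribute_capability(p_bar: float) -> float:
    """Capability for attribute data from a p-chart: the long-run
    conforming fraction, 1 - p-bar, expressed as a percentage."""
    return (1.0 - p_bar) * 100.0

capability = attribute_capability(0.035)
print(f"Capability: {capability:.1f}% conforming")
```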
    Eileen

    0
    #91302

    Eileen
    Participant

    Mannu,
    Sorry, you are wrong. There are no definite probabilities associated with SPC. In addition, there are no tests of hypotheses, so you cannot talk about a Type I (alpha) error or a Type II (beta) error.
    I would advise you to read Dr. Walter Shewhart’s book “Economic Control of Quality of Manufactured Product.” This will teach you the basic theory of SPC.
    Eileen
     

    0
    #91299

    Eileen
    Participant

    Miguel,
    You can’t use a probability value (.27%, for example) to define process stability. This is a very dangerous way to look at process stability. If you have a process with many out-of-control signals, you need to identify those special causes of variation and work to eliminate them as appropriate.
    Whether you are looking at the process in the short term or the long term, it doesn’t matter. The theory is the same. You have to use SPC to identify instability and work to remove special causes of variation. So simple.
    Eileen
     

    0
    #90642

    Eileen
    Participant

    Miguel,
     
    You are making a few assumptions which may not be true.
    1. Control charts are not based on any definitive probabilities. They are based on economics and a reasonable risk of finding a change in the process. There should be only random points within the control limits and no points outside the limits. If points are outside the limits, then it is reasonable to investigate those causes of change in the process. If you don’t want to do this, then why are you using the control chart?
    2. Control chart limits should only be recalculated when an improvement has been made. This means the process has shifted towards the target and/or the process variation has been reduced. If you recalculate based on a time frame, you could end up with very wide limits which tell you little about the process.
    Eileen

    0
    #90141

    Eileen
    Participant

    Isabel,
    Don’t worry about Minitab at this point. You only have 6 data points – too few to tell anything about a distribution. In addition, you have some real extreme points. What is going on with the 1000 and 1500 values? Are these from the same population, or are there special causes in your data?
    Generally, lead time follows a lognormal distribution. But you have far too few values to attempt any type of distribution fit. Any number of distributions could be fitted to your data set.
    Eileen
     

    0
    #90113

    Eileen
    Participant

    Tom,
    You have asked your supplier for Cpk. You don’t say why. I am assuming you want to understand their defect rate or want to know the process distribution. The supplier referenced the automotive supplier book (AIAG PPAP), which indicates that for a one-sided specification these indices need to be treated differently. That is true: non-normal distributions, which frequently occur with one-sided specifications, are a real concern. The statements in the PPAP manual do not excuse a supplier from determining the capability. They merely suggest that the commonly used indices Cp and Cpk may not be the best method for summarizing the capability.
    You need to decide for your company, how will you handle one-sided specifications. Several suggestions have been made to you. You can simply have your supplier calculate the percentage of the distribution outside of the specification, based on the appropriate statistical model. You may want a smaller percentage than what has been stated in these messages.
    Eileen

    0
    #88370

    Eileen
    Participant

    Unbelievable!! This thread is quite a commentary on the state of OEM suppliers. No wonder the defect rate on domestic cars is so high.
    If your product is a liability issue and you are being asked for capability data, chances are you are already toast. Fudge the data, manipulate the data, change the data – you are already in the crosshairs. When I have had the chance to deal with a supplier like you, the consequences are quite severe. A horde of locusts (process evaluators) will swarm on your facility and you WILL be desourced.
    If you are trying to find a way to trick your valued customer, you don’t deserve to be in business. Why not spend your energy and time making a better product? What is your product?
    Eileen
    Quality Disciplines
     

    0
    #87805

    Eileen
    Participant

    Mr. Sander,
    I would like to learn more about Shainin tools and method.  Please send me a copy of your user’s manual. 
    Email: [email protected]
    Thank you.

    0
    #87460

    Eileen
    Participant

    Michael,
    Go to the web site of the American Statistical Association. You can at least order reprints or search their journals for articles. As far as I know, you can’t read the articles online; for most technical journals, unless you are a paying member, you won’t be able to access any articles online.
    Good luck. Eileen-Quality Disciplines

    0
    #87457

    Eileen
    Participant

    Michael,
    The values of A2 for the average chart are based on the distribution of the average range (through the bias factor d2). A2 is equal to 3/[d2 × sqrt(n)]. Various articles exist on the development of d2.
    1. Journal of the American Statistical Association (JASA), Vol. 53, 1958, p. 548, by A. J. Duncan.
    2. Industrial Quality Control, Vol. XI, No. 5, Feb. 1955, A. J. Duncan.
    You can also check “Quality Control and Industrial Statistics” by Acheson J. Duncan or “Understanding Statistical Process Control” by Wheeler and Chambers.
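    As a worked illustration of the A2 relationship above, here is a small sketch (Python) using the standard tabulated d2 bias factors; it reproduces the familiar A2 table for small subgroups:

```python
import math

# Standard d2 bias factors for the range, for a few subgroup sizes
# (tabulated in the references above).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def a2(n: int) -> float:
    """A2 factor for the X-bar chart with subgroups of size n:
    A2 = 3 / (d2 * sqrt(n))."""
    return 3.0 / (D2[n] * math.sqrt(n))

for n in sorted(D2):
    print(n, round(a2(n), 3))   # 1.880, 1.023, 0.729, 0.577
```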
    Eileen, Quality Disciplines
     
     

    0
    #87023

    Eileen
    Participant

    Vitto,
    Your concern is very common and a troublesome area of project work. I don’t believe it is as much an issue of using certain tools or methodology (control charts with control plans/QOS) as it is with organizational ownership. I agree with your concern.
    With any improved process, it is critical to have ownership by management, from the executive level to the local level. If they do not see it as critical or important, most processes will slip back to where they were unless extreme measures are taken to prevent this from occurring. Perhaps the question you should be asking is how to get the local management to take ownership.
    Generally, if the project is selected as critical to the business, most management will at least see it as important in the beginning. The more you can share with the management group as the project progresses, the better. If they feel part of the solution and have some understanding of the process issues, they are more likely to take ownership once the project is done.
    Easy to say, not so easy to guarantee.
    Eileen, Quality Disciplines
     
     

    0
    #84006

    Eileen
    Participant

    Charles,
    Thanks for all your comments on this topic. I have been reading the thread with interest. I decided not to add any more comments because you clearly defined the issues. I have already had this discussion with the individuals involved. You won’t change their perspective – at least not in a couple of months.
    They really need to go back and read Dr. Shewhart’s book and learn the basis of the control chart – what it is and what it isn’t. Defined probabilities simply cannot be attached to it. Unfortunately, so many engineers read primarily engineering textbooks, and the authors of those books also do not understand Dr. Shewhart’s work. Perhaps they just don’t have the basic education in the other disciplines to fully appreciate it. But being adamant and talking about models doesn’t change the foundational work of Dr. Shewhart.
    Thanks again for adding your thoughts on this topic.
    Eileen, Quality Disciplines
     
     

    0
    #83847

    Eileen
    Participant

    As I said before, you need to read and study the work done by Dr. Shewhart. Since he developed and established the theory of the control chart, I feel he is more qualified on this topic than you. I am sorry, but you are so wrong.
    Eileen

    0
    #83837

    Eileen
    Participant

    Nonsense!! There are no probability values associated with control charts. There are no alpha and beta risks associated with control charts. Control charts are not tests of hypotheses. Read Walter Shewhart’s book, “Economic Control of Quality of Manufactured Product,” and/or Dr. Deming’s “Out of the Crisis.”
    Eileen
     

    0
    #83615

    Eileen
    Participant

    Jim,
    Thanks for your postings. I think you asked some really good questions in spite of your lack of sleep. Fundamentally, I think we are in agreement. Although statistics is useful as an aid to making judgments, it is not a substitute for good engineering knowledge. Of course we could use statistical models, make a boatload of assumptions, and perhaps be able to better estimate the tolerances. Unfortunately, for transmission manufacturing, there are 8000 characteristics, with about 1500 being critical. The process capabilities vary from marginally capable (1.33 – 1.66) to highly capable (5 – 8). These are not set in stone; they do vary somewhat with time. It was very important to remove unwanted sources of variation in the production process. Of course, there is the matter of economics, and at some point you would have to stop. Some processes were fine at a Cpk of 2; others really needed to be higher. All processes do drift, and the margin of error does continue to protect the product. Even with 4 sigma or 4.5 sigma, because of the number of interfacing components, it needed to be higher on some of the components. In addition, a lot of the processes are non-normal, which adds its own complexity.
    You are right – the processes were centered and stable prior to the calculation for capability – including the machine tryout and the potential study.
    I believe the Cpk was changed to Ppk within Ford simply to designate the two different studies. This was most likely done by a committee at some point – my money is on the Supplier Quality Assurance (SQA) group at Ford. Identifying a single person (even if you could) would not, I think, shed any more light on this issue.
    Again, thanks for your comments and perspective on Ppk.
    Eileen, Quality Disciplines

    0
    #83591

    Eileen
    Participant

    Jim,
    Again, I think much of this has been abused and the method of statistical analysis has been twisted. The so-called long-term study was intended to study variation as it happens in production. A simple example is a lathe: during production runs you would experience tool variation, more raw material (bar stock) variation, as well as maintenance and general entropy. We could not assess these sources of variation in a pre-production capability study.
    Both the machine tryout and the machine potential studies were lucky to have 20-30 parts. In most cases, these were taken in consecutive order, and no sampling or subgrouping was possible. There was no way to take subgroups and execute a sampling plan. We were stuck. However, for companies with processes that can spit out a lot of parts and subgroup components, it does make sense to use the appropriate analysis. For transmission components, it was not possible.
    Whether you calculate the population sigma from a sample of 30 consecutive parts or from a subgroup on a control chart using R-bar, they are both estimates. For the production capability, control charts were used on the critical characteristics to assure stability over time and it was easy to use the R-bar (and it is more appropriate) to estimate the process variation.
    It is very interesting to see how other companies have struggled with this. In some companies and applications, it is not the best fit. The Livonia transmission plant for the new transmission did not make defects. All the processes were capable. The only issues were very small reject rates at the test stands (less than .05%) due to tolerance stack-ups. There was no focus on defect reduction because there weren’t any defects in the production process. The focus was to continue to reduce variation and improve overall performance by achieving a product on target.
    Eileen Beachell, Quality Disciplines

    0
    #83586

    Eileen
    Participant

    Jim,
     
    I have read your postings on this topic with great interest.
     
    I agree with you that you can conduct a capability study and calculate the Cp and Cpk for that study. Perhaps, I can explain the use of the other notations associated with a capability study.
     
    The different notations were used by Ford Motor Company – specifically, the transmission division. Victor Kane and I started working in Ford’s transmission division in Livonia in 1984. At that time, Vic was working on an existing transmission and I was working on a new transmission about to be launched. For the new transmission, there were three major capability studies to be conducted. The first study was conducted at the vendor’s site, usually on a machine; this is frequently referred to as a machine capability study. Once the machine passed this study, it was shipped and installed at the Livonia transmission plant. After installation, a second capability study was conducted to define the capability at the plant. Once production began, a third capability study was conducted to understand the variation associated with day-to-day production. It became clear fairly quickly that there was a need to distinguish between the three types of capability studies. It was agreed upon by several groups of people that the following notation would be used:
     
    Cpt (Cpkt) for the machine tryout capability study
    Cpp (Cpkp) for the second capability assessment after installation but prior to production. This was called the machine potential study – hence the use of the p for potential
    CpL (CpkL) for the production or long-term capability study
     
    This provided a relatively easy way to understand the performance of the equipment across the various stages of usage. The indices notation helped to distinguish relative performance on a complex product (8000 characteristics) across numerous processes.
    As this approach spread across Ford and later was required of the supply chain, the machine tryout was essentially dropped. With the two remaining studies, it was simplified to distinguish the machine potential study as Pp (Ppk) from the production capability of Cp (Cpk). Of course, various companies and consultants have put their own spin on these indices. This has resulted in the general confusion associated with capability assessment.
     
    Eileen Beachell, Quality Disciplines

    0
    #83478

    Eileen
    Participant

    Radheshyam,
    Thank you for your kind email. Feel free to send me your issues and I will do my best to help.
    I tried to send you an email directly, but it came back saying the server was not found(?).
    My email is [email protected]
    Eileen Beachell, Quality Disciplines

    0
    #83461

    Eileen
    Participant

    Radheshyam
    I think you raise a good point. My experience with cutting machines shows that more factors are involved than simply the tool. Although the tool is critical to performance, other factors need to be investigated to achieve a high level of Cpk. Those factors will depend on the type of machine you are testing. Once you have selected the most appropriate tool (maximizing performance and minimizing cost), you may need to test various chip breakers, chip removal, speeds, feed rates, coolant amount, etc. All of these variables will have an effect on the capability. You need to consider what else in your machine’s operation will cause variation that changes the outcome and ultimately Cpk.
    Eileen, Quality Disciplines

    0
    #83430

    Eileen
    Participant

    Scott,
    You really started my day off with a good laugh. Vic and I worked together for many years. He would enjoy this one! He had nothing to do with defining the k in Cpk. Although his paper on capability indices is excellent, the index has been around for many decades. It can be found in some AT&T material from the ’50s and ’60s. I do not know where it originated. However, the definition of Cpk is frequently given as:
    Cpk = (1/3) min(Z_L, Z_U) = min[(xbar - LSL)/(3 sigma), (USL - xbar)/(3 sigma)]
    The z-statistic is defined as (x - xbar)/sigma. This quantity used to be called the k-statistic in the ’40s and ’50s, and was later changed to z. Hence, Cpk. Maybe we should change it to Cpz?
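    The definition above can be sketched in a few lines (Python, with made-up illustrative values):

```python
def cpk(xbar: float, sigma: float, lsl: float, usl: float) -> float:
    """Cpk = (1/3) * min(Z_L, Z_U), per the definition above."""
    z_lower = (xbar - lsl) / sigma
    z_upper = (usl - xbar) / sigma
    return min(z_lower, z_upper) / 3.0

# A centered process with specification limits at +/- 3 sigma gives Cpk = 1.0.
print(cpk(xbar=10.0, sigma=1.0, lsl=7.0, usl=13.0))
```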
    As for the Argentine Ford Material, it was originally written by Pete Jessup at Ford in North America, specifically Dearborn, Mich. Pete compiled existing material for the SPC manual. He did not define Cpk either. It is an index much older than either Vic or Pete.
    Eileen Beachell, Quality Disciplines
     
     
     

    0
    #82187

    Eileen
    Participant

    Scott,
    Good question, but not a simple answer. There is a basic test called the mean ratio test. This test simply takes the ratio of the mean life of the test group over the control group. This hypothesis test, along with most of them in reliability, assumes a single known population slope. I am assuming you are using a Weibull distribution analysis. The larger the ratio, the greater the significance. There are various tables and graphs used to determine the level of significance. They are based on the sample size of the control group (n2) and the test group (n1). The degrees of freedom are found by multiplying (n2-1)(n1-1). In general, the ratio has to be above 2, with a slope above 1.5, to be significant at the 95% level. Another method for comparing the two life distributions is to graph the Weibull plots for both groups and place 95% confidence bands around each. If the confidence bands do not overlap, then you can say you have a significant difference. This can be done easily in Minitab.
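    The ratio and degrees-of-freedom arithmetic above can be sketched as follows (the life values are invented for illustration; the rule-of-thumb thresholds of ratio above 2 and slope above 1.5 still need to be checked against the published tables):

```python
def mean_ratio_test(test_lives, control_lives):
    """Ratio of mean life of the test group to the control group,
    with degrees of freedom (n1 - 1)(n2 - 1), as described above."""
    n1, n2 = len(test_lives), len(control_lives)
    ratio = (sum(test_lives) / n1) / (sum(control_lives) / n2)
    dof = (n1 - 1) * (n2 - 1)
    return ratio, dof

ratio, dof = mean_ratio_test([520, 610, 700, 580], [230, 260, 300, 250])
print(ratio, dof)
```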
    Eileen, Quality Disciplines
     

    0
    #82080

    Eileen
    Participant

    Chad,
    This tape is a classic. It was done at Xerox. I was doing Six Sigma training at Xerox in 1999-2000 and had asked several classes about this tape. No one remembered it. Finally, a grizzled (or should I say seasoned) engineer discussed the tape with me. It is no longer available from the company. They made me a copy of the tape, and you may find it in video libraries. You might consider contacting Xerox to see if they would provide you with a copy. Although the clothing and hair are dated, I feel the tape still conveys many of the issues engineers face and provides a good overview of the problems people encounter as they implement DOE.
    Eileen, Quality Disciplines
     

    0
    #81169

    Eileen
    Participant

    Jon,
    It depends on what you have in the columns for severity and frequency of occurrence. The decision to take action is frequently tied to the RPN. The higher the number, the more likely you should do something. If you have ones in severity and occurrence, then you probably don’t need to take any action. However, if it is a safety concern, I would certainly consider the consequences of having no detection.
    It is curious that you have no controls on your service or product. Does that mean you have highly capable processes that don’t produce any errors or defects?
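    The RPN arithmetic behind this advice can be sketched as follows (the 1-10 rankings, with 10 meaning no detection, are common FMEA conventions, but action thresholds vary by company and the example scores below are invented):

```python
def rpn(severity: int, occurrence: int, detection: int) -> int:
    """Risk Priority Number: severity * occurrence * detection,
    each ranked on a 1-10 scale (10 = worst / no detection)."""
    return severity * occurrence * detection

# Low severity and occurrence with no detection: low RPN, likely no action.
print(rpn(severity=1, occurrence=1, detection=10))
# A safety-level severity with no detection deserves a look regardless.
print(rpn(severity=9, occurrence=2, detection=10))
```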
    Eileen, Quality Disciplines

    0
    #80918

    Eileen
    Participant

    Ron,
    Yikes! I wonder what Don Wheeler was thinking?
    The measures of skewness (asymmetry) and kurtosis (peakedness, or the slope of the sides of the distribution) are used extensively where process capabilities are determined. The most common method for determining the best model (or distribution) for your data is the four moments test (or versions thereof). The four moments – mean, variance, skewness, kurtosis – are calculated and then fitted to the closest mathematical function. A very popular method for fitting distributions to data sets is the Johnson family of curves, also based on the four moments.
    Many processes are not normally distributed. You need to know what function fits the data so you can correctly calculate Cp and Cpk, or at least determine the expected percentage of the distribution outside the specifications.
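    A minimal sketch of the four-moments summary described above, using the simple population-moment forms (statistical packages apply various bias corrections, so their numbers may differ slightly):

```python
import math

def four_moments(xs):
    """Mean, variance, skewness, and kurtosis of a data set,
    using the plain population-moment definitions."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * var ** 2)
    return mean, var, skew, kurt

# Invented data with a longer right tail (positive skew).
mean, var, skew, kurt = four_moments([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(mean, var, skew, kurt)
```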
    Eileen-Quality Disciplines
     

    0
    #80889

    Eileen
    Participant

    Zilgo,
    I really don’t agree with you. First of all, why do you think Jon is comparing means? It is more likely that with a measurement system there is one single value. There are exceptions to this, but I would guess this is his situation. So how do you use a test of means to compare one individual value to the average of 3? This is clearly an abuse of statistics.
    You are so wrong about control charts being a statistical test. Control chart limits are economic and are placed at plus/minus three standard errors based on a lot more than mere probabilities. Dr. Shewhart’s book, as well as Dr. Deming’s, clearly discusses these issues. Control limits are not, and never were, probability limits, and they are not tests of hypotheses (see Deming, “Out of the Crisis,” pages 334-335).
    The fact that you use control charts only for visual presentation says more about you than about the tool.
    Eileen

    0
    #80869

    Eileen
    Participant

    Anon,
    Nope. Neither can you do any type of statistical analysis – including a t-test. Let me give my assumptions:
    This is a standard that is being measured or at least a golden part.
    There has already been a gage R & R conducted prior to actually using the gage
    AND there is already a record of measures on the standard which could be the basis for a control chart.
    What is YOUR recommendation?
    Eileen
     

    0
    #80866

    Eileen
    Participant

    Jon,
    Forget the t-test. It is not appropriate. I had a discussion with Dr. Deming on this very issue. He was very adamant that the most useful and appropriate method for the analysis is a statistical control chart. If the value 7 is outside the control limits, then you need to take action; something has in fact changed. If the 7 is within the control limits, leave it alone.
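To make the recommendation concrete, here is a minimal sketch of the individuals-chart check being described. The historical readings on the standard are hypothetical; 2.66 is the standard individuals-chart constant (3/d2, with d2 = 1.128 for moving ranges of size 2):

```python
import numpy as np

# Hypothetical history of repeat measurements on the standard
history = np.array([5.1, 4.8, 5.0, 5.3, 4.9, 5.2, 5.0, 4.7, 5.1, 4.9])
new_value = 7.0

x_bar = history.mean()
moving_ranges = np.abs(np.diff(history))
mr_bar = moving_ranges.mean()

# Individuals-chart limits: X-bar +/- 2.66 * MR-bar
ucl = x_bar + 2.66 * mr_bar
lcl = x_bar - 2.66 * mr_bar

if new_value > ucl or new_value < lcl:
    print("Out of control: something has changed; take action.")
else:
    print("Within limits: leave the process alone.")
```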
    Eileen Beachell, Quality Disciplines
     

    0
    #80778

    Eileen
    Participant

    Most of the initial assessments for machine capability started in automotive, where general guidelines were established for the purchase of new equipment. You need to think about the type of equipment you are purchasing and the appropriateness of the specifications. Consider this in terms of economics, competitiveness, and what the variation will do to your processing and product.
    In automotive, the initial requirements were for 1.33 with the expectation of achieving 1 in production. This was found to be inadequate. Later, the requirements changed to 2, with the expectation of meeting 1.66 in production. This generally related to a Cp or preliminary Pp. It is usually fairly easy to adjust the location. If this is not the case for your machine, you will also need to include the location index - Cpk or Ppk.
    In general, there are usually three studies that need to be done. The contract should be prorated. At Ford, we usually only paid 10% after completion of the initial capability study at the vendor’s site. If it met a Cp of 2, then the machine was shipped to the plant and a second evaluation of capability was conducted. This required stability (based on a control chart) as well as a Cp, Cpk of 2. This study usually took between 3-7 days.  A third study was conducted after production (after Job #1) and in full production. The expectation was to meet a minimum of Cp=Cpk=1.66 and process stability using control charts. This study usually took anywhere from 2 weeks to 3 months so that all possible variation from supplies, environment, measurement were included in the study. Engineers usually were assigned to support production for at least one year.
    Again, a lot of the details depend on the machine or process and the product you are trying to make.
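For reference, the index arithmetic behind these acceptance numbers is straightforward. This sketch uses made-up runoff data; the spec limits, sample size, and random seed are all hypothetical:

```python
import numpy as np

# Hypothetical 50-piece machine runoff against a spec of 10.0 +/- 0.3
rng = np.random.default_rng(7)
data = rng.normal(loc=10.02, scale=0.025, size=50)
usl, lsl = 10.3, 9.7

sigma = data.std(ddof=1)
cp = (usl - lsl) / (6 * sigma)                  # spread-only index
cpk = min(usl - data.mean(), data.mean() - lsl) / (3 * sigma)  # includes location

print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```

A runoff like the ones described above would need Cp of at least 2 before the machine ships.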
    Eileen Beachell
    Quality Disciplines
     

    0
    #80609

    Eileen
    Participant

    No problem. In the most common DOE designs (where each factor has two levels), the response is assumed to be linear. A line is drawn between the low level effect and the high level effect for each factor and interaction. Sometimes, there is a question about whether the response may be non-linear (exponential, for example). If there is any doubt, it is wise to run what is frequently called a “center-point” (or as you said a middle point) design. This provides the ability to actually test for curvature or for a quadratic response.
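A minimal numeric sketch of the curvature check (all response values below are hypothetical): if the response were truly linear, the average of the center-point runs would match the average of the corner runs, so the gap between the two is the curvature estimate.

```python
import numpy as np

# Hypothetical 2^2 factorial responses (corner runs) plus replicated center points
factorial_y = np.array([41.0, 45.2, 44.1, 48.3])  # runs at the +/-1 corners
center_y = np.array([46.0, 46.4, 45.8, 46.2])     # replicated runs at the center (0, 0)

# Under a purely linear model, the center-point mean equals the corner mean;
# a gap between them is evidence of curvature (a quadratic effect).
curvature = center_y.mean() - factorial_y.mean()

# Rough t-like check, using the center-point replicates as the error estimate
se = center_y.std(ddof=1) * np.sqrt(1 / len(center_y) + 1 / len(factorial_y))
t = curvature / se
print(f"curvature = {curvature:.2f}, t = {t:.2f}")
```

A large t here says the middle of the design sits well off the line drawn between the corners, so a quadratic term is worth modeling.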
    Eileen, Quality Disciplines
     

    0
    #80159

    Eileen
    Participant

    Khaljd,
    This is an easy one. Yes, you can use the standard gage R&R. The challenge is to find an alloy or material that is homogeneous. You need to take paired samples (cut your dog bones next to each other). The first will be used as the first “measurement” and the second will be the second “measurement.” I have done this numerous times and have vastly improved properties testing. If you have multiple alloys, I recommend testing each category as a separate gage R&R.
    Eileen, Quality Disciplines
     
     

    0
    #77675

    Eileen
    Participant

    Cindy,
    I urge you to conduct a gage R&R on your flowmeters. They can be from a very good company and still have issues. I have consulted with companies and always insist on a precision (gage R&R) study on the flowmeters. They are very critical to performance and need to be viewed from that perspective. The study is most easily done off-line. Most companies have some fixture for testing flow (usually for calibration checks). Using the off-line installation, select several flows to test, then conduct a repeatability study in a random order and study the repeatability of the meter. Select a range of flows that are commonly used in the location where the gage is installed. Don’t just test the recommended operating range of the purchased flowmeter. Test it for where and how you use it. I recommend you set up a periodic gage R&R procedure for assessing the flowmeters over time. If you have too many to handle, take a random selection.
    Eileen
    Quality Disciplines
     

    0
    #77651

    Eileen
    Participant

    Kofi,
    The intent of a process control plan is to control the product characteristics and the associated process variables to ensure capability (around the identified target or nominal) and stability of the product over time. The process FMEA is a document to identify the risks associated with something potentially going wrong (creating a defect - out of spec) in the production of the product. The FMEA identifies what controls are placed in the production process to catch any defects at various stages of the processing.
     
    Eileen
     

    0
    #75415

    Eileen
    Participant

    Carla,
    I’m not sure what you are asking. The product control plans usually focuses on the product characteristics (usually final). The process control plans drive the focus to the process variables that influence the product characteristics. The process control plans help to move the effort towards prevention vs. detection. I have templates of both as well as documents. If you send me your email address, we can talk outside this forum. My email is: [email protected]
    Eileen
    Quality Disciplines

    0
    #75151

    Eileen
    Participant

    Precision is the measurement variation due to repeatability plus reproducibility (usually defined as variation across the operators). The so-called P/T ratio is defined as 6*sigma(R&R) divided by the product tolerance (USL - LSL), where sigma(R&R) is the square root of the variance due to repeatability plus the variance due to reproducibility.
    Precision-to-total is the ratio of the measurement precision (R&R) to the overall total variation, defined as the R&R variation plus the part variation.
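In code, the two ratios differ only in the denominator. A sketch with hypothetical variance components and spec limits:

```python
import numpy as np

# Hypothetical variance components from a gage R&R study
var_repeatability = 0.0004    # within-operator (equipment) variance
var_reproducibility = 0.0001  # between-operator variance
var_part = 0.0120             # part-to-part variance
usl, lsl = 10.5, 9.5

sigma_rr = np.sqrt(var_repeatability + var_reproducibility)

# Precision-to-tolerance: measurement spread vs. the spec window
p_to_t = 6 * sigma_rr / (usl - lsl)

# Precision-to-total: measurement spread vs. total observed spread
p_to_total = sigma_rr / np.sqrt(var_repeatability + var_reproducibility + var_part)

print(f"P/T = {p_to_t:.1%}, precision-to-total = {p_to_total:.1%}")
```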
    Eileen
    Quality Disciplines
     

    0
    #74594

    Eileen
    Participant

    Andie,
    For categorical data, you will want to use a nonparametric test. The best statistical test is the Kruskal-Wallis one-way analysis. This should do it for you.
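With scipy available, the test is a single call. The three groups of ordinal ratings below are hypothetical:

```python
from scipy import stats

# Hypothetical ordinal ratings (1-5 scale) from three groups
group_a = [3, 4, 2, 5, 3, 4]
group_b = [2, 1, 3, 2, 2, 1]
group_c = [4, 5, 4, 3, 5, 4]

h, p = stats.kruskal(group_a, group_b, group_c)
print(f"H = {h:.2f}, p = {p:.4f}")
if p < 0.05:
    print("At least one group differs from the others.")
```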
    Eileen

    0
    #74245

    Eileen
    Participant

    K. W.,
    The answer to most of your questions is yes. For the left-censored data, you can use the nonlinear fit platform. Under the survival analysis you can easily use right-censored data. A column is identified for the censored data; this column in your data table has the censor code, zero for uncensored and non-zero for censored. I could not find any reference to arbitrary censoring or interval censoring. The analysis does give confidence intervals for the parameter estimates. It gives a statistical report for the Proportional Hazard model. The report includes: proportional hazard fit, iteration history, whole model (chi-squared), parameter estimates, risk ratios, and effect likelihood-ratio tests. The basic analysis under the univariate survival command produces product-limit survival estimates, exploratory plots with optional parameter estimates, and a comparison of the survival curves when there is more than one group.
    Hope this helps. I use a package called Number Cruncher (NCSS) for my survival data. It does everything you’re looking for. You can download a demo copy from ncss.com
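For readers without such a package, the product-limit (Kaplan-Meier) estimate mentioned above is short enough to compute by hand. A sketch with hypothetical right-censored data, using the same convention (censor code 0 = observed failure, non-zero = censored):

```python
import numpy as np

# Hypothetical right-censored failure times
times = np.array([5, 8, 8, 12, 15, 20, 20, 24])
censor = np.array([0, 0, 1, 0, 1, 0, 1, 0])

# Kaplan-Meier product-limit estimate of the survival curve
order = np.lexsort((censor, times))  # sort by time; failures before censorings at ties
s = 1.0
surv = {}
at_risk = len(times)
for t, c in zip(times[order], censor[order]):
    if c == 0:                       # an observed failure steps the curve down
        s *= (at_risk - 1) / at_risk
        surv[int(t)] = s
    at_risk -= 1                     # anyone leaving shrinks the risk set

for t, p in surv.items():
    print(f"S({t}) = {p:.3f}")
```

Censored points never step the curve down; they only shrink the at-risk count, which is what makes later failure steps larger.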
    Eileen
    Quality Disciplines
     

    0
    #74242

    Eileen
    Participant

    Cindy,
    Don’t use the paired t-test. The p-value won’t mean a thing!!
    If you have to use a test of statistical inference, your best bet is the Wilcoxon Signed-Rank Test (a nonparametric test of paired significance).
    This is the most valid test of statistical inference here.
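With scipy, the signed-rank test is a single call. The paired before/after readings below are hypothetical:

```python
from scipy import stats

# Hypothetical paired readings: the same parts measured before and after a change
before = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4]
after = [10.6, 10.1, 10.9, 10.4, 10.2, 10.8, 10.3, 10.7]

stat, p = stats.wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")
```

The test works on the signed ranks of the paired differences, so it does not lean on a normality assumption the way the paired t-test does.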
    Eileen
    Quality Disciplines

    0
    #73609

    Eileen
    Participant

    Dor,
    I can tell you about the word “containment” as used in Ford’s 8Ds/TOPS problem-solving methodology. Containment is to assure that any defective products, or defects on a product, are not shipped or released. The intent is to prevent any customer issues - warranty, dealer reworks, unsatisfied customers. If a problem has been identified in your production area, the marginal product needs to be stopped from further processing or shipment.
    As I wrote in the “Team-Oriented Problem Solving” manual, pg.3D-1, “Once a problem has been described, immediate actions are to be taken to isolate the problem from the customer. In many cases, the customer should be notified of the problem. These actions are typically “Band-aid” fixes. Common containment actions are:
    100% sorting of components
    Product inspected before shipment
    Purchasing from an external supplier rather than in-house
    More frequent maintenance (such as frequent tooling changes)
    Single source
    Unfortunately, most of these actions will add cost to the product. However, it is important to protect the customer from the problem until permanent corrective actions can be verified and implemented.”
    Hope this helps.
    Eileen Beachell, Quality Disciplines
     
     
     

    0
    #73483

    Eileen
    Participant

    Max,
    In the US and Japan, the concept of reducing variation was heavily promoted by Dr. Deming’s lectures and books (in the US, in the early 1980’s). He was a disciple of Dr. Walter Shewhart. As Dr. Deming stated: “Shrink, shrink, shrink that variation.” This was way before Six Sigma.
    Eileen
     
     

    0
    #73304

    Eileen
    Participant

    Gerald,
    I am willing to share my ideas of this with you. Send your concepts to [email protected]
    Thanks. Eileen

    0
    #73010

    Eileen
    Participant

    Mariano,
     
    From my perspective, I would use the notation of Pp for a machine or process start-up (before production). For the studies I have conducted, I always make certain the output is stable over time. If it is not, I would not accept the equipment. Something is wrong if the machine tryout is unstable under such a controlled environment. I once had a lathe for a transmission carrier hub that was very unstable during the machine tryout. The machine builder had not optimized the lathe. There were issues with the amount of coolant, tooling, and speed and feed rates, to name a few. We ended up having to do several designed experiments to correct the issues with the machine. Once the setup was optimized, the machine output was very stable and a Pp (and Ppk) were calculated. I would use the designation Pp regardless of the formula used to calculate the preliminary capability.
     
    I usually do a simple run chart when I am conducting the study. Once I have completed the run, I construct an individual and moving range on the data. If I have a high volume output, then I use the X-bar and range chart. You are correct that this type of study can’t be long enough to cover all the factors that can influence the machinery – such as seasonality. However, it does work very well.
     
    If your process is not a normal distribution, you have a couple choices. (I won’t reiterate what has already been posted). First, you can try to transform the data to achieve normality. Then you just follow the typical calculations. Software can help with this.
     
    Secondly, for the variation index (Pp or Cp), you can estimate the spread of the distribution with 99.73% within the limits of the appropriate distribution (such as, exponential). Once you calculate the range, use this to replace 6-sigma. For the Cpk, I just report the percentage of the process distribution outside the closest specification limit. You can then equate that to an equivalent capability index of a normal distribution.
    Again, a good software package can do this.
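A sketch of that percentile approach, assuming an exponential fit with hypothetical data and spec limits (scipy assumed available): the 0.135 and 99.865 percentiles of the fitted distribution bound the middle 99.73% and stand in for the 6-sigma span, and the tail fraction beyond the nearest spec converts to an equivalent normal z.

```python
import numpy as np
from scipy import stats

# Hypothetical skewed process data with a spec of 0.0 to 6.0
rng = np.random.default_rng(3)
data = rng.exponential(scale=1.0, size=1000)
lsl, usl = 0.0, 6.0

# Fit the candidate distribution, then take its 0.135% and 99.865% points
loc, scale = stats.expon.fit(data, floc=0)
lo = stats.expon.ppf(0.00135, loc, scale)
hi = stats.expon.ppf(0.99865, loc, scale)

pp_equiv = (usl - lsl) / (hi - lo)  # the 99.73% span replaces 6 sigma

# For the location index, report the fraction beyond the nearest spec
frac_out = stats.expon.sf(usl, loc, scale)
z_equiv = stats.norm.isf(frac_out)  # equivalent normal z, so Cpk-equivalent = z/3
print(f"Pp_equiv = {pp_equiv:.2f}, Cpk_equiv = {z_equiv / 3:.2f}")
```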
     
    A very good article on process capability indices is by Victor E. Kane in the Journal of Quality Technology (ASQ), Volume 18, No. 1, January 1986. He gives the following caution: “There is a tendency to want to know the capability of a process before statistical control has been established. Capability refers to a quantification of common cause variation and what can be expected from a process in the future. The presence of special causes of variation make prediction impossible and the meaning of capability index is unclear.” I believe this also applies to the preliminary studies as well as to the so-called process performance.
     
    Another caution: do not attach probabilities to points falling inside or outside the control limits. The control limits are not probability limits.
     
    Eileen
    Quality Disciplines
     

    0
    #72950

    Eileen
    Participant

    Gabriel,
     
    Let me try to explain what QS-9000 is referring to regarding capability.
     
    Section 4.9.2 relates to the pre-production studies required for the Production Part Approval Process (PPAP). This is not meant for an existing process. Nor does this have anything to do with the SPC manual, which is intended for existing production processes. The intent of PPAP is to conduct variation analysis on the machines prior to installation on the factory floor (ideal) or just after installation in the factory before production begins. Again, this is not to be used for an existing production process. The original use of the term Ppk was in this application only. It was intended to demonstrate the preliminary or potential capability of the equipment prior to production.
     
    Section 4.9.3 relates to the production process. In most of Ford’s original material, the Cpk index was always referred to as the “Performance Index” since it uses both the process location, average, and the process variation. Again, at that time there was never a capability assessment on an unstable process as defined in the SPC manual. Most of Ford’s material was used to create the AIAG quality manuals and the automotive requirement for QS-9000.
     
    It seems to me that you are using the confidence intervals on the performance index for batch sorting. You are correct that all the statistical analysis is appropriate – including the calculations and usage of confidence intervals. You said:
     
    If in a sample of 150 pieces I find a Pp = 1.5, I am 95% confident that the Pp of the population is somewhere between 1.3 and 1.7. If I find a Pp = 1.5 but in a sample of 10 pieces, I am 95% confident that the Pp of the population is somewhere between 0.8 and 2.2.
     
    So, would you ship this product to an OEM knowing that the lower performance index confidence limit is 1.3 ?
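The confidence limits quoted above can be reproduced with the usual chi-square interval for a Cp/Pp estimate. A sketch (scipy assumed available):

```python
import numpy as np
from scipy import stats

def pp_confidence_interval(pp_hat, n, conf=0.95):
    """Chi-square-based confidence interval for a Cp/Pp estimate from n parts."""
    df = n - 1
    alpha = 1 - conf
    lower = pp_hat * np.sqrt(stats.chi2.ppf(alpha / 2, df) / df)
    upper = pp_hat * np.sqrt(stats.chi2.ppf(1 - alpha / 2, df) / df)
    return lower, upper

print(pp_confidence_interval(1.5, 150))  # roughly (1.3, 1.7)
print(pp_confidence_interval(1.5, 10))   # roughly (0.8, 2.2)
```

The interval narrows with the square root of the degrees of freedom, which is why the 10-piece estimate is so much less informative than the 150-piece one.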
     
    (One final note on the production “performance index”)
     
    Although it appears on two pages in the SPC manual and people teaching the manual cover this concept, it is in conflict with previous and existing materials. I know that none of my clients are following or using that concept – at least not in my presence. I would advise companies to ignore it. It is nonsense to use this on production processes. I understand how you are using it in your company. How is this really working for you? It seems with the amount of instability in your process, that you are simply chasing your processes month to month. You may never achieve any sense of real stability in your product. This certainly guarantees job security for the Quality department.
     
    I asked you about your requirements for the process capability. You repeated the minimums specified in the OEM requirements. Is that all you are working to achieve? For the AXOD Transmission, there were 8000 dimensions – 1800 were critical characteristics. The average Cpk was 4. In precision manufacturing – such as grinding operations for ball bearings, a Cpk of 1.33 is not acceptable for producing a high quality (manufactured at target-not just within spec) product. It is only a minimum to show compliance to engineering specifications.
     
    Are you only trying to achieve capability on the automotive products? Perhaps that is why you are relying on the application of the so-called performance index to define capability for batches after production. Sounds like you are trying to inspect quality into a finished part.
     
    Perhaps you should take a step back from all the requirements and think about what you would like to achieve with the product and the processes. Does it help you to look at the gap between the performance index and the capability index? Why do you even need to use the performance index? Why are you unable to keep your production processes stable? Are you only using SPC on the automotive OEM part numbers? If so, why? If you really what to satisfy your customers (all of them), focus on stabilizing the processes. Why do you have so many special causes? Can’t you remove them from the process? What are you doing to continuously improve your processes – even those that have a Cpk of 1.33?
    Hope this helps.
    Eileen
    Quality Disciplines

    0
    #72896

    Eileen
    Participant

    Dear Gabriel,
     
    Thanks for the post. Now, I understand your perspective on the variation studies. Just to make certain, let me re-phrase what you have stated. (I reread the AIAG SPC manual).
     
    The performance study (Pp) includes the total variation in the index for Pp. This variation includes both common and special causes of variation. As a result of this definition, the process could be very unstable.
    The  “Process Capability” (Cp) is estimated from only a stable process – no special causes on the control chart.
     
    I think you need to look at the history and intent of the use of these indices to achieve high quality products – low variation centered around the target. The AIAG manual was taken from the Ford Motor Co. manual titled “Continuing Process Control and Process Capability Improvement” written by Pete Jessup in 1984. The intent of the manual was to help Ford and their suppliers understand and apply SPC to production processes and to assess capability associated with a stable control chart created from typical production activity. There was no distinction between performance (or preliminary for that matter) and this capability study. The assumption was that until a process was brought into a state of control, there was no production assessment of capability. Even by doing the number crunching and calculating a Pp on an unstable process, it really is of little value (my opinion).
     
    In 1984, during the launch of a new transmission for Ford (AXOD), a decision was made to make certain that all machines purchased for making the parts would be capable of performing. I did extensive work on this effort. We made a decision to perform “Machine Capability” studies, sometimes called potential studies, on the machine at the machine builder’s facilities. At that time, we had written into the contract that this assessment would have to meet a Cp of 1.33. Once the machine passed this capability requirement, it was shipped into the production plant and reassessed for capability. In this case, it was called a short-term or pre-production capability study - again, no Pp, just Cp. The goal was to achieve a capability of 1 during routine production. So, once in production, the process was assessed again for what was referred to as “long-term” capability - still Cp. The real goal was to ensure, at every critical engineering phase, that we could proceed with some level of confidence to the next stage. Upon the completion of a very successful launch, the decision was made to use this approach for the launch of a new engine - called the Romeo engine. In this case, the machine builders were invited to a meeting before bids were placed and were told that they would have to meet a machine capability or preliminary capability of 2. This was based on our experience with the AXOD machines. The expectation for pre-production was 1.67, and for long-term capability, 1.33. Again, there was never any reference to the performance concept. This strategy is really the basis of the material in the AIAG manual “Production Part Approval Process.” The use of preliminary process capability studies is defined on page 7.
     
    So, now the question: why was the material on this performance study added to the AIAG manual? Who knows? I don’t. It doesn’t make any sense to me. If I had to guess, I would suspect it was added to force suppliers to understand the influence of the special causes on the capability even without stability. I suspect a lot of SQA engineers heard from their suppliers that the processes had special causes and they could not calculate capability - hence the performance assessment. In the AIAG book, they make the following comment on its usage: it should be used to compare to or with Cp, and to measure and prioritize improvement over time. So, is it to be used when the process becomes unstable, to try to work back to the original stability of the process? If the process is unstable, the calculated index is meaningless. It doesn’t reflect anything about the process - not even an assessment of its history. The greater the gap, the more priority it should be given?
     
    As for the statistical debate, my perspective is different. Dr. Deming was my mentor for 12 years. Although trained as a classical statistician, I view most statistical applications from Dr. Deming’s perspective. In this case, what does that mean? First, there is no such thing as a “true value.” Everything is an estimate. Most of the value in capability assessment is in the prediction. I don’t see the relevance of confidence intervals on the estimates. I believe this because it is an analytic study, not an enumerative study (prediction vs. history). If you are inclined to use confidence intervals, go ahead. I am curious how you would use the information and at what level you would choose. (The classical side coming out of me.)
     
    You are fortunate to work for SKF. I was offered a job with them in Europe to work on quality in the late 1980’s. They are well ahead of most companies in the use and implementation of SPC and other statistical methods. Perhaps you should ask Chris Hellestrand (a student of George Box’s) how he views the purpose and intent of the performance index.
    You are correct in stating that once the special causes are removed from the process, it should be close to the Cp. It does make me wonder just exactly how you are using and interpreting the performance indices.
     
    Out of curiosity, is your goal 1.33 for process capability? If so, why? I know how and why it was selected by Ford, but how did your company choose to use it.
     
    Your  English is excellent and I have no problem understanding it.
     
    Sincerely,
     
    Eileen
    Quality Disciplines
     

    0
    #72823

    Eileen
    Participant

    Gabriel
    Your comments on Pp/Cp are somewhat different from my experience and from the Six Sigma material on capability by Harry. My perspective on this topic was formed in the 1980’s, when I worked on numerous process capability studies in the automotive industry. So, there are some aspects of capability on which we will just have to agree to disagree.
    We do agree on the following:
     
    1. A process can be assessed for short-term and long-term capability.
     
    2. The short-term capability or performance will be better than the long-term capability.
     
    3. The short-term capability is an assessment of what happened at the time of the study. It does not have the predictive ability of the long-term study. (I have no disagreement if you prefer to call it performance.)
     
    4. A process capability study cannot be predictive without the verification of control via statistical process control. I agree that the long-term study does truly reflect the real capability of the process. Again, if you prefer to call this the Capability study, that’s fine by me.
     
    I’m a little puzzled why you believe that you need not demonstrate control during the short-term study. I know a lot of material doesn’t require it but it seems this is a bit risky. If any special causes do show up in the short-term study (or Performance), you will have a very unstable process that will need some attention. In other words, you don’t have a chance in your capability study.
     
    I don’t believe that a capability study will have the same results as the short-term or performance study. Most of my short-term studies are done on brand-new equipment and production lines. The studies are frequently conducted by engineers and very skilled technicians. There is much less variation in this assessment (your performance study) than in the actual long-term capability study. Entropy will come in and affect the process. Perhaps this is not the case when you do your performance assessments.
     
    By the way, what calculations are you using in your performance and capability studies?
     
    Eileen
    Quality Disciplines
     
     
     

    0
    #72793

    Eileen
    Participant

    Ricardo,
    This is not an easy one. You need to decide what the vision system is doing. Many of these devices will place the part into one of several categories. As an example, they may count the number of scratches, stains, etc. Depending on the count, the parts are determined to be “good” or “bad.” If you are working with an inspection system, make certain you understand the algorithm used in classifying the component as good or bad. Frequently, the data will be in some type of category - scaled data from, say, 1 to 5 - and then simply reported as good or bad. There is usually a lot more to the data than a binary response.
    Next, you need to do a concordance test to assure that the on-line inspection system is in fact recording correctly what the defect actually is, based on a master method - usually a very skilled and experienced person. Take some samples, have them read by the vision system and then by your master method. They should agree, and the level of agreement (or concordance) can be measured. You also need to worry about the system’s ability to repeatably give the same classification for the components.
    These studies are based more on statistical methods used for categorical data. They use a variety of statistics - Kappa, Kendall’s W, etc. The actual statistic will be based on what the real raw data look like. The attribute gage R&R is based on a binomial distribution, which is not really helpful in studying this type of measurement system.
    Eileen
    Quality Disciplines
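As a small illustration of the concordance measurement, Cohen's kappa can be computed directly from the paired classifications. The ten system/inspector readings below are hypothetical:

```python
import numpy as np

# Hypothetical concordance check: vision system vs. skilled inspector,
# each classifying the same 10 parts on a 1-5 defect-severity scale
system = np.array([1, 2, 2, 3, 5, 4, 2, 1, 3, 5])
inspector = np.array([1, 2, 3, 3, 5, 4, 2, 1, 3, 4])

categories = np.union1d(system, inspector)

observed_agreement = np.mean(system == inspector)

# Chance agreement from the marginal rates of each rater
p_sys = np.array([np.mean(system == c) for c in categories])
p_ins = np.array([np.mean(inspector == c) for c in categories])
chance_agreement = np.sum(p_sys * p_ins)

# Kappa: agreement beyond what chance alone would produce
kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
print(f"observed = {observed_agreement:.2f}, kappa = {kappa:.2f}")
```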

    0
    #72738

    Eileen
    Participant

    You can’t. The control limits are what they are. If you have many out-of-control points, then your process is unstable and may produce defects in the near future. Even if the out-of-control points are still well within specification, you need to decide if it is economically viable to continue searching out and removing the special causes in this process. Only you can determine if it is worth the cost.
    Eileen
    Quality Disciplines

    0
    #72695

    Eileen
    Participant

    Pat,
    I too teach Six Sigma classes. Several friends also teach the Master Black Belt classes for GE. I have a master’s degree in statistics. I understand the difference in the estimates of standard deviation.
    The difference has occurred based on material out of Mikel Harry’s teachings. Your statements are correct and are a nice summary of the teachings found in the Harry (and other) materials.
    All of these are interpretations of the work done at Ford in the 1980’s. Whether or not you are able to use a control chart on a new piece of machinery and estimate the capability of the process from R-bar/d2, many in automotive will define this as a potential or preliminary study. It doesn’t matter how you estimate the variation. You can use a small sample size of 30 or you can use a control chart with 100 samples. It is still trying to understand the inherent variation in the equipment and define it in some preliminary way. The time frame is still relatively short - hence, short-term capability.
    The long-term capability allows as much variation as there is in the entire process to be assessed. This will include the inherent machine variation, the raw materials, the environment, measurement, etc. The purpose is to capture the variation and quantify it. I know the Six Sigma materials spend a great deal of effort regarding the shifting, leading up to the big shift of 1.5 sigma. Enough has already been said about this, but I have not seen processes behave in this way. Harry’s reference is from the 1950’s and is very controversial.
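The two estimates can be put side by side in code. This is a sketch with simulated subgrouped data (the drift size, subgroup structure, and seed are all hypothetical): the within-subgroup R-bar/d2 estimate misses the between-subgroup drift that the overall standard deviation picks up.

```python
import numpy as np

# Hypothetical subgrouped data: 25 subgroups of 5 with drift between subgroups
rng = np.random.default_rng(11)
drift = rng.normal(0, 0.8, size=25)  # between-subgroup (long-term) variation
subgroups = rng.normal(10 + drift[:, None], 1.0, size=(25, 5))

# Short-term estimate: R-bar/d2, within-subgroup only (d2 = 2.326 for n = 5)
r_bar = np.mean(subgroups.max(axis=1) - subgroups.min(axis=1))
sigma_short = r_bar / 2.326

# Long-term estimate: overall sample standard deviation of all readings
sigma_long = subgroups.std(ddof=1)

print(f"short-term = {sigma_short:.2f}, long-term = {sigma_long:.2f}")
```

Because the long-term estimate includes the drift, any capability index built on it will come out lower than one built on the within-subgroup spread.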
    I know that Minitab’s Six Sigma version agrees with your explanation, but not all software does. Nor do all companies.
    That’s my one cent. EB
     

    0
    #72642

    Eileen
    Participant

    Chaz,
    Thanks for your clarification. I agree with you. Nice to hear a voice of reason.
    The GE version of Minitab for Six Sigma Process report does use just the opposite for Cp and Pp. Here is how Minitab uses the indices:
    Explanation of the Cp and Pp from Minitab for GE Six Sigma
     
    Note: Cp and Cpk represent the potential (short-term) capability of the process. Therefore, these formulas assume that the process is centered on the target or on the midpoint between specs (since you can assume that this is possible to do). Cp and Cpk use short-term variability.
     
    Note: Pp and Ppk represent the actual (long-term) capability of the process. Therefore, these formulas do not assume that the process is centered on the target or on the midpoint between specs. Pp and Ppk use long-term variability.

    Here lies the source of much confusion.
    Eileen
    Quality Disciplines
     

    0
    #72627

    Eileen
    Participant

    Peter,
    Your question is “What does the k stand for?” When you calculate the distance the average is from the closest specification limit, you use a z-score. That is, you calculate the distance of the average from the spec limit in units of the std. dev. (take the distance and divide by sigma). Then the minimum z value is divided by 3. These normal z-scores are used extensively in statistical analysis.
    This calculation used to be referred to as a k-value (1950’s?) (later referred to as z). Hence the confusion. Maybe we should start calling it Cpz ??
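A minimal numeric sketch of that z-score arithmetic, with a hypothetical process mean, sigma, and spec limits:

```python
# Hypothetical process: mean 10.1, sigma 0.1, spec 9.7 to 10.5
mean, sigma = 10.1, 0.1
usl, lsl = 10.5, 9.7

z_upper = (usl - mean) / sigma  # sigmas of headroom above the mean
z_lower = (mean - lsl) / sigma  # sigmas of headroom below the mean
z_min = min(z_upper, z_lower)

cpk = z_min / 3                 # the minimum z divided by 3
print(f"z_min = {z_min:.1f}, Cpk = {cpk:.2f}")
```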
    Eileen Beachell, Quality Disciplines

    0
    #72620

    Eileen
    Participant

    Dave,
    No problem. My references are from automotive – the main source of these concepts for the past 20 years. Ford did extensive process capability studies in the 1980’s. There was no real distinction between the two studies. Initial studies were called pre-production or potential capability studies and still used the Cp designation. Production or long-term studies also used the Cp designation. In the early 1990’s this was changed. The preliminary or pre-production capability studies were to be designated as Pp or Ppk. This is clearly documented in the AIAG manual – Production Part Approval Process – on page 7, paragraph 2. It states “Preliminary process studies are short-term and will not predict the effects of time and variation, etc.”
    Since Davis Boothe is out of Detroit Diesel (GM), perhaps they had their own way. Hard to say why Harry and company presented these concepts the way they have in the Six Sigma material.
    Eileen

    0
    #72617

    Eileen
    Participant

    Stan,
    Don’t be rude!
    I am speaking for automotive. Capability studies were used extensively during the 1980’s without any designation to distinguish preliminary (pre-production) studies from production (long-term) capability studies. In 1993, the AIAG group published the PPAP booklet, which on page 7 clearly shows the use of the P designation for the preliminary studies. I quote: “Preliminary process studies are short-term and will not predict the effects of time and variation, etc.”
    In most cases, the preliminary capabilities (short-term Pp) will be less than the production capabilities (long-term Cp).
    The confusion has come along with the current Six Sigma material written by Harry, et al. There is considerable debate about the usage of the terms short-term and long-term. My response is based on the source of Harry’s work from Ford Motor Company.
    We are all entitled to our own perspective on these concepts – especially if you have worked on process capability studies for 20 years.
    EB

    0
    #72601

    Eileen
    Participant

    The difference between Cp and Pp is related to what data you are using. Typically, when a process is new, a preliminary capability study is conducted. Any statistical assessment of capability on a new process/product based on a small sample size will be Pp and Ppk. Again, the P stands for preliminary. The capability assessment is done in a start-up situation, frequently with engineers running the equipment and limited raw materials, and is a much more controlled assessment of the process variation. The time interval is usually only an hour or two.
    Once the process is running in normal production mode, another capability assessment should be made. This “long-term” capability will include variation from different operators, different lots of materials, and all the normal types of variation that affect the process during normal production periods. This data is most often collected on a statistical control chart, usually over several days or longer. This capability assessment will use the Cp and Cpk designation.
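    The distinction between the two data sources can be sketched as two different sigma estimates from the same control-chart data. This is a hypothetical illustration, not from the original post: the subgroup values are invented, and the within-subgroup sigma is estimated with the usual R-bar/d2 control-chart method.

```python
import statistics

# Hypothetical control-chart data: 5 subgroups of size 5
subgroups = [
    [10.1, 10.3, 9.9, 10.2, 10.0],
    [10.4, 10.2, 10.1, 10.3, 10.5],
    [9.8, 10.0, 9.9, 10.1, 9.7],
    [10.2, 10.4, 10.3, 10.1, 10.2],
    [10.0, 9.9, 10.1, 10.2, 10.0],
]

d2 = 2.326  # control-chart constant for subgroup size n = 5
r_bar = statistics.mean(max(s) - min(s) for s in subgroups)
sigma_within = r_bar / d2  # short-term: within-subgroup variation only

all_points = [x for s in subgroups for x in s]
sigma_overall = statistics.stdev(all_points)  # long-term: includes subgroup-to-subgroup drift

print(sigma_within, sigma_overall)
```

    Because the overall estimate also picks up shifts between subgroups (operators, material lots, time), it is typically larger than the within-subgroup estimate, which is why the short, controlled study and the production study give different capability numbers.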
    Eileen
    Quality Disciplines

    0