
Dimension Spec. CPK analysis


  • #49697

    HECTOR
    Participant

    Hi People, I have to perform a Cpk analysis for a dimension. The spec is 398 mm ±3 mm and all the data is within spec, but the Cpk is lower than 1. Can anyone tell me if there is an alternative to this study, or a different type of study for situations like this? I think this is because the tolerance is too narrow, but I'm not sure. I am doing this with 125 samples.
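    [Editor's note: the arithmetic Hector describes is easy to reproduce. A minimal Python sketch with made-up data standing in for his 125 readings — only the spec limits are taken from his post:]

```python
import numpy as np

# Hypothetical stand-in for Hector's 125 readings: off-center but all in spec.
rng = np.random.default_rng(1)
data = rng.normal(loc=399.0, scale=0.7, size=125)

lsl, usl = 395.0, 401.0  # 398 mm +/- 3 mm, from the post

mean, sd = data.mean(), data.std(ddof=1)
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"mean={mean:.2f} mm, sd={sd:.2f} mm, Cpk={cpk:.2f}")
# Every point can lie inside 395-401 while Cpk < 1: the index penalizes
# a wide or off-center distribution, not just out-of-spec points.
```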

    0
    #170508

    LMendes
    Participant

    Hi, For Cpk studies you need specification limits, a mean, and a standard deviation. Your specification limits are very tight, around 1-2% of your spec. Also, your measurement resolution (1 mm) is coarse compared to your tolerance. So the only way you could have a good Cpk is to have a very tight normal distribution. First you should analyse your values to check for normality. Then break the data down into possible periods or particular situations (like shifts, materials, operators, etc. – you may have a bimodal distribution). Then improve the measuring system to get better resolution. Last, review the specification limits (if possible).
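    [Editor's note: the normality check suggested here takes one line with scipy; a sketch, again with hypothetical data:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(399.0, 0.7, 125)  # stand-in for the 125 measurements

# Shapiro-Wilk test: a small p-value (say < 0.05) suggests non-normality,
# which would make a standard Cpk figure misleading.
w, p = stats.shapiro(data)
print(f"Shapiro-Wilk W={w:.3f}, p={p:.3f}")
```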
     

    0
    #170509

    A.P.Raja
    Participant

    Hi Hector,
    Your problem is very common in the manufacturing sector.
    Given your Cpk value, the standard deviation (spread around the mean) is the vital factor for improving Cpk. Reduce the variation around the mean and it will automatically increase your Cpk value. The resolution and MSA of the measuring instrument should also be verified.
    Tolerance is not a crucial factor unless your machine capability is more than 1.67.
     

    0
    #170511

    Waleed
    Member

    You do not have a normal distribution, which is strange for a measurement parameter. 
    You need to check your measurement gage's resolution and accuracy, and you need to perform a Gage R&R on this measurement instrument.
    Good luck,
    Waleed
     

    0
    #170512

    SJ
    Member

    Hector,
    Post your data and I’ll tell you why your Cpk is below 1, and I may be able to tell you what you can do to get better results.
    Usually this happens when the average of your distribution sits too close to one of the specification limits.
     

    0
    #170519

    Brian M
    Participant

    That’s the first good answer I’ve seen yet. Without the data, no one can tell you why, but I’m leaning towards SJ’s response. If all your data sit at the low or high limit, your Cpk will stink even though all of your parts are in tolerance. Brian

    0
    #170521

    Vinnie
    Member

    Hector never mentioned anything about resolution. Where did you come up with this?

    0
    #170522

    DaveS
    Participant

    Hector,
    You do need to provide the data if you expect anyone to help you.
    There is a great tendency to want to immediately question the specification when capability is low.
    My advice?
    Man-up, bucko! Dig into the data and the process and find out what is the root of the low Cpk.
    Are you not centered? Look at the data.
    Does your measurement system not “measure up”? Look at the gage R&R.
    Is the distribution not normal? If it is not, and the process cannot produce normality, there are non-normal assessments.
    How are you calculating Cpk? If you are not utilizing a canned program like Minitab, you likely are calculating Ppk.
    If you find that you cannot improve the process center and/or variation, and you feel you are at the limits of your technology and investing in a better technology is prohibitive, then and only then re-examine the specifications against the VOC to see if you can get relief.
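    [Editor's note: on the Cpk-versus-Ppk point above, the two indices differ only in which standard deviation they use. A rough sketch of both estimates for individuals data, assuming numpy; 1.128 is the d2 constant for moving ranges of size two:]

```python
import numpy as np

def cpk_ppk(data, lsl, usl):
    """Short-term Cpk (moving-range sigma) vs long-term Ppk (overall sigma)."""
    mean = data.mean()
    sigma_within = np.abs(np.diff(data)).mean() / 1.128  # moving-range estimate
    sigma_overall = data.std(ddof=1)                     # plain sample sd
    nearest = min(usl - mean, mean - lsl)
    return nearest / (3 * sigma_within), nearest / (3 * sigma_overall)

rng = np.random.default_rng(3)
cpk, ppk = cpk_ppk(rng.normal(399.0, 0.7, 125), 395.0, 401.0)
print(f"Cpk={cpk:.2f}, Ppk={ppk:.2f}")  # similar here; they diverge if the mean drifts
```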
     
     

    0
    #170523

    Vinnie
    Member

    What? How do you know the distribution isn’t normal? Hector didn’t provide enough information for anyone to determine that.
    “You do not have a normal distribution, which is strange for a measurement parameter.” Please explain. Do you mean if I measured parts from a non-normal distribution, the data should be normal?
    I do agree that Hector needs to use a gage with adequate resolution and that a GR&R study is needed.

    0
    #170524

    Vinnie
    Member

    Bravo. Well said.

    0
    #170526

    DaveS
    Participant

    LMendes,
    I had gotten a bit surly with a poster a few days ago and I promised myself to be a kinder, gentler sort of Daves.
    So, I will try not to flame at you.
    However, can you provide any rationale for your statement?
    “Your specification limits are very tight, around 1-2% of your spec.”
    I am curious as to how you can say that without knowing anything about the process. The tolerance band as a % of mean has zero relationship to the Cpk. Let me provide an example for your consideration.
    Suppose we are producing a football field (American), 300 feet long. We want it to be within +/- 1% of the mean. Suppose we lay it out with a calibrated steel tape 300 feet long and measure it with GPS-guided survey equipment. Do you think we could not produce these to a good Cpk? That is, within +/- 3 feet?
    On the other hand suppose we are cutting extruded soft rubber coils into 300 mm chunks. We mark it with a piece of chalk, cut it with an axe and measure it with a micrometer. We most likely could not hold it to a good Cpk if the limits were +/- 50 mm.
    The Cpk depends on the relationship of the specification width to the process capability, not to the relationship of the spec limits to the mean value of the target.
     I think someone else has questioned your statement about the resolution. I agree. How can you possibly know?
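    [Editor's note: DaveS's two scenarios can be put into numbers. A toy comparison with guessed process sigmas (0.2 ft for the surveyed field, 30 mm for the axe-cut coils), showing that tolerance-as-a-percent-of-nominal says nothing about Cpk:]

```python
def cpk(mean, sd, lsl, usl):
    return min(usl - mean, mean - lsl) / (3 * sd)

# Football field: 300 ft +/- 3 ft (1% of nominal), surveyed layout, sd ~ 0.2 ft
print(cpk(300.0, 0.2, 297.0, 303.0))   # ~5.0: very capable

# Rubber coil: 300 mm +/- 50 mm (~17% of nominal), chalk-and-axe, sd ~ 30 mm
print(cpk(300.0, 30.0, 250.0, 350.0))  # ~0.56: not capable despite the wide %
```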
     

    0
    #170534

    Ron
    Member

    Cpk is a measure of how centered your process is. If your Cpk is below one it simply means that your process however tight is not centered.
    Action: Have maintenance center the process.
    What is the Ppk for this process?

    0
    #170539

    DaveS
    Participant

    Ron,
    This thread is producing an amazing number of low value posts.
    In keeping with my kinder/gentler approach I will simply say that your statement:
    “Cpk is a measure of how centered your process is. If your Cpk is below one it simply means that your process however tight is not centered.”
    is completely without merit.
    Cpk is NOT simply a measure of how centered your process is.
    Cpk IS a measure of the capability of the process with CONSIDERATION of the centering.
    Please consider the process with specification 5 to 15. The mean of the data =10, sd =5. This process is centered so Cpk-upper = Cpk-lower = 0.33.
    Well centered but not capable.
    How do you view such a process in light of your assertion?
     
     

    0
    #170543

    Kumar
    Participant

    Hector,
    Did you check the difference between Cp and Cpk? If the gap is high, you might be in luck, as the gap is due to assignable causes. Data being within spec has limited impact on Cp/Cpk, since the calculation assumes a normal distribution. If your data is uniformly distributed (for example) then the Cp value will look better… I think you have a shift going on… but I cannot say without the data.
     
    rgds
    -ravi Pandey

    0
    #170544

    Forrest W. Breyfogle III
    Member

    There is much confusion about process capability index calculations. For example, in your case you have a two-sided specification. Because of this, you need to include both Cp and Cpk to adequately describe process capability. Cp addresses spread, while Cpk addresses centering.

    In addition, you talked about 125 samples. Was the sequence of production preserved when entering this data into a statistical analysis program, e.g., Minitab? If not, a Cp and Cpk determination is not appropriate if you are using a software package like Minitab. Cp and Cpk in Minitab are considered “short-term capability.” The mean moving range between adjacent subgroups (in a column of data) is used for the standard deviation in the Cp and Cpk calculation formula. Hence, if you did not preserve the sequence of manufacture, the Cp and Cpk output from a Minitab calculation has no short-term meaning. Only Pp and Ppk from the Minitab calculation would be appropriate (i.e., a long-term process performance assessment). To prove this point, change the order in which the data is entered in a Minitab column when making this calculation. In all likelihood, you will not get exactly the same answer for Cp and Cpk.

    There are other issues too. Are these samples a true representation of the population of interest? They should be collected randomly over time, not as “today’s batch.” Are the data normally distributed? Are the samples from a stable process? In addition, calculated values for Cp, Cpk, Pp, and Ppk can be very dependent upon how someone selects samples from the process; e.g., were samples determined from a process that was sampled using an individuals control chart or an x-bar and R chart?

    Does this all sound confusing? Agreed, it is. That is the reason I discourage using Cp, Cpk, Pp, and Ppk. A 30,000-foot-level measurement system is a much better approach. I realize that customers often ask for Cp, Cpk, Pp, and Ppk metrics. For these situations I suggest that you provide them these indices but in addition show them a 30,000-foot-level report-out. In a 30,000-foot-level report-out, the process is first assessed for predictability and then, if the process is predictable, a no-nonsense prediction statement is made (e.g., approximate percentage defective). In time, your customer will see how the 30,000-foot-level report-out is more beneficial to them than a Cp, Cpk, Pp, and Ppk calculation. I call this stealth training. There are several Quality Progress published 30,000-foot-level articles for various data types. If you would like for me to look at your data set, let me know. Forrest Breyfogle
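    [Editor's note: the "change the order" experiment he describes is easy to try outside Minitab. A sketch with fabricated drifting data: the moving-range (short-term) sigma behind Cpk depends on row order, while the overall sigma behind Ppk does not:]

```python
import numpy as np

rng = np.random.default_rng(4)
data = 398.0 + np.cumsum(rng.normal(0.0, 0.2, 125))  # slow drift: order carries information

def sigma_within(x):  # moving-range estimate behind "short-term" capability
    return np.abs(np.diff(x)).mean() / 1.128

print(sigma_within(data))                    # small: adjacent points sit close together
print(sigma_within(rng.permutation(data)))   # larger: shuffling destroys the sequence
print(data.std(ddof=1))                      # overall sigma: identical for any ordering
```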

    0
    #170550

    Vinnie
    Member

    Daves,
    I think all LMendes did was divide the tolerance spread (6mm) by the nominal dimension (398mm). This would mean the tolerance spread was 1.5% of the nominal.
    I still find it humorous how many posters offer advice and/or solutions when the original poster has not given enough information to even begin to answer the question/problem. If these individuals are considered experts in their organizations, heaven help them all. Of course, a blind squirrel sometimes finds an acorn.
    Vinnie

    0
    #170554

    Ullman
    Member

    Forrest, you got it right, with a small argument on the value of Cp and Cpk. I think they are extremely useful when properly calculated. Cpk is indeed a measure of how well the process is centered, and Cp is the amount of variation compared to the spec range. But nothing replaces seeing the data, understanding how it was taken, and doing your own calculations. In auditing our vendors, I don’t find nearly enough use of control charts and histograms. The Capability Sixpack analysis in Minitab is an excellent tool for capability analysis.

    0
    #170555

    Vinnie
    Member

    Statistical software has made my life so much easier than in the “old ” days when I had to make all these calculations and perform tests of significance with paper, reference tables and calculators. Creating a double sampling plan for attributes could take a day or more.
    The down side? Too many people who don’t have the first clue what they’re doing can enter some values, print out analyses and start making decisions on how to proceed. Unfortunately, many of the posters here and on other sites fall into this group. The blind leading the blind.

    0
    #170572

    DaveS
    Participant

    Forrest,
    I’m curious how your 30,000-foot view differs from what is done in Minitab, for instance, where the predicted ppm is indicated? It sounds like exactly the same thing.
    I usually try to persuade owners of the process to focus on predicted ppm. Are you just branding a method that I and many others have focused on for some time?
    I searched the ASQ website for articles in Quality Progress by you, and also with 30000 as subject or title. Found nothing. Probably just a bad search engine. Can you direct me to a specific issue so I can learn more?
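    [Editor's note: the predicted-ppm figure mentioned here is just the fitted normal tail areas scaled to parts per million; a sketch with stand-in numbers:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
data = rng.normal(399.0, 0.7, 125)  # hypothetical measurements
lsl, usl = 395.0, 401.0

# Fit a normal distribution, then sum the two out-of-spec tail probabilities.
mu, sd = data.mean(), data.std(ddof=1)
ppm = (stats.norm.cdf(lsl, mu, sd) + stats.norm.sf(usl, mu, sd)) * 1e6
print(f"predicted {ppm:.0f} ppm outside spec")
```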
     
    Thanks,
    Dave

    0
    #170588

    Mikel
    Member

    Wow Forrest, you’re making something way more complicated than it really is.
    If people aren’t trained properly, all methods fail.
    Cp and Cpk are central to Six Sigma as defined by the Grand Pubah Mikel. If you want to change the language, you should change your offering to something other than Six Sigma. As I remember Cp and Cpk are covered in your book.
    If they are trained properly Cp, Cpk, Cpt work just fine.

    0
    #170638

    Forrest W. Breyfogle III
    Member

    Stan, Sorry, but I disagree. I am not making a mountain out of a mole hill! Yes, I do cover Cp and Cpk in Implementing Six Sigma; however, I also talk about misunderstandings and confusion. What I am describing is not just an unimportant technicality. This is a big deal!

    I agree that proper training is essential. However, it has been my experience that people are not being trained in how the standard deviation is determined for various situations when calculating Cp, Cpk, Pp, and Ppk. It is very important for people to understand how this calculation is being made behind the scenes in statistical programs such as Minitab. If they don’t understand what is happening, they can really be misled.

    I understand that Cp and Cpk have been core to Six Sigma and Six Sigma quality-level calculations. However, I am assuming that we are operating under the premise that it is more important for Six Sigma to help the business do the right thing, rather than follow some legacy practice that has issues.

    There are problems with Cp, Cpk, Pp, and Ppk metrics – if there is general agreement as to what constitutes good metrics. Three attributes of a good metric that I consider important are honest assessment, peer comparability, and repeatability/reproducibility. Cp, Cpk, Pp, and Ppk metrics have issues with all these attributes. For a given process, I wonder what percentage of Six Sigma trained people understand that you can get very different answers for Cp, Cpk, Pp, and Ppk if you sample from a process differently (i.e., the metrics lack good-metric peer comparability and repeatability/reproducibility). The difference I am referring to is more than any difference caused by chance. For example, one person:
    • Chooses to use an x-bar and R chart rather than an XmR chart for a given subgroup
    • Chooses a less frequent subgrouping time interval than another person

    What I am describing is only the tip of the iceberg. There are many more issues and shortcomings with Cp, Cpk, Pp, and Ppk metrics, including that you cannot report Cp, Cpk, Pp, and Ppk unless you have a specification (a goal is not a specification). What is needed in business is a no-nonsense measurement and improvement system throughout the organization. Performance metrics need to be presented in a form that everyone understands, provides an honest assessment, and is easy to interpret. This is not the case with Cp, Cpk, Pp, and Ppk metrics.

    A 30,000-foot-level performance metric analysis approach addresses all these Cp, Cpk, Pp, and Ppk issues – and more. With a 30,000-foot-level performance metric approach, organizations benefit from a no-nonsense reporting system that everyone understands – and it can get them out of the firefighting mode. For specified process outputs, in 30,000-foot-level metric reporting we assess for predictability and, when a process is predictable, make a forward-looking estimated percentage or dpmo non-conformance statement. If that forward-looking statement is not desirable, a Lean Six Sigma project can be undertaken to improve the process. With 30,000-foot-level reporting, a process capability/performance statement can be made even when there are no specifications. For those that are interested, there are several Quality Progress articles that describe 30,000-foot-level metric reporting. Forrest Breyfogle

    0
    #170652

    Forrest W. Breyfogle III
    Member

    Don’t understand why you could not find my 30,000-foot-level papers in your ASQ search. As an alternative source, several 30,000-foot-level articles can be found in the “On-line Resource Library” link at http://www.SmarterSolutions.com. There are many articles in this link. The 30,000-foot-level articles are in the category “IEE metrics and process improvement.”

    Glad you agree that it is better to use ppm as a response for process capability and process performance, rather than Cp, Cpk, Pp, and Ppk, which can be very confusing and deceiving. Yes, for a similar set of data that were normally distributed you would get the same answer as Minitab (Capability Analysis – Normal)’s ppm value. However, I prefer to use a probability plot to describe process capability/performance, since a probability plot offers more output flexibility and more potential for understanding the data. Also, I use probability plots to make a process capability/performance metric statement even when there is no specification.

    But probability plotting is not the most important issue. One thing that is often overlooked is that data needs to be from a stable (in-control) process before a process capability statement is valid. Another point that I don’t hear being stressed is that two people could examine the same process and draw different conclusions about whether a process is in control or not (I prefer, and will use, the word predictable in lieu of “in control,” since people can have a difficult time understanding what “in control” means). Note, I am referring to conclusions that are a function of how they sampled from the process, i.e., not chance.

    In my mind, organizations need to improve their overall measurement and improvement system and avoid scorecards such as red-yellow-green, which can lead to the wrong activities for the overall enterprise and to firefighting. The Integrated Enterprise Excellence (IEE) measurement system accomplishes this, where in IEE there can be a measurement pull for project creation whenever a prediction measurement is not desirable from an overall enterprise point of view. To achieve this goal, we need a measurement system that is not a function of how someone decided to collect samples.

    If everyone agrees to this, we need to ask a couple of questions to make sure that we are on the same page relative to what would be considered a potential common-cause versus special-cause input variable source for a high-level control chart. We should note that the main emphasis of 30,000-foot-level control charting is to provide a high-level view of what the customer experiences, as opposed to identifying a timely issue where we need to “stop the presses” because something went out of control (something that is often taught in classes but in my experience does not often occur in the “real world”). In presentations on this topic, I describe a hypothetical situation to the audience where raw material to a process changed from day to day, and some characteristic of the raw material did affect the process’s output. The question is whether raw material should be considered a common-cause source of variability or a special-cause source. After some initial thought and discussion, invariably attendees will agree that raw material should be considered a source of common-cause variability.

    If we have agreement on this, and we also agree that control charting should provide information that is consistent with this belief system, then we will never use an x-bar and R chart again. X-bar and R charting does not provide a control chart that is consistent with this belief system. Control limits for x-bar and R charts are ONLY a function of within-subgroup variability, not between-subgroup variability. Individuals control charts provide control limits that are a function of the variability between subgroups (i.e., raw material in this case). In Minitab you could use x-bar and R charts to calculate process capability, which, as we discussed earlier, is not consistent with this belief system. X-bar and R charts can be very misleading and lead to firefighting. With the 30,000-foot-level approach, you would never use an x-bar and R chart. I understand how it can be very tough for people to accept that the x-bar and R chart, which is taught in “SPC 101,” has problems. The 30,000-foot-level control charting articles described above provide more details and show an example, not only for a continuous response output but for other outputs as well. Forrest Breyfogle
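    [Editor's note: the within-versus-between distinction can be demonstrated directly. A sketch with invented data where the mean shifts day to day (a raw material effect): x-bar limits, built only from within-day ranges, flag many days, while XmR limits built from day-to-day moving ranges absorb that variation. A2 = 0.577 is the standard constant for subgroups of five and 2.66 the usual XmR constant:]

```python
import numpy as np

rng = np.random.default_rng(6)
day_means = rng.normal(398.0, 1.0, 20)                    # day-to-day raw material effect
subgroups = rng.normal(day_means[:, None], 0.2, (20, 5))  # tight within-day spread

xbars = subgroups.mean(axis=1)
rbar = np.ptp(subgroups, axis=1).mean()

# x-bar chart: limits come ONLY from within-subgroup variation
ucl, lcl = xbars.mean() + 0.577 * rbar, xbars.mean() - 0.577 * rbar
print("days outside x-bar limits:", int(((xbars > ucl) | (xbars < lcl)).sum()))

# individuals (XmR) chart on the daily means: limits include between-day variation
mrbar = np.abs(np.diff(xbars)).mean()
print(f"XmR limits: {xbars.mean() - 2.66 * mrbar:.2f} .. {xbars.mean() + 2.66 * mrbar:.2f}")
```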

    0
    #170654

    Severino
    Participant

    I disagree with you when you say that a goal is not a specification.  Perhaps on the surface they do not appear to be the same thing, but if management states that “we want to be a $5 billion company by 2010” (a goal) it is not difficult to say that the specification is $5 billion minimum (one sided spec limit). 
    If management turns around and says, “We want to keep our inventory as low as possible” then I would state they haven’t even given you a goal at all because it does not give a time to achieve it nor is it specific about what is to be achieved.  If a manager actually issued me such a statement, I would tell them congratulations they’ve already achieved it by giving such a vague goal.
    Having said that, I do agree that Cp, Cpk, etc. have their limitations.  They are data rich in the sense that they attempt to compress a lot of information into a few numbers.  The limitation therefore is not with the indices themselves, but with the data that goes into them.  Few people understand the subtleties necessary to utilize them properly, but if I am not mistaken the same assumptions that go into their calculations are the same assumptions utilized to provide an estimate on the future dpmo.  Therefore, I am confused about what is unique about your proposed approach.

    0
    #170804

    Forrest W. Breyfogle III
    Member

    Relative to the comment that was made: “I disagree with you when you say that a goal is not a specification.”

    Measurements, among other things, need to provide an honest assessment, consistency, repeatability/reproducibility, and peer comparability. It would only seem logical that specifications should follow similar criteria. Since goals are a function of individual opinions, none of these objectives are met when specifications are goals. A specification needs to be something that does not change over time and is independent of people’s opinions. It is important to have measurements and assessments against specifications that cannot be gamed. Goals are fine but need to be SMART (specific, measurable, actionable, relevant, time-based).

    Similarly, it only seems logical that a statement of how a process is doing relative to a specification should provide an honest assessment, consistency, repeatability/reproducibility, and peer comparability. All these criteria are not met using Cp, Cpk, Pp, and Ppk process indices. The values for these indices are a function of how the process is sampled. For example, someone who chooses an x-bar and R chart to monitor a process and then calculates Cp, Cpk, Pp, and Ppk from this control chart data can get very different index values than someone who tracks the same process using an XmR chart and calculates Cp, Cpk, Pp, and Ppk from those sampled values. In addition, the calculated Cp, Cpk, Pp, and Ppk values can vary as a function of the subgrouping frequency chosen within both types of control charts, e.g., month, week, day, hour. Note, these differences are not a result of chance but of how someone chose to sample from the process.

    I am including the following, per your inquiry, to address why the system I suggest is unique. All these issues are overcome using the no-nonsense two steps of the Integrated Enterprise Excellence (IEE) system for determining how a process is performing:

    1. Determine if a process is predictable (i.e., in control) at the 30,000-foot-level. To make this assessment, an infrequent subgrouping/sampling approach is needed so that normal variation of the input data occurs between subgroups. This is unlike traditional control charting, where the primary goal is to “stop the presses” when an out-of-control condition occurs. For example, if raw material changes daily and this could affect our response, then control chart subgrouping could be daily. If in a call center we expect hold time to change by time of day and day of the week, then control chart subgrouping could be weekly when assessing the predictability of the call-center hold time. Note, at the 30,000-foot-level we are not trying to manage the process – only to determine if the process is predictable with the given levels and variation of typical input variables. With this IEE approach, practitioners will typically choose about the same infrequent subgrouping period; hence, the differences between practitioners’ prediction statements will be left to the “chance of the draw” when selecting samples.

    2. For processes that are predictable, make a prediction statement. If there is a specification, the prediction statement would be a percent non-conformance or ppm rate. If there is no specification, the prediction statement could state the mean value with an 80% frequency of occurrence.

    A Cost of Doing Nothing Differently (CODND) assessment can then be made for the current process to determine if this process should be worked on for improvement relative to other processes. Note, I prefer CODND to cost of poor quality (COPQ) since CODND provides more flexibility; e.g., you can calculate CODND for WIP (the cost of the WIP), but a COPQ calculation is not really appropriate for WIP since there is no true defect (i.e., lower numbers are simply better).

    Improvement goals for processes should then be SMART; e.g., reduce the CODND for WIP by 20% in 7 months. However, we should note that Dr. Lloyd Nelson has said, “If you can improve productivity, or sales, or quality, or anything else, by 5 percent next year without a rational plan for improvement, then why were you not doing it last year?” This statement highlights fundamental problems with the popular red-yellow-green scorecards, which strive to achieve goals (often throughout the organization), typically with no plan for how improvement will be achieved. Processes that have SMART improvement goals are candidates for Lean Six Sigma improvement projects.

    For a process that has improved or changed, the 30,000-foot-level control chart would need to shift to a new region of stability/predictability. This shift or variability change needs to be assessed statistically (before and after the change) when determining if the change was large enough to be consistent with the goal that was established. It should be highlighted that an IEE deployment has 30,000-foot-level metrics for all value chain organizational steps; i.e., these metrics are not just for manufacturing. Forrest Breyfogle

    0
    #170814

    Severino
    Participant

    Admittedly I am a novice when compared to one such as you.  In that sense, I am honored that you even took the time to respond to my post and I don’t want to give the impression that my goal here is to flame you in any way.  Rather my goal is to promote intellectual discussion.
    In keeping with this theme, I must say that you still haven’t sold me that there is any difference between a specification and a goal.  You state that a goal is based on opinion and that specifications do not change over time and are independent of people’s opinion.  In general specifications are created when user requirements are translated into technical requirements.  When this happens, opinion (technical opinion, but opinion no less) comes into play. 
    For example, a user may state: I want a car that goes fast and is fuel efficient. While that seems reasonable, it still reflects the desire of one person or group of persons (no matter how large or small that group may be). The technical individual may then say that what they want is a car that can do 100 mph minimum and gets over 30 mpg. Here opinion has come into play. 
    Now at the output of the design phase you can in fact go back and validate that the technical requirements met the user requirements, but that is about the only thing that comes close to factual throughout the whole process. How is the setting of a goal any different? It isn’t. To suggest that a specification is unchanged over time is flawed as well. Were that the case we would still be driving the same cars we did 30 years ago, with the same reliability, using the same energy sources. You can state that specifications are frozen for a particular item/process for a time, but then so again are goals (a 5-year plan), for if the goal changed every day how could we possibly meet it? 
    I will agree with your SMART acronym, but I say it applies just as much to specifications as it does to goals.  There is no difference.
    As far as the 30,000 foot view and your process of irregular sampling, I will reserve judgment.  It seems to me that you are suggesting that infrequent sampling is a method to ensure that you have captured all sources of variation within reason to make a prediction on the future performance of that process.  What seems to be missing from this (at least as far as you explain) is the legwork of ensuring your process is stable before making such a prediction.  If your process is not stable how can you possibly state how it will perform in the future?  Secondly, I fail to see how “infrequent” sampling would have any statistical advantage over the type of data collection used when constructing a control chart since if I sample every 4 hours and you sample infrequently I am going to have a heck of a lot more data than you with which to make predictions. 
    Please point me to the specific book where you detail this infrequent sampling and future prediction process so that I can digest this information without the need to make inferences about what you are trying to say.  Again, I appreciate the time you took in constructing your reply and wish you the best in your sale of books and consulting services.
     

    0
    #170859

    vmp01
    Member

    Yes, I agree. You must check the deviation around the mean, and you must also perform an MSA (Gage R&R) on your gauges. Most important, first of all, check whether you are using adequate resolution for the device or equipment you will use. Then repeat the study and verify your Cpk. If the problem is still there, check the X-bar/R chart to see whether something changed during the study (points out of control, etc.). If you see something there, try to eliminate it and repeat the study again. The point here is to eliminate all the noise (sources of variability), internal or external. Also check whether you are using the correct tolerances for your process.

    0
    #171044

    Forrest W. Breyfogle III
    Member

    I have a fundamental belief that metrics need to drive the right kind of behavior at both the low level and the high level of the business. However, it has been my experience that business metrics, both high-level and low-level, often lead to destructive behaviors. For example, driving to goals using a red-yellow-green scorecard approach throughout an organization can lead to very bad behaviors. The goal-setting metrics of red-yellow-green can result in counter-productive initiatives, 24/7 firefighting, the blame game, and a proliferation of fanciful stories about why goals were not met.

    At the time of its writing, the authors of “Real Numbers: A Lean Accounting System” were financial executives. The book makes some significant statements. They state that managers have been forced to understand their departments not in terms of income and cost but as variances to goals that have little relationship to reality. These same managers have learned that variances can be nudged up or down to present a better picture of their operation. The book also states that complex accounting systems have created somewhat of a fun-house mirror, where a skinny man can look fat by simply shifting his position.

    Often traditional performance metrics have fiscal-year or quarterly groupings, lack a systematic approach for making long-lasting improvements, and make comparisons to a previous month or year as point estimates. Traditional management systems of reporting metrics don’t systemically consider the enterprise as a system of processes where the Y process output is a function of the X inputs to the system; i.e., Y=F(X). To achieve long-lasting improvement one needs a systematic approach to either identify and then adjust key Xs, or implement a fundamental process enhancement. Simply setting a goal for the Y output does not make things better – this form of management could be called management by hope.

    What I am suggesting is a no-nonsense system of tracking meaningful metrics throughout the organization, i.e., not just for a Six Sigma project. Obviously you can do what you want, but hopefully you now at least better understand my point about using goals as a specification when making a process capability statement. With the system I am suggesting, metrics are reported in time-series fashion, stability/predictability is determined, and then, if a process is predictable, a no-nonsense statement is made about what you predict.

    Consider now what can also happen when Cp, Cpk, Pp, and Ppk process capability indices are thrown into the picture – can everyone in the organization really understand what these metrics mean? As if these metrics are not confusing enough, they are often reported for processes that have not been demonstrated to be stable. This is a very big deal. Consider a process that changed mid-year, where the process capability index was calculated for the full year. This would lead to erroneous conclusions about the process capability index value, since the index was calculated from two different levels of the process (before and after the change). Hence it is important to first determine if a process is stable. For the region of process stability, one can then determine its process capability.

    The next question is how to determine if a process is predictable. An x-bar and R chart does not consider variability between subgroups; hence, x-bar and R charting is not used to determine process stability/predictability at the 30,000-foot-level. The subgrouping frequency needs to be infrequent enough that normal input variations occur between subgroups, which affect the control chart limits. If this is not done, inputs that affect the process output can make the process appear unstable – this is a big deal.

    You asked, “Please point me to the specific book where you detail this infrequent sampling and future prediction process so that I can digest this information without the need to make inferences about what you are trying to say.” I am only mentioning this because you asked – the Integrated Enterprise Excellence system describes these methodologies in detail; i.e., “The Integrated Enterprise Excellence System: An Enhanced, Unified Approach to Balanced Scorecards and Business Improvement” and the three-volume series “Integrated Enterprise Excellence: Going Beyond Lean Six Sigma and the Balanced Scorecard.” Hope this helps. Forrest Breyfogle

    0
    #171052

    Mikel
    Member

    What nonsense. Your metrics will be no better. The same misunderstanding will prevail. You are just promoting your book.

    0
    #171070

    Forrest W. Breyfogle III
    Member

    Stan,
    Sorry you feel that way.  What you are saying is not true. This is a response that I just got a couple days ago from someone who works at a Fortune 100 company.
    My boss thinks I’ve gone nuts because all I talk about is why we need to adopt IEE – I told him the seminar last week put a fire in my belly! 
    I am also sure sorry that I spent several hours wasting my time trying to address a complex topic in a few words in this discussion forum, only to get flamed. I was asked about my book — guess I should have been rude and not responded to the question.
    Forrest Breyfogle

    0
    #171071

    Forrest W. Breyfogle III
    Member

    If your vendors are not looking at a control chart at the same time as making a process capability statement, they could be bridging the analysis over a process that has changed or is not stable. If this is the case, then their process capability statements would be questionable. You also need to take care with normality issues when making process capability statements. Keep in mind also that the frequency of subgrouping and the type of control chart (e.g., x-bar and R versus individuals) can make a large difference in process capability statements from the same process.
    Forrest Breyfogle    

    0
    #171077

    Mikel
    Member

    Wow, you got a guy turned on and he has fire in his belly. Anecdotes are not evidence. All of the criticisms you have of the capability indices are process discipline issues. Changing the name of the metric solves nothing there. You are just trying to sell a shell game.

    0
    #171081

    Forrest W. Breyfogle III
    Member

    Stan,
    To make sure there is no misunderstanding — it was a woman’s response.
    I thought that in Six Sigma we were supposed to be receptive to innovative ways of doing things and of how we should operate with data. I have not seen any presentation of why the Integrated Enterprise Excellence (IEE) system for reporting measurements in a form that everybody understands is wrong, or not beneficial over current methodologies. 
    You have stated that “anecdotes are not evidence.” I have presented evidence and the mathematics in the earlier referenced articles. I have not heard any data- and logic-based statements that support the “nonsense” and “shell game” comments that have been made against the methodology. These reactions seem to be emotionally based; i.e., the same thing that we in Six Sigma discourage people from doing.
    In every face-to-face, open-minded conversation about this measurement methodology, the people I have talked with have bought into the benefit of this measurement methodology over traditional approaches. 
    Stan, please have an open mind when critiquing this methodology. I am confident that you will like the approach when you give it a chance. How about an off-line phone conversation to discuss?  
    Forrest Breyfogle

    0
    #171109

    Anonymous
    Participant

    Your initial response was to Hector, who did not ask anything specific to your book. It appears you have taken extreme liberties with the generosity of the Forum Monitor and engaged in a large amount of self-promotion. To act the victim at this point only makes you appear pathetic.

    0
    #171110

    Anonymous
    Participant

    There is nothing other than words to your “no nonsense approach”; the implementation of any system is what defines nonsense versus no nonsense. As a consultant or salesperson you only sell the product.
    The entire rest of the post is round words that have little or no substance beyond sounding very boardroom-correct.
    Most of us would appreciate it if you would follow forum etiquette and stop the self-promotion.

    0
    #171112

    Severino
    Participant

    To be fair, I did ask about the book and he did respond directly. To also be fair, my intention is not to read the book so that I can suddenly find a fire in my belly; if I decide to read it, it would be to pick it apart. I asked only for the direct reference because:

    Forrest will probably not share the specific details of his methodology since his goal is to sell books
    I did not want to burden the forum further with questions when the answer is already available in a publication (if I read it and still had questions it’d be a different story)
    At the end of the day, I will not go so far as to suggest that Forrest’s books have nothing to offer, but I also doubt very much we are corresponding with the next Juran, Deming, or Shewhart. 
    This whole thread started with questions regarding Cp, Cpk, etc.  These have always been controversial, but so are most statistics where you are trying to lump a lot of information into a single number.  The correct answer is to treat it with the appropriate amount of respect, check your assumptions, monitor your data collection, and remember that these are statistics and not gospel.  Trying to apply cookie cutter methods beyond that to every situation is ludicrous.

    0
    #171113

    Forrest W. Breyfogle III
    Member

    Guess nobody out there appreciates my contributions to this discussion except one person, whose input apparently was deleted from the forum. 
    —-
    A new message by Dr.Samman was posted in the Discussion Forum.
    Great.
    Congratultion for your “added-value” opinion .
    You are teaching us a good lesson on how to contribute ,
    not like those “fake” pretenders….
    —-
    In all honesty I am not sure how anyone can describe a new methodology or business system without referencing articles. 
    Signing off — I have been flamed enough by Mr./Ms. Anonymous. I give. Sorry that I wasted your time with my contribution to this discussion.
    Forrest Breyfogle

    0
    #171115

    George4
    Participant

    I suggest you watch your language when talking to a great author and contributor. He does not need to promote his work. We all appreciate his added-value comments. Finally: who are you, Anonymous?
    Just my advice to you

    0
    #171116

    George4
    Participant

    No,no,no
    It is a known fact that Forrest has written one of the best books in Six Sigma. Nobody can deny that. Like it or not, I believe (and many others do too) that he is the “New Deming.”
    It is pure green “envy” that motivates you to attack such a great author.

    0
    #171117

    George4
    Participant

    It is Cpk

    0
    #171118

    George4
    Participant

    Forrest
    What do you mean by “30,000-foot-level” reporting?
    Please elaborate more.
    Thanks and regards 

    0
    #171119

    George4
    Participant

    Well said
    Therefore Villanova teaches all SS courses without Minitab; only a simple scientific calculator is required. First understand the statistical concept (so it clicks in the mind), then later use Excel, Minitab, etc. to solve the problems.

    0
    #171124

    Forrest W. Breyfogle III
    Member

    George,
    I had planned not to give any more input to this question-and-answer dialog. It is not fun getting flamed when you are trying to be helpful. At the risk of getting flamed again by Mr./Ms. Anonymous or any other alias, I will answer your question, trying to be very careful (paranoid) in my wording.
     
    A 30,000-foot-level metric is the tracking/reporting of a Y variable process response that is used in the Integrated Enterprise Excellence (IEE) system for describing a project’s primary metric or organizational-value-chain operational metric. The IEE organizational-value-chain-operational-metric reporting is in contrast to red-yellow-green scorecard reporting, which is a common business measurement that can result in a lot of firefighting and/or playing games with the numbers.  (Note – if you are happy with red-yellow-green scorecard reporting, please do not read any further because this methodology is not for you.)
     
    The 30,000-foot-level approach has two parts. One part is to determine if a process is predictable, while the second is to make a prediction statement when the process is predictable. This prediction statement is to be in terms that everyone can understand. Since it has been my experience that not everyone from the operator to the CEO really understands Cp, Cpk, Pp, and Ppk index reporting, these metrics are not reported as part of a 30,000-foot-level report-out. An example 30,000-foot-level prediction statement might be a simple log-normal probability plot showing a 2.1% future non-conformance prediction. If at the operational level a 2.1% future non-conformance prediction would have a cost of doing nothing differently (CODND) that is excessive and/or is an enterprise constraint, this would be an opportunity for creating a pull for an improvement project. If there is a value chain owner for this 30,000-foot-level metric, they will surely want to get this project completed as soon as possible – since they understand that their performance appraisal depends upon successful project completion. (If everyone in your organization is happy with your Cp, Cpk, Pp, and Ppk reporting and there is no confusion, please do not read any further, because this methodology is not for you.)
     
    The 30,000-foot-level control chart has infrequent subgrouping/sampling so that short-term variations, which might be caused by day-to-day variability of key-process-input-variables (KPIVs), will result in charts that view these perturbations as potential common-cause variability sources to the Y variable.  This is VERY different than traditional control charting, where the main purpose of traditional control charting is to identify assignable-cause conditions and “stop the presses” for problem resolution.  The individuals control chart is the control chart that mathematically accomplishes the 30,000-foot-level control charting objective for determining predictability; i.e., the control limits of an individuals control chart are a function of between subgroup variability.  
     
    When constructing individuals control charts it is important that the subgrouping frequency be long enough that normal input variations occur between subgroups. Another thing: when creating a 30,000-foot-level individuals control chart, the data needs to be from a normal distribution or false special-cause signals can be presented by the chart; i.e., the data may need to be transformed before it is plotted on an individuals control chart. The control limits of an x-bar and R chart, along with the p-chart, are not impacted in any way by between-subgroup variability. Because of this, x-bar and R control charts and p-charts are not used in IEE to determine if a process is predictable. (If everyone in your organization is happy with your use of x-bar and R charts, along with p-charts, and there is no confusion and people always react appropriately to assignable-cause signals from these charts, please do not read any further, because this methodology is not for you.)
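    [Editor's note: as a concrete illustration of that paragraph (not Breyfogle's own software), here is a sketch of an individuals chart on log-transformed skewed data followed by a plain-language prediction, with all numbers invented for the example:]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
lead_time = rng.lognormal(mean=1.5, sigma=0.4, size=60)  # skewed output, e.g. days
usl = 12.0                                               # hypothetical upper spec

x = np.log(lead_time)  # transform first, or the XmR chart throws false signals
mrbar = np.abs(np.diff(x)).mean()
ucl, lcl = x.mean() + 2.66 * mrbar, x.mean() - 2.66 * mrbar
predictable = bool(((x < ucl) & (x > lcl)).all())

# If predictable, report a prediction everyone can read: expected % beyond spec
pct = stats.norm.sf(np.log(usl), x.mean(), x.std(ddof=1)) * 100
print(f"predictable={predictable}, predicted non-conformance ~ {pct:.1f}%")
```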
     
    The 30,000-foot-level control chart’s intent is not to provide timely feedback for process intervention and correction, as traditional control charts are intended to do. Example 30,000-foot-level metrics are lead time, inventory, defective rates, and a critical part dimension. A 30,000-foot-level individuals control chart can reduce the amount of organizational firefighting when used to report operational metrics. As a business metric, 30,000-foot-level reporting can lead to more efficient resource utilization and less playing games with the numbers – along with a pull for project creation when the predicted value needs to be improved from an overall enterprise point of view. (If everyone in your organization is happy with your measurement and improvement system at the enterprise and project-execution levels, this measurement and improvement system is not for you.)
     
    PS Some people never learn when trying to describe something that they truly believe in, which is different from the norm. What is the probability I will be flamed by someone again?
     
    Forrest Breyfogle
     

    0
    #171126

    Ohno san
    Participant

    I for one appreciate your efforts, Forrest. By the way, I loved the movie… Ohno-san

    0
    #171127

    George4
    Participant

    Forrest
    That is really great. I believe your contributions to this forum are essential and valuable. I appreciate the time and effort taken to answer my question in this elaborate manner.
    Many Thanks 

    0
    #171133

    Mikel
    Member

    George and Ohno – And your results using Forrest’s methods? Oh, it’s only theory? Oh, it is only theory.

    0
    #171134

    Mikel
    Member

    Forrest, you may want to check the contributions of Dr. Samman – he is a poser. Do you know the term?
    Forrest, instead of presenting theory or anecdotes, give us real examples of real results.
    I appreciate your contributions and use your book – Implementing Six Sigma – in fact you’ll find I am responsible for hundreds of copies out there in the hands of real BBs. I do tell them that your real contribution is putting all the tools in one place and teaching things like Box-Meyer for analyzing variation in an experiment. I also tell them your S4 method is crap. I liken it to Shainin’s secret language that must be used to solve a problem.

    0
    #171135

    Mikel
    Member

    Great author? Did Ernest Hemingway make a post while I wasn’t looking?

    0
    #171141

    DaveS
    Participant

    Forrest,
    I have not had an opportunity to find the QP articles, but from your descriptions here, I see no difference between what you are pitching and the best practices for SPC and capability analysis.
    Certainly common cause should be included within the subgroups to prevent over-reaction to signals, but that has always been the case. That appears to be your criterion for “predictability.” I guess I’m missing something profound?
    I have never liked the Cpk metric that much after reading Wheeler and others. They have espoused the fitted histogram and ppm method for years. This is essentially the same as the Ppk (expected) that Minitab produces with its capability analysis (normal and non-normal).
    Arguments for proper conduct (including sample size) of the Cpk/Ppk study are certainly true, but as with any method, the basics need to be adhered to.
    While there may be great power in rebottling old wine, I don’t see it justified in this case.
     
     

    0
    #171142

    Anonymous
    Participant

    If he does not need to promote his work, why does he do it in every post? Obviously he feels differently about it.
    Who am I? Who are you? Unless you are one of George Foreman’s sons, I am sure you cannot really be named George(4).
     

    0
    #171143

    Anonymous
    Participant

    You appreciate Forrest’s time and effort to answer your questions, and Forrest appreciates your continuing to ask questions so he can make one more self-promoting post, regardless of the rules of etiquette that are clearly posted on this site.

    0
    #171145

    Anonymous
    Participant

    Forrest, you have not been flamed by me. I have only asked you to conform to the rules of the Forum, and you continue to choose to play the role of the victim.
    It seems less than coincidental that you have chosen to participate in the Forum, after all these years, at a time when you are promoting your new product.

    0
    #171153

    Forrest W. Breyfogle III
    Member

    In all honesty I don’t know how to respond better, within the guidelines of the forum, to address your points. To address your points I have to state things that I have recently been criticized for stating. Well, here it goes again.
    Remember I am an author and have been working at this for a long time and do have some strong opinions, most of which I have written about previously; i.e., easier to reference an article.
    I have to state the following to defend my actions — it takes focus to have 4 books published (about 2000 pages total) in one year, plus writing articles and developing/refining the IEE business methodology (thank goodness I have an understanding wife who lets me work all the time on something I believe in). Hope this helps you understand why I have not participated in your forum. 
    Note, I am not soliciting that anybody consider any of these books (authors don’t really make much money on book sales anyway — that surely is not the reason they write a book — they have to have a passion for the material). I really don’t think, from the comments I have received, that people in this forum would like the books anyway, since I suggest that x-bar and R charts, p-charts, and Cp/Cpk/Pp/Ppk not be used as part of business measurements.
    Forrest Breyfogle

    0
    #171156

    Forrest W. Breyfogle III
    Member

    Since you wanted me to check out Dr. Samman, I tried and could not find anything on Google. I added “poser” to the search and got your entry to this forum.
    I really would like to talk with you about why you do not like the S4 roadmap methodology in Implementing Six Sigma. Please give me a call about this. Your comment comparing it to Shainin’s methodology hurts. I am not a big fan of that methodology either.
    Would you really like for me to put together a case study showing how IEE turned a plant around in how they drove activities through their measurements? Or would this appear as solicitation?
    Forrest Breyfogle

    0
    #171157

    Mikel
    Member

    Forrest,
    I didn’t fall off the turnip truck yesterday. People turn plants around, not IEE.

    0
    #171158

    Brandon
    Participant

    Oh Forrest, trying to debate an issue on this forum and expecting reason and logic. You’ve not read enough of the strings to realize this is a futile effort.
    It’s fun giving it a try though; as long as you realize that is all it is…a try.

    0
    #171159

    Stevo
    Member

    Run Forrest, run.

    0
    #171161

    Mikel
    Member

    Saying your measurements are faulty because your process lacks discipline and saying you can solve that with a different measurement is not reason and logic. It’s BS.

    0
    #171162

    Anonymous
    Participant

    You can’t explain what you have done without throwing out your IEE reference at least once per paragraph? The etiquette addresses self-promotion, and you excuse yourself because you are an author. Since you appear to believe you have more insight than most and a degree of influence, that should demand a higher standard of behavior rather than a lower one.

    0
    #171163

    Anonymous
    Participant

    It would seem unusual for someone to take the position of the exasperated defender of reason and logic after posting that Master Black Belts work for MBAs. That would be indicative of you having an MBA and not being a Master Black Belt, I would suppose.

    0
    #171164

    Mikel
    Member

    Brandon is the consummate promoter of his family’s business

    0
    #171165

    Brandon
    Participant

    Anonymous, although you posted blind, no addressee, I presume you are speaking to me re: MBA & MBB.
    Not that your logic is correct but your conclusion is. MBA, yes, MBB, no.
    So? An MBA prepares one for mngt. An MBB prepares one to teach and conduct projects. An MBB works for mngt, ergo the MBB works for the MBA.
    There’s the logic…but then, as I said to Forrest…you’ll create whatever logic you choose…so I’m just responding for fun.

    0
    #171166

    Brandon
    Participant

    Stan, quote anything from this string that has to do with me promoting anything. Just another arrow shot out at no target.

    0
    #171167

    Anonymous
    Participant

    The post is not blind. The simplest of minds can understand that when you answer a post it is indented beneath the post you have responded to. You have responded to my post and therefore your response is indented beneath mine just as mine was indented beneath yours. I am very happy to see that MBA has done wonders for you.
    The idea that gaining an MBA qualifies anyone as a manager is completely ludicrous. It is a marketing ploy that the universities have sold to create an imaginary caste system within the incompetent corporate ranks.
     

    0
    #171168

    Brandon
    Participant

    And from you being you, Mr/Ms Anonymous, I guess we are to accept that as gospel.
    The pronouncement has been made….no one else should have an opinion.

    0
    #171169

    Brandon
    Participant

    Anonymous, you have no idea what you’re talking about now.
    Just blind insults….much like Stan. Hhmmm….

    0
    #171173

    Anonymous
    Participant

    That seems to be a sensitive area. I would suppose that is because it is perhaps too close to home.
    That is an interesting tactic. You throw an insinuation towards Stan that is completely blind and believe that will remove the focus from you. You let yourself into this string with some nasty little comment and you don’t wish to deal with the heat.

    0
    #171174

    Brandon
    Participant

    “Sensitive area” – nah, happy with what I do and where I’ve been.
    “Blind toss” re: Stan. Nah, he posted that all I do is self-promote. Not a word of self-promotion in my posts here.
    “In the string with nasty comment.” Nah, told Forrest to quit expecting logic in any debate here….and you are proving me correct. Thanks.

    0
    #171175

    Anonymous
    Participant

    I notice you have respond emotionally and to ignore the issue of your lack of understanding of the mechanics of the forum that you still do not understand it. I am willing to wait while you pull out those books from your elementary school days and find the section on outlining. Perhaps that will clear up this mystery for you.

    0
    #171176

    Brandon
    Participant

    Yeah, I got it…just hadn’t taken notice before.
    Perhaps you should check those elementary school books as well – for grammer –
     “I notice you have respond emotionally and to ignore the issue….”
    I was still able to determine your meaning however.

    0
    #171177

    Anonymous
    Participant

    The response was too emotional to be from someone happy with what you do and where you have been. You took a little break to regain your composure, and now the feigned casual attitude. It does take a certain level of arrogance to believe that the entire thrashing about was not transparent to the entire Forum.

    0
    #171178

    Forrest W. Breyfogle III
    Member

    I am trying to promote something that is different. In all honesty I am not trying to be self-promoting, just explaining the system. I just believe that we need to do something different in our measurements and improvements at the enterprise level. Sorry I have not done a very good job with my explanation and have turned some people off.
    I think it is time to close this string of dialog. I have felt enough heat.  
    Forrest Breyfogle

    0
    #171179

    Dr. Ravi Pandey
    Participant

    Forrest,
    If you are going to share knowledge, you cannot give up because of others. Truth never changes, irrespective of what people think…
    I wish people were more constructive, but then none of us has the ability to change the world. Look at the positive influence… not the negative comments. The so-called guardians of Six Sigma are supposed to be data and logic driven, and it at times is disappointing to see some of the things posted.
    By the way, I recall your name… I think we met at the Dallas DFSS conference several years back. But I am not sure… maybe age is catching up!!!
    Anyway, just my 2 cents.
    rgds
    -ravi Pandey

    0
    #171182

    Mikel
    Member

    The only point is you come to the defense of self-promoters because you are one.

    0
    #171193

    Brandon
    Participant

    Stan, alright, at least I now understand the logic behind the comment. Although I was not encouraging him… I was telling him not to expect a reasonable discussion here… this is Attitude Central.

    0
    #186415

    Mohammed A.
    Participant

    My research on IEE leads me to conclude that Breyfogle is light years ahead in practical knowledge and thinking. IEE is just superb!
    For sure, an open and creative mind is required to understand, digest, and implement IEE.
     
     

    0
    #186420

    Severino
    Participant

    You forgot to mention the primary requirement… $157

    0
