This article breaks this expansion process into two stages: 1) creating an FMEA framework and 2) following the steps for each individual FMEA project.
Design, Process or System
Most companies start their FMEA programs by performing process FMEAs. They often learn quickly that a lot of production problems can be prevented by better design. Correcting problems by design is also more cost effective. So at the start of an FMEA program, it is important to decide if the focus of the FMEAs will be design, process or system.
What will the hierarchy of FMEAs be? For example, if design FMEAs will be applied, a design verification plan and report (DVP&R) will also be needed. If focusing on process FMEAs, control plans are likely to be needed.
It is definitely not necessary to start with all three at the same time. But before beginning a particular project, determine the scope and plan which techniques will be applied.
It is also important to consider what language will be used if an organization is internationally located or has geographically dispersed team members. FMEAs and DVP&Rs are more likely to be developed by high-level teams and may be in the company’s default business language (often English). Control plans, however, are used on the shop floor and will be in the local language.
Standard vs. Customer-specific
FMEA is an important and effective way to establish process knowledge. To optimize the benefit of this knowledge, it is useful to create standard FMEAs for individual process steps, or groups of them. The same can be done for design requirements. To create customer-specific FMEAs, use the standard FMEA as a baseline and add the customer-specific requirements. Using standard FMEAs ensures process knowledge is preserved and that every customer benefits from it.
Who Is Going to Do What?
There are too many differences between companies to define a single standard approach to FMEA deployment and management. That is why each company must define standards for itself. For example, in some businesses it may seem obvious that the quality manager has overall responsibility for FMEA. Others, however, find it more effective to assign this responsibility to the engineering manager. From there, section managers may be responsible for the designs and processes in their sections, and engineers for the FMEAs of their designs and processes. Other employees have parts to play as well, not just in an FMEA rollout but also in generating the FMEA and completing its recommendations. All members of this greater team should be included in the rollout, and their roles and responsibilities should be made clear.
Be sure to have answers for the following:
Software Selection, Worksheets and Ranking Criteria
It is likely that most practitioners did their first FMEAs using a standard spreadsheet and that worked fine. In considering a global rollout, however, there are some possible challenges related to using Excel. When selecting the best software to use for an organization’s FMEA, some questions to consider are:
Additionally, an organization must:
The more intuitive and relevant the ranking sheets are to an organization the more likely they are to be accepted and used.
FMEA Training
As part of an FMEA rollout, there will be a whole range of staff involved who have totally different backgrounds. To ensure a team has the knowledge required to successfully participate in the FMEA process, a spectrum of training programs may be needed, including the following:
Legal Considerations
In many industries, there is a lot of transparency between suppliers and customers. Customers may have access to a range of performance metrics including scrap, cycle time and process capability. Be careful who has access to an FMEA. A good FMEA will contain an organization’s “dirty laundry” – it will list the things that are good enough but could be better. To minimize the risk of an FMEA ending up in the wrong hands, consider enacting the following:
An As-is Process Flow
Process flow is a critical component of a process FMEA. A process flow with all of the process steps may already exist and is useful as a jumping-off point, but what is needed is an as-is process flow. It can be created by walking the flow on the shop floor with production personnel. Keep an eye out for steps that are often missed in process flows but can be significant sources of variation:
Also consider where the process flow starts. If it starts on the production line then it is possible that variation that occurs in the stores or at the kitting steps is not being taken into consideration. While making the process flow it is also important to distinguish between the standard processes and process activities required for specific customers.
Incorporate Current Failure Information
Any organization has a huge amount of information on failure modes and potential causes, including details of field failures and customer returns, statistical process control out-of-control points, and maintenance and breakdown reports. Frequently the severity of these items is known, but occurrence and detection information still needs to be addressed. Collect this information prior to an FMEA rollout so it can be used to proactively improve processes.
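The severity, occurrence and detection information mentioned above is commonly combined into a risk priority number (RPN = severity × occurrence × detection, each ranked on a 1-to-10 scale). A minimal sketch, using hypothetical failure modes and rankings, shows how collected failure data can be prioritized ahead of a rollout:

```python
# Hypothetical failure-mode records: (name, severity, occurrence, detection),
# each ranked 1-10 per the organization's ranking criteria.
failure_modes = [
    ("Cold solder joint", 7, 4, 3),
    ("Missing fastener", 9, 2, 2),
    ("Label misprint", 3, 6, 5),
]

def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of the three rankings."""
    return severity * occurrence * detection

# Sort highest-risk first to prioritize corrective actions.
ranked = sorted(failure_modes, key=lambda fm: rpn(*fm[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"{name}: RPN = {rpn(s, o, d)}")
```

The rankings and failure modes here are invented for illustration; an organization's own ranking sheets define the actual scales.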
Determine the Scope of the FMEA
After deciding to run a process FMEA, for example, there is more work to be done before heading into the first team meeting. It is important to define the scope of the FMEA by considering the following:
It is useful to limit the scope of the first few FMEAs to a region of the line – for example, one where problems are already being addressed or where customers are reporting issues.
Make the Scope Visible
Have the as-is process flow available in FMEA meetings. It is important that all team members be reminded of not only the purpose of the FMEA, but also any restrictions. There will be time for another FMEA on another day to cover a part of the process that is not included in the active FMEA.
Bring the Team Together
After establishing the process flow and transferring this to the FMEA template, start to think about the team. It is important to remember that an FMEA is a team process and the FMEA will only be as good as the team. There should be a core team who own the FMEA. In addition, there will be experts and knowledge owners who contribute as required. The core team should include the FMEA owner and an expert in running the FMEA process.
Think of the personal characteristics of the team. It is likely that a large organization has its own teambuilding experts; it is useful to consult with them as a team is organized.
Typically the core team will be between five and seven people. Define roles for each team member to help facilitate the FMEA activities.
Establish the Role of Suppliers
Suppliers will have a huge amount of knowledge that will be relevant to the FMEA. They will understand issues other customers have had and the sources of variation within their product. Be tactful and they will share this information and incorporate it into the FMEA. Organize a specific FMEA meeting to assess their information as it pertains to product quality. For example:
Gather Information
Make sure all the current failure information is easily available during meetings and other FMEA planning. Not only should it be available during meetings, it should be distributed in advance of any meetings to give team members time to review this information. The available information will provoke trains of discussion and help to generate new and improved ideas.
FMEA is deceptively simple. Performing one high-quality FMEA for a problem process can be taxing. Meetings can get bogged down in debates over severity rankings. The documented process flows can differ from the actual process flows. When it is time to roll the process out to a whole organization, these problems may multiply.
Follow the solutions described in this article to make not only individual FMEAs but also organization-wide FMEAs successful from the start.
The Pugh matrix was developed by, and named for, design engineer Stuart Pugh. A criteria-based matrix, it is used to help a team quantitatively determine the “best” solutions to a given problem from several alternatives. Commonly used for product design, it can be applied to virtually any decision-making process.
Often used in a team setting, a Pugh matrix facilitates the evaluation of various solutions with respect to defined selection criteria – requirements or desired characteristics of the solution, generally defined with voice-of-the-customer input. Those solutions can then be compared to a baseline or to one another, blends of ideas can be explored, and the best of all options can be determined by the group.
Consider a family that wants to buy a new car. A simple Pugh matrix analysis of their choices would look like the following.
| Selection Criteria | Baseline – Current Car | Car A | Car B | Car C |
|--------------------|------------------------|-------|-------|-------|
| Fuel system        | 0                      | 2     | 2     | 1     |
| Four-door          | 0                      | 2     | 2     | 2     |
| Miles per gallon   | 0                      | 2     | 2     | 0     |
| Sound system       | 0                      | 1     | 2     | 1     |
| Warranty plan      | 0                      | 0     | 2     | 1     |
The family’s current car is the baseline for this decision and is marked with all zeros. Each of the alternative car options is evaluated against the criteria in the far-left column on a range from -2 to +2, with -2 being a poor match to a given criterion and +2 being the best match. Car B, with best-match marks on every criterion, is the best solution based on the specified criteria.
In more complex analyses, selection criteria can be weighted to more accurately reflect the varying importance of different solution requirements.
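Given numeric scores like those in the example, the tally (optionally weighted) is simple to compute. A minimal sketch in Python, using the scores from the table:

```python
# Scores from the example table: each alternative is rated per criterion
# on a -2 (poor match) to +2 (best match) scale; the baseline is all zeros.
criteria = ["Fuel system", "Four-door", "Miles per gallon",
            "Sound system", "Warranty plan"]
scores = {
    "Car A": [2, 2, 2, 1, 0],
    "Car B": [2, 2, 2, 2, 2],
    "Car C": [1, 2, 0, 1, 1],
}
# Equal weights reproduce the simple analysis; adjust them to reflect
# the varying importance of different solution requirements.
weights = [1.0] * len(criteria)

def total_score(option):
    """Weighted sum of an option's criterion scores."""
    return sum(w * s for w, s in zip(weights, scores[option]))

best = max(scores, key=total_score)
print(best, total_score(best))
```

With equal weights this reproduces the unweighted result from the table: Car B scores highest.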
To learn more about the Pugh matrix, refer to the following articles and discussions on iSixSigma.com, several of which include realworld examples of this tool’s application:
Before you dismiss this as a rant, it is important to point out that not all consultants are charlatans. The consulting model once was that you would hire someone who was a proven expert, someone who had amassed great expertise and knowledge, and bring them in to advise you on how to better your company. They were proven experts with thousands of hours of hands-on application of each principle and tool they sought to teach you. I don’t wish to talk about those firms today. In fact, if your consultant meets Malcolm Gladwell’s 10,000-hour definition of an expert, then you should hire them with no reservations.
The “expert” I want to talk about today is the “instant” consultant: those people and firms that proclaim themselves continuous improvement experts through publication, education or association. Often these firms are experts in another field and see continuous improvement as a valuable line extension. Unfortunately, they don’t see the value in investing in actually becoming experts; it’s easier to simply define yourself as one. The problem, of course, is that the expertise of these newly minted consultants is superficial and often over-branded. Their approaches are formulaic and limited, and their branding hinges upon renaming tried-and-true tools and approaches. This is a toxic soup for continuous improvement because it devalues the real seminal work of others, consumes real productivity and ultimately destroys future growth and innovation.
McKinsey & Company reports claim only a 30 percent success rate when engaging in significant business transformation. Independent sources acknowledge the high failure rate but put the success rate around 65 percent to 66 percent. That should tell you something – you are twice as likely to succeed without McKinsey as you are with them. Why is that?
Don’t be fooled. The people at the top five consulting firms are very smart. They are capable and they work hard. It’s not the people; it’s the system. The environment in which these consultants work dictates and restricts their success rate. No amount of hard work, superior know-how or luck is going to overcome this. If you want better results, you need a better system.
There are two camps when it comes to quality and improvement. The first is the traditional accounting and quality audit types who favor ensuring that form is perfect. Their argument is that form defines function and this is a pretty good system for maintaining the status quo. The other camp is the engineering and scientific types who favor deep understanding and mastery of the fundamental drivers of a system. Their argument is that function defines form and this is a good system for driving changes. So ask why you hired your consultant: was it to reinforce the current system and become more like your competition, or was it to make significant changes to the current state and create breakthroughs? Whether function or form dominates, form is still important.
Expertise is, however, what mathematicians call necessary but not sufficient. In order to be a leader of change, people must willingly follow you. When you circumvent your in-house staff, when you discount your internal experts, you are effectively banking on self-interest. If your consultant delivers tangible results that are independently sourced, that self-interest may be enough; but when your consultant leverages internal expertise or makes recommendations without following them through to results, your team will become demoralized and ultimately fail. This will also occur if you place your trust in an outsider and that outsider fails to generate results or merely makes recommendations and then moves on. What it really boils down to is whether all the parties, and especially the consultant, care about the outcome of the transformation you are undertaking. If they don’t, no amount of expertise will overcome this obstacle.
The real litmus test for whether your consultant cares is whether they think about your company and your problems after the solution has been proposed. There are consultants who feel their responsibility ends with recommendations. They focus on your issues for a short while, do the job and then move on. If things work, that’s great and if not, then you must have either misrepresented the issues (causing them to make poor recommendations) or not followed the recommendations. There is no commitment to seeing the issues through to completion; it’s just a job. The real expert consultants, on the other hand, focus on ensuring the work they do has an impact either because they care about your company or because they care about the product they deliver. In both cases, there is a commitment after the sale. Being true to one’s craft means having pride in the outcome and application – not just making the sale.
Finally, there is the issue of knowledge transfer. It is human nature to protect one’s source of power; after all, this is how we make our living. But true experts derive their value from sharing, and thus enhancing, their expertise and experience. They are not threatened by people challenging their recommendations. Challenges simply set them up to explain and defend those recommendations, actually enhancing their cachet and position. They are secure in the knowledge that they grow by teaching their customers and constantly expanding their know-how.
Compare this to the self-appointed experts who are constantly defending their position and power. To share and transfer knowledge diminishes their cachet, and so they guard it jealously. These are the advisors who, when their recommendations are challenged, focus on the challenger. They have no security in teaching – only in indoctrinating. Since they can only be sure of properly solving problems that fit the model upon which their expertise is based, dogma must prevail. Deviations from this dogma are a serious threat; if others know all the details of the process, the advisor has no future role.
Dogma in and of itself can be a good thing if it is accompanied by competence. When competency is high, dogma becomes a set of guiding mileposts or checkpoints to keep the program aligned. When competency is low, unfortunately, these guiding checkpoints are turned outward. Rather than being a positive guide that helps the process, they become blinders limiting how problems can be solved and bludgeons to defend why recommendations must be followed. It’s a toxic combination. Dogma with low competence stifles buy-in, destroys confidence (both for the consultant and for those receiving their recommendations) and effectively disenfranchises the local experts (who often leave), making future success uncertain and unlikely.
So what are we to do? Clearly you wouldn’t be hiring outside help if you knew all the answers and had all the solutions in an acceptable form. You have a need, you need help and the choice of who helps you will make all the difference.
Be aware and beware. Know what your consultant is selling you. Know where their expertise is sourced and how they will use that expertise to improve your business. The good guys won’t hesitate to tell you. They know they have something to offer that transcends the first inquiry and they will be focused on building that expertise in your organization so sharing is never a problem.
Hire experts. And make sure those experts are working on your problem. If you are going to spend money to train someone, spend it on your internal people – they are more likely to stay. You want people who actually know how to solve your problems, not people who are using you as a case study to prepare them for a bigger career.
Know what motivates your consultant. Do they truly care whether you succeed or fail or is your contract just another job? Are they bringing you expertise or are they harvesting the knowhow from your internal people so they can share it later with someone else? Be careful of the assumptions.
Learn to recognize form versus function. Different people think in different ways. Make certain the problem-solving style you choose is consistent with the type of problem you have. Hire a form-over-function consultant for your ISO 9000 preparation and they will be great. Hire that person for your continuous improvement team and they will be a disaster. The opposite holds true for your function-over-form people – great at continuous improvement, lousy at ISO.
Ensure knowledge transfer! You have a problem. Most likely you lack expertise and that is why you hired a consultant. If they don’t transfer that expertise to your company, how will you deal with any recurrences of that problem?
Understand that knowledge transfer is different from training. Training focuses on information transfer and behavior modification, and this is part, but not all, of what is needed for knowledge transfer to occur. Your goal in bringing in an outside consultant rather than hiring a new employee should be to build the wherewithal internally to solve problems similar to the one you have now. This includes the technical, social and managerial aspects. Teams need training, but they also need coaching to ensure that the knowledge transfer has occurred.
Take care of your internal experts. Yes, they are probably the people who got you to the point where you need an outside consultant but they are also the people who will manage your processes after your consultant leaves. They also, most likely, know what your consultant will do to solve the problem.
Managed well, a consulting engagement is a fast and effective way to bring expert knowledge and experience into your firm and rapidly change an organization. Managed poorly and that engagement will do more harm than good. It is up to the business executive bringing that consultant in to make sure the engagement is managed well. Never turn over the keys to your business to someone else. Take the time to understand what you are buying and make sure you get what you pay for.
5S is a system for instilling order and cleanliness in the workplace. Through an emphasis on organization and visual cues, it is a means to reduce waste and improve efficiency.
With its roots in Lean manufacturing in Japan, 5S is a tool now used throughout the process improvement community and may be applied to any workspace – from an office to the factory floor. It is often the first Lean tool a company will use before moving on to other optimization techniques.
In Japanese, the 5S’s are seiri, seiton, seiso, seiketsu and shitsuke.
In English, the 5S’s have been translated to keep the “s” as the first letter of each word: sort, set in order, shine, standardize and sustain.
In some businesses, a sixth S is added: safety.
5S is not a tool to be applied one time; rather the principles should be embedded into the daily culture of an organization. Emphasize the fifth S, sustain.
To learn more about 5S and related topics, refer to the following articles, blogs and discussions on iSixSigma.com:
To read more 5S articles, click here.
The German word takt means musical meter or beat. In a manufacturing setting, it is used to refer to the pace, or “beat,” of production. First used as a production management tool in the aircraft industry in Germany, takt time is a measure within Lean manufacturing that represents the rate at which a completed product needs to be finished in order to meet customer demand.
Described mathematically, takt time is:
Available minutes for production / required units of production
For example, a factory operates 1,000 minutes per day. Customer demand is 500 widgets per day. The takt time, then, is:
1,000 / 500 = 2 minutes
Note that the time available for production should reflect the total number of hours employees work minus time spent on any breaks or meetings.
If the available production time for the factory extended to 1,500 minutes per day, the takt time would change to:
1,500 / 500 = 3 minutes
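The two worked examples above can be wrapped in a small helper; a minimal sketch:

```python
def takt_time(available_minutes, required_units):
    """Takt time = available production time / customer demand."""
    if required_units <= 0:
        raise ValueError("customer demand must be positive")
    return available_minutes / required_units

# The worked examples from the text:
print(takt_time(1000, 500))  # 2.0 minutes per widget
print(takt_time(1500, 500))  # 3.0 minutes per widget
```

Remember that `available_minutes` should already exclude breaks and meetings, as noted above.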
Why use takt time? Businesses that are following Lean principles want to know the minimum level of resources necessary to deliver to customer demand. Takt time is a tool for doing that – it helps companies establish workflow that follows a consistent and smooth pattern based on demand. The use of takt time works best in settings where the work is repetitive and the average demand is predictable.
As important as describing what takt time is, it is just as important to understand that takt time is not the same as cycle time (unless a business unit clearly defines it as such).
Cycle time is the time it takes to complete a process – the inherent vagueness of that definition is exactly why cycle time is not the same as takt time. Cycle time is broad enough to encompass nearly anything; takt time is one specific calculation.
To learn more about takt time and related topics, explore the following articles and discussions on iSixSigma.com:
In process improvement, determining the root cause of a problem and implementing changes are only part of the recipe for success. Ensuring that there is commitment to the new way of doing things is another vital ingredient. Those collective activities of making a transition in an organization – be it a new process or a change in company policy – are broadly referred to as change management, although often the term is used to emphasize the “soft” side of making change – vision, communication, buy-in, training, overcoming resistance and inertia, and so on.
It can be a challenging topic for Lean Six Sigma professionals – not only is it a vast discipline requiring specialized skills, but many people – at all levels of an organization – are innately averse to change. And it requires a shift in focus from numbers and analysis to people and motivations.
Fortunately, there are methods and specific tools for facilitating change and helping an organization achieve continuous improvement goals.
For example, GE utilizes its Change Acceleration Process model, and John Kotter, author and Harvard Business School professor, advocates an eight-step process for leading change:
To learn more about change management and related topics, explore the following articles and discussions on iSixSigma.com:
Additional resources are available for purchase on the iSixSigma Marketplace:
To read more change management articles, click here.
To view a list of change management consultants, click here.
According to respondents of the 11th Annual iSixSigma Global Salary Survey, the average annual worldwide Black Belt (BB) salary declined $224, a decrease of less than 1 percent. The difference is not statistically significant, indicating that salaries have not substantially changed over the past year. Only time will tell if this stagnation is the beginning of a reversal in the overall upward trend of average salaries in the decade since this research was first undertaken, or simply part of the pattern of continued gradual increase over time (Figure 1).
The average salary of Champions, on the other hand, increased dramatically to $127,500, the highest average salary worldwide for Champions since the iSixSigma salary survey first began in 2004. However, with so few Champion data points, not too much can be read into this. Last year the average salary for Champions was $90,833, and $88,929 in the 2012 report (Figure 2).
Although BPs may not be in Six Sigma roles currently, nearly half of these professionals (47 percent) are certified as BBs and 29 percent are certified as MBBs (Figure 3).
Data for the Annual iSixSigma Global Salary Survey is collected from the iSixSigma Job Shop, where participants answer several required questions (e.g., location, highest level of education, salary range, etc.). Only information from those who provided or updated their information within the prior 12 months was included in the analysis.
This year’s salary survey includes responses from 1,391 participants.
Please note: Survey participants provide salary information by ranges, beginning at < $20,000, then $20,001 to $25,000, continuing in increments of $5,000 to the final salary range of > $200,000. In analyzing the salary data, each range was converted to the median salary for that specific range. For example, < $20,000 was calculated to be $17,500; $20,001 to $25,000 was calculated to be $22,500; and so on.
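The band-to-median conversion described in the note can be expressed directly. A minimal sketch; treating the open-ended bottom band as $15,000 to $20,000 is an assumption, chosen because it is consistent with the stated $17,500:

```python
def band_median(lower, upper):
    """Midpoint of a $5,000-wide survey salary band, per the note above."""
    return (lower + upper) / 2

# Examples from the note; the $15,000 lower bound for "< $20,000"
# is an assumption consistent with the published $17,500 figure.
medians = {
    "< $20,000": band_median(15_000, 20_000),
    "$20,001 to $25,000": band_median(20_000, 25_000),
    "$25,001 to $30,000": band_median(25_000, 30_000),
}
print(medians)
```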
Black Belt: Full-time professional who leads Six Sigma projects. Typically has four to five weeks of classroom training in methods, statistical tools and team skills. Sometimes provides coaching and Six Sigma expertise to Green Belts.
Master Black Belt: An expert in Six Sigma methodology and statistical tools who provides strategic Six Sigma guidance within a specific function or business unit. An MBB often has prior experience as a BB. Responsible for coaching, mentoring and/or training BBs, an MBB often helps the Six Sigma Deployment Leader and Champions keep the initiative on track.
Champion: Middle- or senior-level executive who sponsors a specific Six Sigma project or effort, ensuring that resources are available and cross-functional issues are resolved.
Deployment Leader: Senior-level executive responsible for implementing Six Sigma enterprise-wide. Typically reports to C-level executives. Responsible for developing, implementing and maintaining a standardized, company-wide quality system focused on customer satisfaction, defect prevention and continuous improvement.
Quality Professional and Quality Executive: While not universally regarded as Six Sigma roles, QPs and QEs may have Six Sigma responsibilities and qualifications.
Business Professional: Although they may not be in Six Sigma roles currently, the majority of these professionals possess Six Sigma certification and may have project experience.
Rehoboth, Massachusetts (July 20, 2014) – Six Sigma Integration, Inc. (www.sixsigmaintegration.com) announces the release of Lean Six Sigma for Supply Chain Management, Second Edition. Sales of the book have already begun. Orders are being accepted at:
http://www.mhprofessional.com/product.php?isbn=0071793054
Lean Six Sigma for Supply Chain Management, Second Edition is fully revised to cover recent dramatic developments in supply chain improvement methodologies. This strategic guide brings together the Six Sigma and Lean manufacturing tools and techniques required to eliminate supply chain issues and increase profitability. The updated edition offers new coverage of enterprise kaizen events, big data analytics, customer loyalty metrics, security, sustainability and design for excellence.
The structured 10Step Solution Process presented in the book ensures that clear goals are established and tactical objectives are consistently met through the deployment of aligned Lean Six Sigma projects. Written by a Master Black Belt and Lean Six Sigma consultant, this practical resource also provides an inventory model and Excel templates for download at www.mhprofessional.com/LSSSCM2.
Lean Six Sigma for Supply Chain Management, Second Edition, covers:
Hardcover: 400 pages
Publisher: McGraw-Hill Professional; 2nd edition (May 13, 2014)
Language: English
ISBN-10: 0071793054
ISBN-13: 978-0071793056
Complete with roadmaps and checklists, this book will help busy supply chain and Lean Six Sigma professionals discover more efficient ways to manage and analyze business processes and, ultimately, increase overall operational efficiency.
James William Martin is president of consulting firm Six Sigma Integration, Inc. As a Lean Six Sigma consultant and Master Black Belt for 10 years, he has trained and mentored more than 3,000 belts, executives, and deployment champions worldwide in a dozen different industries. He is also the author of Unexpected Consequences: Why the Things We Trust Fail; Lean Six Sigma for the Office; Operational Excellence: Using Lean Six Sigma to Translate Customer Value through Global Supply Chains; and Measuring and Improving Performance: Information Technology Applications in Lean Systems. He has also served as an instructor at the Providence College Graduate School of Business since 1988. His degrees are: M.S. in mechanical engineering from Northeastern University, M.B.A. from Providence College, and B.S. in industrial engineering from the University of Rhode Island.
A simulation study was undertaken to quantify the relationship between guard banding, percent tolerance (also known as the precision-to-tolerance [P/T] ratio) and the probability of misclassification – all in the presence of varying distributions for both part values and gage error. A response surface designed experiment was utilized to generate a balanced set of factor-level combinations. The following four factors were used to summarize the important characteristics of the gage and part distributions:
1. True value capability index, P_{pk}. This factor describes variance of the true value population with respect to specification limits, including the centering of the true value mean, µ, within the specification limits.
Where CPL is based on the difference between the center line and the LSL (lower specification limit) and CPU on the difference between the USL (upper specification limit) and the center line, P_pk is the smaller of the two: P_pk = min(CPL, CPU).
2. The ratio
Where P_{p} is described by equation (9b). This factor describes the “centeredness” of the true value population within the specification limits.
3. The ratio
This factor describes gage variance with respect to true value variance. This is effectively the inverse of percent process [see equation (4) in Part 1] divided by 100 percent.
4. Guard band count, k, where k, defined in equation (11), is the number of gage standard deviations (σ_g) taken within each specification limit to establish guard-banded specifications.
Assuming normal distributions for gage variance and true value populations, each of these four factors can be used to establish probability density functions for gage variability and “true” value population with respect to specification limits, but without having absolute values of gage variance and true value population. (Note: In reality, there is no such thing as a “true” value since GR&R only seeks to understand the precision of the measurements versus some measurement space [either the tolerance or the observed spread of the parts – which also includes the gage variation]. References to a “true” value imply that accuracy was studied but that is not included in this article.)
These probability functions can be used to calculate percent tolerance and probability of misclassification. The four factors can be combined to establish a space of typical gage variance, true value population variance and guard banding, which are then used to map percent tolerance and probability of misclassification over the various combinations of these factors. Once these two gage metrics are mapped over combinations of these factors, relationships between the two metrics can be established over the same space. If only a one-sided specification is available, however, the same probability distribution functions can be established and the same mapping is still possible.
These four factors are not independent from one another, and when combined to map and compare gage metrics they will not form an orthogonal comparison grid, in which grid lines are perpendicular at their intersections. However, comparison between gage metrics over ranges of each factor provides a means to draw general conclusions about the effectiveness of each metric in various circumstances without having absolute measured values. The lack of orthogonality must be taken into account when making this comparison, but it does not preclude obtaining some useful information as a result of the study.
Percent tolerance and probability of misclassification can be modeled over combinations of the four factors using response surface analysis, in which models can be represented using contour plots of the fitted surfaces. The type of response surface design of experiment (DOE) chosen for this study is a central composite design (CCD).^{1} In this type of design, factorial combinations of factors – along with center points and axial points – are used to structure study inputs. The resulting data can be used to fit models involving primary factors, their interactions and second order polynomial terms. The ratio defined in (10) can vary over multiple orders of magnitude; for this simulation, the input was converted to a natural log scale, which allowed a range spanning multiple orders of magnitude to be input on a linear scale. Experimental results can be used to estimate multifactor regression equations, which can then be used to numerically predict responses over the design space.
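A textbook CCD of this kind can be generated with a few lines of Python. This sketch builds the standard symmetric case only; the article's actual design clips some axial points at factor bounds, which is not reproduced here:

```python
import itertools
import numpy as np

def central_composite(centers, halfwidths, alpha=1.1):
    """Generate a central composite design: 2^k factorial corners,
    2k axial (star) points at alpha times the factorial half-width,
    and the center point. Returns one run per row."""
    centers = np.asarray(centers, dtype=float)
    halfwidths = np.asarray(halfwidths, dtype=float)
    k = len(centers)
    corners = [centers + halfwidths * np.array(signs)
               for signs in itertools.product((-1.0, 1.0), repeat=k)]
    axials = []
    for i in range(k):
        for sign in (-1.0, 1.0):
            point = centers.copy()
            point[i] += sign * alpha * halfwidths[i]  # star point on axis i
            axials.append(point)
    return np.vstack(corners + axials + [centers])
```

With four factors this produces 2^4 + 8 + 1 = 25 runs (before any replication of center points); alpha = 1.1 matches the article's choice of axial points at 1.1 times the factorial extent.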
The design space is chosen to represent typical ranges of equations (8), (9), (10) and (11), while avoiding conditions where combinations of (10) and (11) would satisfy (7), thereby resulting in null values for probability of misclassification. The values of the axial points are selected as 1.1 times the extent of the primary factorial points away from the center point for similar reasons. The lower axial point for (11) is set to zero to avoid a negative value. The values of the factorial points, center points and axial points for the inputs to the design are shown in Table 1.
Table 1: DOE Inputs for Response Surface Mapping Percent Tolerance and Probability of Misclassification

Factor  Center  Factorial Lower, Upper  Axial Lower, Upper
P_{pk} ⁄ P_{p}  1.0  0.5, 1.0  0.45
P_{pk}  1.0  0.5, 1.5  0.45, 1.55
ln(σ_{p} ⁄ σ_{g})  1.5  0, 3  −0.15, 3.15
Guard band k  1.0  0, 2  0, 2.1
Given a value for each of the four factors and known specification limits, the values of process mean, gage variance and process variance can be calculated. From here, percent tolerance and probability of misclassification can be estimated. An LSL of zero and USL of 100 were used to numerically simulate probability distribution functions for gage variance and true value population based on combinations of equations (8a), (9a) and the natural log of (10). These probability distribution functions were used to calculate percent tolerance according to equation (5) and to estimate the probability of misclassifying a bad unit as good and a good unit as bad for each combination of the four factors in the CCD DOE. All outputs were found to vary over more than 2 orders of magnitude and, as a result, the outputs were analyzed using a natural log scale to simplify analysis. (Population simulation, probability of misclassification estimation and CCD DOE analysis were done with Minitab v16.)
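The core calculation can be reproduced with SciPy's numerical integration in place of Minitab. A hedged sketch, assuming equation (5) takes the common two-sided form P/T = 6σ_{g} ⁄ (USL − LSL) and that a part is accepted when its observed value falls inside the guard banded limits:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def percent_tolerance(sigma_g, lsl, usl):
    """Common two-sided form of equation (5): P/T = 6*sigma_g / tolerance."""
    return 100.0 * 6.0 * sigma_g / (usl - lsl)

def misclassification(mu, sigma_p, sigma_g, lsl, usl, k=0.0):
    """P(bad observed as good) and P(good observed as bad) by numerical
    integration, assuming normal true values and normal gage error."""
    gl, gu = lsl + k * sigma_g, usl - k * sigma_g    # guard banded limits
    part = stats.norm(mu, sigma_p)                   # true value population

    def p_accept(x):
        # probability the observed value (true x plus gage error) is accepted
        return stats.norm.cdf(gu, x, sigma_g) - stats.norm.cdf(gl, x, sigma_g)

    bad_as_good = (quad(lambda x: part.pdf(x) * p_accept(x), -np.inf, lsl)[0]
                   + quad(lambda x: part.pdf(x) * p_accept(x), usl, np.inf)[0])
    good_as_bad = quad(lambda x: part.pdf(x) * (1.0 - p_accept(x)), lsl, usl)[0]
    return bad_as_good, good_as_bad
```

Evaluating this at k = 0 and again at k = 2 for the same population reproduces the qualitative trade-off the article describes: guard banding lowers the bad-as-good probability while raising the good-as-bad probability.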
The DOE analysis of variance (ANOVA) table provides information regarding significant terms and lack-of-fit for each of the outputs studied. Significant terms are taken as having a p-value less than 0.05.^{1} The predicted R^{2} is chosen to determine lack-of-fit and the usefulness of the model to predict results. Predicted R^{2} captures the percentage of response variation explained by relationships with inputs, using predicted model output versus observed output to quantify lack of fit. The second order polynomial term for (9a) was found to be significant for the probability of good observed as bad and for percent tolerance. The influence of (9a) on percent tolerance is due to non-orthogonality of the input factors. Based on observation of insignificant first order terms and interaction terms for (9a), the DOE input factors were reduced and interaction terms including (9a) were removed. The first order and second order terms associated with (9a) were left in subsequent analysis due to the significance of the second order term in the model for probability of good observed as bad. Goodness of model fit and factor significance for each of the three measurement system analysis metrics using the reduced model terms are shown in Table 2.
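Predicted R^{2} is computed from the PRESS (prediction error sum of squares) statistic. For an ordinary least squares fit, the leave-one-out residuals can be obtained from the hat matrix without refitting the model n times; the following is a generic sketch, not Minitab's exact routine:

```python
import numpy as np

def predicted_r2(X, y):
    """Predicted R^2 = 1 - PRESS / SS_total, where the leave-one-out
    residuals come from the hat matrix: e_i / (1 - h_ii)."""
    X = np.column_stack([np.ones(len(y)), np.asarray(X, dtype=float)])
    y = np.asarray(y, dtype=float)
    hat = X @ np.linalg.pinv(X)            # hat matrix X (X'X)^-1 X'
    resid = y - hat @ y                    # ordinary residuals
    press = np.sum((resid / (1.0 - np.diag(hat))) ** 2)
    ss_total = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_total
```

Because PRESS penalizes points the model predicts poorly when they are left out, predicted R^{2} is always at or below the ordinary R^{2}, which is what makes it useful for judging lack-of-fit.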
Table 2: Goodness of Model Fit and Factor Significance

Term  Bad Observed as Good  Good Observed as Bad  Percentage Tolerance
Predicted R^{2} (%)  97.4  98.44  99.99
p-values:
Constant  0.166  0.001  0
P_{pk}  0.68  0.008  0
P_{pk} ⁄ P_{p}  1  1  1
ln(σ_{p} ⁄ σ_{g})  0.005  0  0
Guard band k  0  0.068  0.024
P_{pk} * P_{pk}  0  0  0
(P_{pk} ⁄ P_{p}) * (P_{pk} ⁄ P_{p})  0.057  0.002  0
ln(σ_{p} ⁄ σ_{g}) * ln(σ_{p} ⁄ σ_{g})  0.67  0  0.01
Guard band k * Guard band k  0.188  0.491  0.01
P_{pk} * ln(σ_{p} ⁄ σ_{g})  0.007  0  1
P_{pk} * Guard band k  0  0  1
ln(σ_{p} ⁄ σ_{g}) * Guard band k  0.028  0.109  1
Response surface contour plots for the P/T ratio and the probability of bad misclassified as good are overlaid in Figure 4 over the studied range of ln(σ_{p} ⁄ σ_{g}) and P_{pk}. Two overlaid contour plots are drawn for guard band k = 0 and 2, respectively. Both plots have a fixed value P_{pk} ⁄ P_{p} = 1.
The bands defined by the adjacent contour lines indicate the sensitivity of each output to the input factors on each axis. The probability of misclassifying bad as good shows the most sensitivity to P_{pk} and is relatively insensitive to ln(σ_{p} ⁄ σ_{g}). The opposite is true for the P/T ratio. This trend holds true for both plots at two different guard band values. This sensitivity analysis establishes that the probability of misclassification is more dependent on the probability that a value is bad or good, as opposed to the probability that the measured value is different from the true value. Curvature is shown in the P/T ratio response, which indicates sensitivity to both P_{pk} and ln(σ_{p} ⁄ σ_{g}); this curvature is due to non-orthogonality of the input factors. According to equation (5), the P/T ratio is not dependent on process standard deviation – and is only dependent on process mean for one-sided specifications. In this model, however, gage standard deviation is established based on a ratio with process standard deviation, and process standard deviation is an input to the factors on each plot axis.
The influence of guard banding on each of the two outputs is established by comparing the two plots in Figure 4. P/T ratio does not change as a function of guard banding, which is expected according to equation (5). The probability of misclassifying bad as good changes such that the probability is reduced for lower values of process capability. Guard banding has more influence on the probability of misclassifying bad as good when the probability is greater than 1 in 1,000,000; for values equal to or less than this level, guard banding has a smaller influence on reducing the probability of misclassification (i.e., at a higher process capability).
The difference in sensitivity of each output over the plot range at both values of guard banding illustrates four conditions for gage precision, as defined by P/T ratio, and probability of misclassification. They are:
1. P/T ratio is within typical acceptance limits and the probability of misclassifying bad as good is relatively small.
2. P/T ratio is larger than typical acceptance limits; however, the probability of misclassifying bad as good is relatively small.
3. P/T ratio is larger than typical acceptance limits and the probability of misclassifying bad as good is relatively large.
4. P/T ratio is within typical acceptance limits and the probability of misclassifying bad as good is relatively large.
For conditions 1 and 3, the P/T ratio and probability of misclassifying bad as good agree in their assessment of gage suitability for decision making. In condition 1, the gage is generally considered suitable. In condition 3, the gage is generally considered ill-suited for decision making. For conditions 2 and 4, the P/T ratio and probability of misclassifying bad as good disagree in their assessment of gage suitability for decision making. In condition 2, the gage is considered imprecise; however, the underlying true value population is sufficiently far away from specification values as to minimize the probability of misclassifying bad as good. This condition may avoid risk of false acceptance by misclassifying nonconforming values as good, but additional cost may reside in the probability of misclassifying good as bad. In condition 4, the gage is considered precise, but the underlying true value population is close enough to specification values such that the probability of misclassifying bad values as good remains high. Here the gage may be precise enough to differentiate values within the specification tolerance, but the magnitude of measurement error is still large enough to warrant significant risk in using the measurement system to sort values, where the sort condition is based on specification limits.
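The decision logic of the four conditions can be made explicit in code. In this sketch the acceptance thresholds – P/T at or below 30 percent and a bad-as-good risk at or below 1 in 10,000 – are illustrative assumptions, not values from the article:

```python
def gage_condition(pt_ratio, p_bad_as_good, pt_limit=0.30, risk_limit=1e-4):
    """Map a gage onto the four P/T vs. misclassification conditions.
    The two acceptance thresholds are hypothetical, for illustration."""
    precise = pt_ratio <= pt_limit
    low_risk = p_bad_as_good <= risk_limit
    if precise and low_risk:
        return 1   # precise gage, low misclassification risk
    if not precise and low_risk:
        return 2   # imprecise gage, but population far from the limits
    if not precise and not low_risk:
        return 3   # imprecise gage, high misclassification risk
    return 4       # precise gage, yet high misclassification risk
```

The interesting cells are conditions 2 and 4, where the two metrics disagree and neither alone is a reliable verdict on the measurement system.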
Guard banding reduces the probability of misclassifying bad as good, thereby increasing the suitability of a measurement system for making effective decisions at lower values of process capability. The probability of misclassifying bad as good is not reduced to zero over the entire range of P_{pk} shown. Even for guard banding at 2σ_{g}, the probability of misclassifying bad as good can remain relatively high at low process capability.
Guard banding has been shown to increase the probability of misclassifying good as bad. This is illustrated in Figure 5, in which response surface contour plots for the probability of good misclassified as bad are overlaid with the same contours shown in Figure 4. Plot ranges of ln(σ_{p} ⁄ σ_{g}) and P_{pk} are the same as in Figure 4. Two overlaid contour plots are drawn for guard band k = 0 and 2, respectively. Both plots have a fixed value P_{pk} ⁄ P_{p} = 1.
As in Figure 4, the bands defined by the adjacent contour lines indicate the sensitivity of each output to the input factors on each axis. The probability of misclassifying good as bad is nearly equally sensitive to P_{pk} and ln(σ_{p} ⁄ σ_{g}); the probability of misclassifying good as bad increases as P_{pk} or ln(σ_{p} ⁄ σ_{g}) decreases. The influence of guard banding on the probability of misclassifying good as bad is seen by comparing the two plots in Figure 5. When no guard banding is applied, the probability of misclassifying good as bad is less than 1 percent over the range of P_{pk} and ln(σ_{p} ⁄ σ_{g}) where P/T and the probability of misclassifying bad as good would be considered generally acceptable. When guard banding is applied, the probability of misclassifying good as bad is found to increase at lower values of P_{pk} and ln(σ_{p} ⁄ σ_{g}). For the extremely low values of P_{pk} and ln(σ_{p} ⁄ σ_{g}) shown in the plot, guard banding is shown to satisfy the condition defined by equation (7); the result indicates that all values would be classified as bad. Comparing the two probabilities of misclassification illustrates that as guard banding is applied, the probability of bad misclassified as good will decrease, but the probability of good misclassified as bad will increase. This trade-off is most prevalent at low values of P_{pk} and ln(σ_{p} ⁄ σ_{g}).
The four conditions previously established can be summarized to include information on the probability of misclassifying good as bad.
  P_{pk} < 1.3  P_{pk} > 1.3
ln(σ_{p} ⁄ σ_{g}) > 2.0  Condition 4  Condition 1
ln(σ_{p} ⁄ σ_{g}) < 2.0  Condition 3  Condition 2
For conditions 1 and 3, the P/T ratio and the probabilities of misclassification agree in their assessment of gage suitability for decision making. Based on this analysis, a gage is suited for decision making in condition 1. For condition 2, the imprecision of the gage does not create a significant risk of false acceptance. A significant risk of false rejection (i.e., excess scrap), however, may exist, and the gage’s suitability for decision making is conditional on the circumstances surrounding the measurement. For condition 4, the gage’s precision may not effectively eliminate the risk of misclassifying good values as bad or bad values as good; the gage may not be considered suitable for applications of sorting values within a population where the sorting condition is based on specification limits.
The aforementioned DOE simulation approach and results are based on normal distribution assumptions for the underlying true value populations and gage error. Although gage error typically follows a normal distribution, underlying true value populations may not. Typical GR&R results should be relatively insensitive to non-normal data sets, whereas the probability of misclassification will be sensitive to them.
Lognormal true value populations and any true value population with significant skew will not have symmetric probability density functions about the distribution arithmetic mean. As a result, these DOE simulations cannot be generalized based on the exact same factors; results need to be based on absolute distribution position relative to specification values. Initial attempts to repeat this trial using nonnormal distributions suggest general agreement with these results, except that the results are dependent on the absolute position of the population within the specification limits.
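For non-normal populations, the misclassification probabilities are easily estimated by Monte Carlo simulation instead of closed-form integration. A sketch with an arbitrary right-skewed lognormal population – the parameters, seed and sample size are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(7)

def mc_misclassification(draw_true, sigma_g, lsl, usl, k=0.0, n=500_000):
    """Monte Carlo misclassification rates for an arbitrary true value
    distribution; gage error is still assumed normal with sd sigma_g."""
    x = draw_true(n)                                  # true part values
    y = x + rng.normal(0.0, sigma_g, n)               # observed values
    good = (x >= lsl) & (x <= usl)
    accepted = (y >= lsl + k * sigma_g) & (y <= usl - k * sigma_g)
    bad_as_good = float(np.mean(~good & accepted))
    good_as_bad = float(np.mean(good & ~accepted))
    return bad_as_good, good_as_bad

# Skewed lognormal true value population with median 50 (illustrative only)
draw = lambda n: rng.lognormal(np.log(50.0), 0.5, n)
```

Comparing k = 0 against k = 2 for this skewed population shows the same direction of trade-off as the normal case, although the magnitudes depend on the absolute position of the population relative to the limits.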
Comparison of GR&R metrics, namely P/T ratio, with the probabilities of misclassification reveals that precise measurement systems may still generate results with significant probabilities of misclassification. A DOE study using numeric simulation revealed that the most significant risk of misclassifying measured values exists when a gage is considered precise with respect to specification tolerance, but the population of values being measured resides near or outside of specification tolerance. Where true value population is defined by a P_{pk} capability metric less than 0.75, a gage may be classified as suitable with a P/T ratio less than 10 percent, while probabilities of misclassifying good values as bad or bad values as good may be greater than 1 in 10,000.
Guard banding influences probabilities of misclassification, and can be used to reduce the probability of misclassifying bad values as good at the cost of increasing the probability of misclassifying good values as bad. Guard banding has the most influence on probability of misclassification for imprecise measurement systems and when a population of values being measured has low capability index.^{2}
1. Montgomery, D. C., and G. C. Runger. “Gage Capability and Designed Experiments Part II: Experimental Design Methods and Variance Component Estimation.” Quality Engineering 6 (1993): 289-305.
2. Taylor, Wayne. “Generic SOP – Statistical Methods for Measurement System Variation, Appendix B Gage R&R Study.” Wayne Taylor Enterprises, Inc., n.d.