USL, upper specification limit; LSL, lower specification limit.
*Estimated sigma = average range / d_{2}
It is commonly understood that C_{pk} climbs as a process improves – the higher the C_{pk}, the better the product or process. Using the formula above, it is easy to calculate C_{pk} once the mean, standard deviation, and upper and lower specification limits are known.
But what if you have only one specification or tolerance – for example, an upper, but no lower, tolerance? Is C_{pk} advisable under these circumstances?
When faced with a missing specification, consider one of the following three options:
1. Do not calculate C_{pk} at all.
2. Enter an arbitrary value for the missing specification limit.
3. Treat the specification as truly missing and use only the limit that exists.
Examining a specific situation may clarify the outcome of each of these possibilities. A customer of a plastic pellet manufacturer has specified that the pellets should have a low amount of moisture content. The lower the moisture content, the better, but no more than 0.5 units is allowed; too much moisture will create manufacturing problems for the customer. The process is in statistical control.
This customer would undoubtedly not be satisfied with option 1 as C_{pk} has been specifically requested. With option 2, it could be argued that the LSL is 0, since moisture levels below zero are impossible. With a USL at 0.5 and LSL at 0, the C_{pk} calculation would be as follows.
Assume Xbar = 0.0025 and the estimated sigma is 0.15. Then Z_{upper} = (0.5 − 0.0025) / 0.15 = 3.316 and Z_{lower} = (0.0025 − 0) / 0.15 = 0.017, so Z_{min} = Z_{lower} and C_{pk} = Z_{min} / 3 ≈ 0.005.
The customer is not likely to be satisfied with a C_{pk} of 0.005, and that number does not represent the process capability accurately.
Option 3 treats the lower specification as truly missing. Without an LSL, Z_{lower} is nonexistent; Z_{min} becomes Z_{upper} and C_{pk} becomes Z_{upper} / 3.
Z_{upper} = 3.316 (from above)
C_{pk} = 3.316 / 3 = 1.10
A C_{pk} of 1.10 is more realistic than one of 0.005 for the data given in this example, and is more representative of the process itself.
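Under the convention that C_{pk} = Z_{min} / 3, with Z_{upper} = (USL − mean)/sigma and Z_{lower} = (mean − LSL)/sigma, the one-sided calculation can be sketched in a few lines. The helper name `cpk_one_sided` is ours for illustration, not a standard library function:

```python
# Sketch: compute C_pk using only the specification limits that actually exist.
# cpk_one_sided is a hypothetical helper name, not part of any standard library.

def cpk_one_sided(mean, sigma, usl=None, lsl=None):
    """C_pk = Z_min / 3, using whichever of USL/LSL are specified."""
    z = []
    if usl is not None:
        z.append((usl - mean) / sigma)  # Z_upper
    if lsl is not None:
        z.append((mean - lsl) / sigma)  # Z_lower
    if not z:
        raise ValueError("at least one specification limit is required")
    return min(z) / 3

# The article's example: Xbar = 0.0025, estimated sigma = 0.15, USL = 0.5
print(cpk_one_sided(0.0025, 0.15, usl=0.5))         # option 3: ~1.1
print(cpk_one_sided(0.0025, 0.15, usl=0.5, lsl=0))  # option 2: ~0.006, misleading
```

Passing `None` for a missing limit mirrors the article's advice: the missing specification stays missing rather than being filled with an arbitrary value.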
As the example demonstrates, setting the lower specification to 0 results in a lower, misleading C_{pk}. In fact, as the process improves (here, as moisture content decreases), C_{pk} should increase. If 0 were used as the LSL, however, C_{pk} would decrease. This is one clue that entering an arbitrary specification is not advised.
So what should be done when only one specification exists? Use the specification you have, and treat the other as missing – leave it out of the calculation entirely. In these cases, use only Z_{upper} or Z_{lower}.
C_{pk} can and should be calculated when only one specification exists, provided only the remaining valid specification is used. As the example demonstrates, the missing specification should remain missing and not be artificially inserted into the calculation.
While one of the statistical methods widely used in the Analyze phase is regression analysis, there are situations that warrant the use of nonparametric methods. Violation of the basic assumptions of normally and independently distributed residuals, and the presence of nonlinear relationships, are the most common situations where a nonparametric method, such as a classification and regression tree (CART), is more appropriate. In addition, CART can be appropriate in service industries such as banking and healthcare, where many potential causes of variation and defects are categorical in nature (e.g., geographical locations, products, channels, partners). The problem with using regression or generalized linear models (GLM) in such cases is that the many dummy variables required make it difficult to interpret the results. CART is a useful nonparametric technique that can be used to explain a continuous or categorical dependent variable in terms of multiple independent variables, which themselves can be continuous or categorical. CART employs a partitioning approach generally known as “divide and conquer.”
Assume there is a set of credit card transactions labeled as fraudulent or authentic. There are two attributes of each transaction: amount (of transaction) and age of customer. Figure 1 displays an example map of fraudulent and authentic transactions.
The CART algorithm works to find the independent variable that creates the most homogeneous groups when splitting the data. For a classification problem, where the response variable is categorical, this is decided by calculating the information gained based upon the entropy resulting from the split. For a numeric response, homogeneity is measured by statistics such as standard deviation or variance. (For more information, refer to Machine Learning with R by Brett Lantz.)
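The entropy-based scoring of a candidate split can be sketched in a few lines. This is an illustrative toy, not the exact internals of any CART implementation; the labels are hypothetical fraudulent ("F") and authentic ("A") transactions:

```python
# Illustrative sketch of entropy and information gain for a candidate split.
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Reduction in entropy achieved by splitting parent into left/right."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

# Toy labels: "F" = fraudulent, "A" = authentic
parent = ["F", "F", "F", "A", "A", "A"]
left, right = ["F", "F", "F"], ["A", "A", "A"]  # a perfect split
print(information_gain(parent, left, right))     # 1.0 bit for a 50/50 parent
```

The algorithm evaluates many candidate splits this way and keeps the one with the highest gain, i.e., the one producing the most homogeneous child nodes.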
Two important parameters of the CART technique are the minimum split criterion and the complexity parameter (C_{p}). The minimum split criterion is the minimum number of records that must be present in a node before a split can be attempted. This has to be specified at the outset. C_{p} is a complexity parameter that avoids splitting those nodes that are obviously not worthwhile. Another way to consider these parameters is that the C_{p} value is determined after “growing the tree” and the optimal value is used to “prune the tree.”
In this example, Figure 2 shows that the first rule formed is x2 > 35 → fraudulent transaction. Similarly, other rules are formed as shown in Figures 3 and 4.
In this way, the CART algorithm keeps dividing the data set until each “leaf” node is left with the minimum number of records as specified by minimum split criterion. This results in a treelike structure as shown in Figure 5. The C_{p} value is then plotted against various levels of the tree and the optimum value is used to prune the tree.
The following example contains a hypothetical dataset of 600 dispatch transactions of a bank.
The dependent variable is the attribute “defective,” which is a categorical variable with two classes (yes and no). Each transaction is labeled either “yes” or “no” based on whether there is any printing error in the deliverable. The independent variables are “amount,” “channel,” “service type,” “customer category” and “department involved.” The first step in applying any analytical method is to explore the data using descriptive statistics. Assume that in exploring the data all of the independent variables seem to have a significant relationship with the dependent variable. In order to carry out the CART analysis, the dataset is randomly split into two sets: the training and testing sets. Because nonparametric studies are not based upon theoretical probability distributions, it is widely accepted practice to build a model on one set of data and test it on another. This helps in ascertaining the accuracy of the model on unknown future records.
The CART model is used to find out the relationship among defective transactions and “amount,” “channel,” “service type,” “customer category” and “department involved.” After building the model, the C_{p} value is checked across the levels of tree to find out the optimum level at which the relative error is minimum. The optimum C_{p} value is then used to prune the tree.
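The workflow above can be sketched with scikit-learn, whose `DecisionTreeClassifier` implements a CART-style tree. Here `min_samples_split` plays the role of the minimum split criterion and `ccp_alpha` (minimal cost-complexity pruning) plays the role of the C_{p}-based pruning described; the data is synthetic stand-in data, not the bank's dispatch dataset:

```python
# Sketch of CART fit + cost-complexity pruning with scikit-learn.
# Synthetic data stands in for the 600 dispatch transactions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Grow the tree," then inspect the candidate pruning levels.
full_tree = DecisionTreeClassifier(min_samples_split=20, random_state=0)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

# Pick the alpha that maximizes accuracy on held-out data ("prune the tree").
best = max(
    path.ccp_alphas,
    key=lambda a: DecisionTreeClassifier(
        min_samples_split=20, ccp_alpha=a, random_state=0
    ).fit(X_train, y_train).score(X_test, y_test),
)
pruned = DecisionTreeClassifier(
    min_samples_split=20, ccp_alpha=best, random_state=0
).fit(X_train, y_train)
print("test accuracy:", round(pruned.score(X_test, y_test), 3))
```

Validating the pruned tree on the test set, as in the last line, is the step that guards against reading too much into a tree grown on a single sample.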
Post-pruning, the “final” tree can be created as shown in Figure 8. The model can also be validated against the test data to ascertain its accuracy.
As with other nonparametric techniques, CART does not require any assumptions for underlying distributions. It is easy to use and can quickly provide valuable insights into massive amounts of data. These insights can be further used to drill down to a particular cause and find effective, quick solutions. The solution is easily interpretable, intuitive and can be verified with existing data; it is a good way to present solutions to management.
Like any technique, CART has limitations to take into account before doing the analysis and making any decisions. The biggest limitation is the fact that it is a nonparametric technique; it is not recommended to make generalizations about the underlying phenomenon based upon the results observed. Although the rules obtained through the analysis can be tested on new data, it must be remembered that the model is built from the sample without making any inference about the underlying probability distribution. Another limitation of CART is that the tree becomes quite complex after seven or eight layers, and interpreting the results in that situation is not intuitive.
Conclusion
CART can be used efficiently to assess massive datasets and can provide quick solutions in the Analyze phase of DMAIC. It can be one of the quickest and most effective tools in the bag of any process improvement practitioner. CART should not, however, replace the corresponding parametric techniques, which are more powerful in explaining a phenomenon because they model the underlying distribution.
Contact
Barbara A. Cleary, PhD
800-777-3020
Dayton, Ohio, April 14, 2017—An update to GAGEpack 12 from PQ Systems demonstrates the commitment of developers to respond to customer needs with enhanced ease of use and improved user interface.
In this updated version, visual improvements streamline the window view by collapsing the navigation panel and moving it to the ribbon, making filter information more readily available, and making less-used commands available through buttons.
Additional improvements to the user experience include:
Customers with maintenance agreements will receive the updated solution automatically, allowing for a smooth transition to the use of new features.
GAGEpack is a powerful gage calibration solution that maintains complete histories of measurement devices, instruments, and gages. To guarantee timely calibration, the software provides a variety of tools, such as:
o Calibration schedules and reports
o Alerts about failed and past due calibrations
o Gage location and status tracking
o Gage repair records
o Audit trail for traceability
o A Task tab with a “To do” list
o Gage event alert system
About PQ Systems: PQ Systems (www.pqsystems.com) is a privately-held company headquartered in Dayton, OH, with representation in Europe, Australia, Central and South America, Asia, and Africa, and customers in more than 60 countries. For more than 30 years, the company has been helping businesses drive strategic quality outcomes by providing intuitive solutions that help manufacturers optimize process performance, improve product quality, and mitigate supply chain risk. The company’s scalable solutions include SQCpack® for data analysis and statistical process control and GAGEpack® for measurement intelligence. PQ Systems’ world-class consulting, training, and support services ensure that clients receive the maximum return on their software implementations.
####
There are two types of metrics to consider when selecting KPIs for a project: outcome metrics and process metrics.
Outcome metrics provide insight into the output, or end result, of a process. Outcome metrics typically have an associated data lag because time passes before the outcome of a process is known. The primary outcome metric for a project is typically identified by project teams early in their project work. For most projects, this metric can be found by answering the question, “What are you trying to accomplish?”
Process metrics provide feedback on the performance of elements of the process as it happens. It is common for process metrics to focus on the identified drivers of process performance. Process metrics can provide a preview of process performance for project teams and allow them to work proactively to address performance concerns.
Consider an example of KPIs for a healthcare-focused improvement project:
In the example above, the project has one primary outcome metric and four process metrics that compose the KPIs the team is monitoring. Well-crafted improvement project KPIs will include both outcome metrics and process metrics. Having a mix of both provides the balance of information the team needs to successfully monitor performance and progress toward goals.
Teams should develop no more than three to six KPIs for a project. Moving beyond six metrics can dilute the effects of the data and make it more challenging to effectively communicate the progress of a project.
Common questions coaches can use with teams to generate conversation about potential KPIs include:
Coaches should keep the three Ms of crafting KPIs in mind when working with teams.
Remember that successful KPIs:
Crafting KPIs is an important step to guide teams through a continuous improvement process. A coach needs to keep the team focused on what success looks like and how best to measure it.
Depending on the environment, the organization’s maturity, and the people and processes that must be dealt with, LSS practitioners may be in situations that prevent them from following a textbook project. These situations may include:
If one or more of the above elements is true, running a project can be a challenge. Not only can everything one has learned be viewed as wasted potential and time, but a project may also be seen as too difficult to attempt. A simple way to turn these challenges into an opportunity is to extract the maximum from the existing situation instead of fighting against it. A couple of examples demonstrate that using individual tools when they fit the purpose can be as rewarding as applying the whole framework – by the book.
Every LSS training or guideline instructs practitioners to start a project by analyzing the voice of the customer (VOC). But what if there is no project and the practitioner does not interact with the end-product customer? There is a trick to adjust the VOC tool to help improve the organization.
Treat the SMEs as the customers. Let’s take the simplest scenario where one LSS expert is assigned to a team of SMEs attempting to improve their own processes. The objective here is threefold:
When it comes to implementing these principles, as with any customer feedback, the key is to establish a structured method for gathering, storing and reviewing the improvement ideas. The following are some tips that can assist in building a simple mechanism to manage a team’s improvement ideas:
With numerous “customers,” this process is more complicated; however, given team discipline and collaboration, it can turn into a success. The end goal should be for the team to become self-sufficient in improving its processes when the LSS expert is no longer available.
VOC is not the only instrument that teams can use by themselves. Other examples of useful tools that every team can use in everyday work include project management documents that bring structure to every initiative.
Even if LSS is not used in everyday operations, a smart expert can still smuggle a few useful tools into the workplace. This is because every organization runs projects; all projects typically bring change and opportunities for improvement. As these tools are simple and universal, no matter what methodology an organization uses, LSS best practices around project management documentation can often be the first big win. This can apply to any initiative, starting with a local team project and finishing with a global organizational change. Here are some examples of tools that each person running a project should befriend:
What if the LSS expert is assigned to a team that does not run any projects? One might argue that the space for improving its operations is limited. There is a tool, though, that can be applied in any circumstances and implemented by the team independently – 5S (sort, set in order, shine, standardize, sustain). It only sounds like one tool, but it is by far one of the most helpful. Apart from the visible improvements 5S brings in each of the five phases of DMAIC (Define, Measure, Analyze, Improve, Control), it offers plenty of opportunities to embed the continuous improvement mindset quickly and effectively. (Again, if it is not possible to use all the elements at once, fit-for-purpose is the most sensible approach to follow.)
In operations like human resources, finance or outsourcing, some 5S techniques can be applied as successfully as in manufacturing. Good analogies for a service environment relate to virtual workplaces. Some examples include setting up document repositories and shared locations, standardizing service inputs or outputs, and keeping the PC workplace tidy.
The previous examples demonstrate how to improve an existing process with little effort. But what if the process is not there yet? In such instances, the LSS expert might be asked to design and implement an activity that was not previously performed.
When establishing a new function, going through reorganization or simply starting a new activity, a couple of LSS tools can be utilized to help define and document the change taking place.
When the reality is different from what was taught during training, the choices are to give up or to adjust one’s approach. A fit-for-purpose approach that simplifies the tools makes them easier to remember and, therefore, encourages the staff to use them more often. Many small improvements have a big chance of translating into a continuous improvement culture for the whole organization.
Press Contact
Diane Tilley
(888) 744-6295
Kitchener, Ontario, February 27, 2017—SigmaXL Inc., a leading provider of user-friendly Excel Add-ins for statistical and graphical analysis, announces the release of SigmaXL Version 8.
“SigmaXL was designed from the ground up to be a cost-effective, powerful, yet easy-to-use tool that enables users to measure, analyze, improve and control their service, transactional, and manufacturing processes. As an add-in to the already familiar Microsoft Excel, SigmaXL is ideal for Lean Six Sigma training or use in a college statistics course. Our slogan for Version 8 is ‘Multiple Comparisons Made Easy,’” said John Noguera, CTO, SigmaXL.
New features in Version 8 include:
Dr. Peter Wludyka, coauthor of The Analysis of Means: A Graphical Method for Comparing Means, Rates, and Proportions, said: “I am happy to endorse the ANOM charts introduced in SigmaXL Version 8. They are easy to use and accurately handle balanced and unbalanced data. We collaborated to extend Multiway Slicing to Binomial and Poisson, and these are included in the Two-Way charts, where SigmaXL automatically recommends Slice Charts when the interaction is significant.”
A free 30-day trial version is available for download from the SigmaXL website at www.SigmaXL.com.
About SigmaXL Inc.
SigmaXL is a leading provider of user-friendly Excel Add-ins for Lean Six Sigma tools and Monte Carlo Simulation. SigmaXL customers include market leaders like Agilent, Diebold, FedEx, Microsoft, Motorola, and Shell. SigmaXL software is also used by numerous colleges, universities and government agencies.
For more information, visit http://www.SigmaXL.com or call 1888SigmaXL (8887446295).
The business is going through a cultural transformation in all of its plants. It is implementing a corporate strategy to support common continuous improvement thinking and language across the enterprise – laying the groundwork required for a sustainable continuous improvement culture. The business is using four phases in its continuous improvement rollout, as shown in Figure 1 below.
I. Foundation and the organizational alignment
II. Expansion and discipline
III. Integration and reinforcement
IV. Sustaining momentum
The Kansas City plant, my plant, has completed Phase II and is working its way through Phase III.
The company sent all employees through a simulated work environment (SWE) where they assembled and disassembled wood cars on a real line using real tools and bolts. (This concept was taken from Caterpillar, which went through the same cultural transformation years ago.) This teaches everyone how to use some Lean tools and shows everyone how the team leads will be used in the new environment. During a two-day training, employees eliminated waste between the different runs, and watched their quality and delivery rates improve after each run. After SWE training, the team leads train the employees on what Lean tools are to be used and how to use them. The Lean tools being taught are: 5S, cyclical and non-cyclical standard work, total productive maintenance, quick changeover, inventory management, value stream mapping, error proofing, process problem solving, and Kaizen. The company provides an overview of the same tools to all employees so that they understand what team leads are trying to accomplish.
The plant located in Kansas City committed to more than 30,000 hours of training since starting its LSS journey.
The continuous improvement lead at the Kansas City plant was asked how important Lean Six Sigma is for implementing process improvements in manufacturing. He replied that, as a publicly traded company, its employees work for its shareholders. LSS fits into the company’s strategic pillars: growth, leadership development, continuous improvement and sustainability. The purpose of the LSS program is to develop a continuous improvement culture and mindset through people, processes and systems that enable best-in-class productivity, quality and time to market – all helping to create better value for the customer. The following examples are a look at some of the ways in which my company is implementing process improvements.
Example 1: Continuous Improvement Waste Elimination
At my company, a tactical manufacturing engineer’s job is to work with the team leads and assist them with problem solving and implementing improvements. The engineers teach the team leads the tools they should use and what data to collect to make sure improvements are made. One project was the team lead’s Yellow Belt project – the redesign of the fender support cell. For his project he had to complete an A3 (a Lean tool for problem solving), which included a problem statement, future state conditions, current conditions and an implementation plan.
The team’s problem statement was that the fender support subassembly had 27.36 seconds of walk time in each cycle, which added up to 9.12 hours of walk time a week (474.24 hours annually), costing the company approximately $10,883 annually in non-value-added tasks. The team’s objective was to reduce walk time by 66 percent (18 seconds a cycle), freeing up more time for value-added tasks without negatively impacting quality. The team determined what area(s) to focus on by creating a Pareto chart of the steps taken per cycle and looking at why the steps were necessary (see Figure 2). The team determined that the locations of parts and materials were the main reasons for the high number of steps.
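A Pareto analysis like the team's amounts to sorting causes by frequency and accumulating percentages until the "vital few" emerge. The categories and counts below are hypothetical, not the team's actual step data:

```python
# Illustrative Pareto calculation: sort causes by frequency, then accumulate
# percentages. Categories and counts here are hypothetical.
steps = {"fetch parts": 42, "fetch materials": 31, "tool change": 9,
         "inspection": 6, "other": 4}

total = sum(steps.values())
cumulative = 0
for cause, count in sorted(steps.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count
    print(f"{cause:16s} {count:3d}  {100 * cumulative / total:5.1f}%")
```

In this toy data the top two causes account for roughly 80 percent of the steps, which is the pattern that told the team to focus on the locations of parts and materials.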
As a team they captured the current state layout and performed a spaghetti diagram to show the process flow of the subassembly area. Then they cut out all the material and equipment in the subarea and played around with a layout for a new placement location for the future state. Improvements that were generated from this process included a smaller work table with a material rack built on the back of it, a new layout of the material rowpacks and a buffer table reduced from 48 pieces to 1 piece. Figure 3 presents the layout of the work station – before and after.
As a result of this new layout, the team reduced the walk time. The buffer reduction resulted in an increase in quality – from an average of 18 defects per day down to nine. The work envelope was smaller and the Yamazumi board (another Lean tool) reflected an 18-second reduction in walk time. The new layout opened up 2,000 square inches of free space on the work floor. This was a good process improvement for the team and the line because it showed that substantial improvements can be made with just a little time and teamwork.
Example 2: O_{2} Sensors Causing High Scrap Costs
The first Green Belt project that I mentored was focused on a quality issue on some O_{2} sensors causing high scrap costs. From January 2, 2014, to March 31, 2014, the nonconforming material (NCM) for O_{2} sensors and exhaust pipes resulting from O_{2} sensors being stripped or cross threaded was 17,394 parts per million (PPM), which cost the company $29,552 annually. The improvements reduced the NCM for O_{2} sensors and exhaust pipes from 17,394 PPM to 2,253 PPM, an 87 percent improvement in performance, saving $26,689 annually.
Following DMAIC (Define, Measure, Analyze, Improve, Control) helped the project stay on track. The root cause of the problem was that the pipe thread quality specification was not being met, which caused cross-threading during assembly. There was also better capability if the operators hand-started the O_{2} sensor before applying the final torque. The Green Belt candidate worked with the supplier to meet the print specification on the threads, which yielded the 87 percent improvement. Figure 4 presents a control chart showing the before-and-after data of O_{2} sensors stripped or cross-threaded.
Example 3: Addressing Ergonomics
Six Sigma can also be followed when trying to reduce ergonomic scores. My company has an ergonomic score on every job performed. From January 24, 2013, to December 12, 2013, the handlebar subassembly stations had ergonomic job measurement system (EJMS) scores with room for improvement; the stations cost the company $131,492 annually. An unused subassembly carousel was repurposed to replace the work tables used to build the handlebars. This eliminated two major handlebar lifts as well as additional handling of the handlebars. EJMS scores were reduced an average of 27.5 percent, resulting in a cost avoidance of $108,393. Additionally, operator efficiency improved to support a volume increase of 30.4 percent, resulting in a $47,170 cost avoidance for 2014.
Ergonomics is a big part of assembly; any time improvements can be made to reduce ergonomic scores, leadership is on board. These projects can be difficult to implement because equipment can be expensive; repurposing equipment reduces those costs. This change was a big improvement for this line and helped with the efficiency of the operator. Since this was implemented, we reduced the head count from three operators to two. This was due to the lower volume and the higher operator efficiency. It never would have happened if the conveyor had not been implemented (Figure 5).
Example 4: Improving the Error-Proofing System
Another project addressed by a Green Belt candidate was improving the error-proofing system that the company uses on the assembly line. The issue was addressed from January 2, 2014, to January 24, 2014; during that period the average number of assembly verification and information system (AVIS) station bypasses was 166 per day on the lines, which cost the company $77,522 annually. The improvement actions taken were to change and verify the configuration settings for the adjuster-nut AVIS station. The results reduced the number of AVIS bypasses from 43 percent defective (166/day) to 3.7 percent defective (22/day), saving $77,522 annually.
A Pareto chart of all the AVIS stations helped the team focus on the one problem (Figure 6). After following the DMAIC methodology, the Green Belt candidate determined that the station configuration was incorrect and that the recipe program was incorrect. The candidate fixed these two issues and the problem was eliminated. The remaining challenge with this project is the sustainability of AVIS knowledge in the manufacturing engineering group: only two engineers know how to program these AVIS stations, and it is time consuming to train others on the process.
As asked in The Toyota Way by Jeffrey Liker:
“What do we know about changing a culture?
1. Start from the top – this may require an executive leadership shakeup.
2. Involve from the bottom up.
3. Use middle managers as change agents.
4. It takes time to develop people who really understand and live the philosophy.
5. On a scale of difficulty, it is ‘extremely’ difficult.”
This is what I see at my current company. Enacting a culture of change and improvement takes time, some leadership changes have occurred, the company lacks middle managers acting as change agents and change is extremely difficult.
Lean coupled with Six Sigma tools drives decision making with data and metrics and provides a mechanism to quantify the potential for variation, defects and risk – as well as value-added activity and resource optimization – before implementing actual changes.
Leadership must be involved in order to understand and communicate the importance of LSS and its deployment. Leadership must support a transformation and lead employees in the change. If projects are completed and the results are shared with the “nonbelievers,” an LSS implementation will be successful.
Middle management is focused on making improvements and ensuring bottomline savings, which can lead to a need for lower staffing levels. This is a problem at my company not only with supervisors, but also with manufacturing engineers. They are doing the same amount of, or more, work as before with fewer middlemanagement employees. Middle managers can be overstressed and overworked, leading to increased turnover.
LSS is the future of the company, and it will be a long journey to become fully embedded in the continuous improvement culture; however, LSS process improvements will help the company keep costs low, quality high and customer satisfaction high.
Classical statistical process control (SPC) methods, such as individuals and moving range charts and Xbar and R charts, were developed in the era of mass production of identical parts. Production runs lasted for weeks, months and even years, and many SPC rules of thumb were created for this environment (as noted in The Six Sigma Handbook by Thomas Pyzdek). This may not have been a problem in low-mix, high-volume production, but it is impractical or impossible in today’s high-mix, low-volume production, where an entire production run can consist of fewer parts than are required to start a standard control chart. Standard SPC methods can, however, be modified slightly to work with small runs.
As a rule of thumb, if at least 10 different values occur and repeat values make up no more than 20 percent of the data set, the data can be considered variable; otherwise the data is considered discrete and attribute control charts should be used. There are several approaches for short runs using variable data, but the Z-MR chart is preferred because all the subgroups are used; other methods exclude subgroups. The following explains what the Z-MR chart is and how practitioners can use it.
Statisticians and engineers often use normalizing transformations; sigma level and process capability are two common applications. Sigma level is the same thing as Z-value – this normalization is simply the number of standard deviations between a value of interest and the mean of the data. The Z-value can be used to create control charts that are independent of the units of measure, so several different characteristics can be plotted on the same control chart as long as they are produced by a similar process. Because Z charts are independent of the units of measure, they can be thought of as true process control charts. A Z-MR chart can be used with short-run processes when there is not enough data in each run to calculate proper control limits. Z-MR charts standardize the measurement data by subtracting the mean to center the data, then dividing by the standard deviation.
Standardizing allows a practitioner to evaluate data from different runs by interpreting a single control chart. The Z chart option is supported by Minitab (and other statistical software products). The standardized data comes from a population with mean = 0 and standard deviation = 1, so a single plot can be used for the standardized data from different parts or products. The resulting control chart has a center line at 0, an upper limit at +3 and a lower limit at −3.
Example of a Z-MR Chart
A specialty manufacturer of pick-and-place heads for small parts has a new process for making a vacuum orifice. This process is used on eight parts with different-sized orifices ranging from 10 microns to 30 microns in diameter. These parts are hard to measure and are run in small batches. There are thus few samples to study, but it still must be understood whether the process is stable and in control. (Note that the measurement system has been validated.)
The two parts of Table 1 show the first set of data.
Table 1: Sample Data Set (Part 1)  
Part Numbers  Measurement 
1  10.34 
1  9.23 
1  10.54 
1  9.84 
1  10.30 
2  17.56 
2  19.26 
2  22.72 
2  18.45 
2  21.42 
3  25.08 
3  25.02 
3  24.46 
3  24.80 
3  24.39 
4  20.01 
4  19.93 
4  19.96 
4  19.97 
4  19.89 
5  10.58 
5  9.12 
5  10.67 
5  10.38 
5  10.39 
6  29.37 
6  29.43 
6  30.16 
6  31.56 
6  30.23 
7  29.52 
7  30.56 
7  26.59 
7  27.57 
7  29.66 
8  9.57 
8  9.90 
8  10.20 
8  13.50 
8  12.67 
Table 1: Sample Data Set (Part 2)  
Part Number  Mean  Standard Deviation  Range 
1  10.049  0.523  1.303 
2  19.88  2.136  5.16 
3  24.750  0.315  0.686 
4  19.953  0.0438  0.116 
5  10.229  0.632  1.552 
6  30.149  0.883  2.185 
7  28.782  1.639  3.969 
8  11.168  1.788  3.932 
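The summary statistics in the second part of Table 1 can be recomputed directly from the raw measurements; a quick check for parts 1 and 8 (Python, standard library only; small last-digit differences from the published table come from rounding in the printed data):

```python
from statistics import mean, stdev

part_1 = [10.34, 9.23, 10.54, 9.84, 10.30]
part_8 = [9.57, 9.90, 10.20, 13.50, 12.67]

def summarize(values):
    # Mean, sample standard deviation, and range for one part's run
    return (round(mean(values), 3),
            round(stdev(values), 3),
            round(max(values) - min(values), 3))

print(summarize(part_1))  # (10.05, 0.525, 1.31)
print(summarize(part_8))  # (11.168, 1.788, 3.93)
```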
If this data is put in an individuals and moving range (I-MR) chart, the result has little meaning, as there is not enough data to calculate statistically correct control limits. A Z-chart (Figure 1) can overcome this limitation.
To build the Z-chart using Minitab: choose Stat > Control Charts > Variables Charts for Individuals > Z-MR Chart. In Variables, select Measurement; for Part indicator, select Part Numbers.
Minitab provides four methods for estimating σ, the process standard deviation. Choose an estimation method based on the properties of the particular process or product at hand, or enter a historical value. The data plotted in the Z-chart is Z_{i} = (x_{i} – μ) / σ, where μ and σ are the mean and standard deviation estimated for that part or run. Assumptions must be made about the process variation, and this should not be taken lightly: the estimate of the standard deviation – and therefore the results – will differ depending on the assumptions made.
Use Table 2 to help select a method of estimation.
Table 2: How to Select a Method of Estimation  
Method Type  Use When  Does This 
Constant (pool all data)  All the output from the process has the same variance – regardless of the size of the measurement  Pools all the data across runs and parts to obtain a common estimate of s 
Relative to size (pool all data, use log [data])  The variance increases in a fairly constant manner as the size of the measurement increases  Takes the log of the data, then pools the transformed data across runs and parts to obtain a common estimate of s 
By parts (pool all runs of same part/batch)  All runs of a particular part or product have the same variance  Combines all runs of the same part or product to estimate s 
By runs (no pooling) * default option  It cannot be assumed that all runs of a particular part or product have the same variance  Estimates s for each run independently 
Under Z-MR Options, select Estimate and pick "by runs" (the default), as equal variance cannot be assumed, as shown in Figure 3.
Click OK and OK again. The resulting control chart is shown in Figure 4.
The process is stable and in control.
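Conceptually, the "by runs" option standardizes each run with its own mean and standard deviation before plotting. The sketch below shows the underlying arithmetic only; it is not Minitab's implementation, and how moving ranges are handled across run boundaries may differ in the software:

```python
from statistics import mean, stdev

def z_mr(runs):
    """Standardize each run by its own mean and sigma ("by runs" pooling),
    then compute moving ranges of consecutive standardized values."""
    z = []
    for run in runs:
        m, s = mean(run), stdev(run)
        z.extend((x - m) / s for x in run)
    mr = [abs(b - a) for a, b in zip(z, z[1:])]
    return z, mr

# Parts 1 and 2 from Table 1
runs = [[10.34, 9.23, 10.54, 9.84, 10.30],
        [17.56, 19.26, 22.72, 18.45, 21.42]]
z, mr = z_mr(runs)
print(max(abs(v) for v in z) < 3)  # True -> all points inside the ±3 limits
```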
There are two issues with plotting attribute data from short-run processes: each run provides too little data to calculate meaningful control limits, and varying sample sizes produce varying control limits and center lines that are hard to interpret.
Because of these difficulties, many people believe that SPC is practical only for long, high-volume runs. This is not necessarily true. In many cases, stabilized attribute charts can eliminate both of these problems. The downsides to stabilized charts are that they are more complicated to develop and are not standard options in most common statistical software, so they must be built manually or with a macro. They require more effort, but they can be useful.
Stabilized attribute charts may be used if a process is producing parts or features that are similar. Stabilized (Z) attribute control charts also solve the issue of control limits and center lines that vary with sample size, making the chart easy to interpret visually.
Here is a typical scenario: A job-shop welding operation produces small quantities of custom items. The operation, however, always involves joining parts of similar material and similar size. The process control statistic is weld imperfections per 100 inches of weld.
Methods used to create stabilized (Z) attribute control charts are all based on their corresponding classical long-run attribute control chart methods. There are four basic types of control charts involved: p charts (fraction defective), np charts (number of defectives), c charts (count of defects) and u charts (defects per unit).
All of these charts are based on the following transformation: Z = (sample statistic – process average) / (standard deviation of the statistic). For a p chart, for example, Z = (p – p̄) / √(p̄(1 – p̄)/n).
Stabilized (Z) attribute charts can also be used for long-run u and p charts with varying sample sizes, eliminating the varying and confusing control limits.
For example, suppose 10 part numbers are run in different small runs. The parts are similar but not identical. The number of defective units has been recorded, and it must be determined whether the process is in control. Table 4 displays the application of the transformation above. The calculated Z-scores can then be plotted and compared to ±3 standard deviations. As all the values fall within ±3, the process is in statistical control for defective units.
Table 4: Small-Run Results  
Part Number  Sample Size  Defectives (np)  p  Z  UCL  LCL 
1  10  1  0.1  0.942809  3  −3 
2  15  2  0.133333  1.06066  3  −3 
3  20  2  0.1  1.06066  3  −3 
4  15  1  0.066667  0  3  −3 
5  8  1  0.125  0  3  −3 
6  10  1  0.1  0  3  −3 
7  12  1  0.083333  0  3  −3 
8  15  0  0  1.06066  3  −3 
9  10  0  0  1.06066  3  −3 
SPC can be used for short production runs and may be helpful in any operation. At a minimum, these charts are more tools to include in the continuous improvement toolbox.
In a set of data, mean (μ) and standard deviation (σ) are defined as:
μ = (x_{1} + x_{2} + x_{3} + … + x_{n}) / n
Where x_{1}, x_{2}, …, x_{n} are data values and n is the number of data elements, and
σ = √[((x_{1} – μ)^{2} + (x_{2} – μ)^{2} + … + (x_{n} – μ)^{2}) / n]
Standard deviation shows the extent of variation or spread of data. A larger standard deviation indicates that a data set has a wider spread around its mean. Process data usually has a normal distribution. The distance from the mean μ to a data value can be measured in data units. For example, a data point with a value of x = 31 seconds is 6 seconds away from a mean value of 25 seconds. This distance can also be measured by counting the number of standard deviations in the distance. If the standard deviation is 2 seconds, the same point is 6/2 or 3 standard deviations away from the mean. This count is the sigma level, Z, also known as the Z-score, as shown below.
Z = (x – μ) / σ
Z = (31 – 25) / 2 = 3
In a process, deviations from the target or mean are accepted up to a certain value, defined by the specification limits (SL) around the mean. Any value beyond a specification limit indicates a defect or unacceptable result. The farther the specification limits are from the mean, the lower the chance of defects.
A Six Sigma process has specification limits that are 6 times its sigma (standard deviation) away from its mean. Therefore, a process data point can be as much as 6 standard deviations from the mean and still be acceptable. (See Figure 1.)
In a stable process, the mean naturally shifts as much as 1.5 sigma in the long term on either side of its short-term value. The red lines in Figure 2 (below) show the extreme case of a 1.5-sigma mean shift to the right. The right specification limit is at 4.5 sigma from the mean, with a defect rate of 3.4 parts per million (PPM). The left specification limit is at 7.5 sigma from the mean, with a defect rate of 0 PPM. The overall defect rate, therefore, is 3.4 PPM. A similar argument applies to the extreme case of a 1.5-sigma shift to the left. A Six Sigma process is actually 4.5 sigma in the long term, and the 3.4 PPM defect rate is the 1-sided probability of a data value falling beyond 4.5 sigma measured from the short-term mean.
The 1.5-sigma shift makes defects approach 0 on the opposite side of the shift, even at lower sigma levels. The one-sided defect rate is therefore applicable to any capable process with 1-sided or 2-sided SLs, even at a 3-sigma level.
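The 3.4 PPM figure follows directly from the normal distribution: it is the one-sided tail probability beyond 4.5 sigma. This can be checked with the Python standard library:

```python
from statistics import NormalDist

def one_sided_ppm(z):
    """Defects per million beyond z standard deviations (one tail of the normal)."""
    return (1 - NormalDist().cdf(z)) * 1_000_000

# A Six Sigma process with the 1.5-sigma long-term shift:
# the near tail sits at 4.5 sigma, the far tail at 7.5 sigma.
print(round(one_sided_ppm(4.5), 1))  # 3.4 PPM
print(one_sided_ppm(7.5))            # effectively 0
```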
Given the specification limit, SL, the process sigma level, or process Z, is:
Z = (x – μ) / σ = (SL – μ) / σ
In this example, the process sigma level for a specification limit of 31 seconds is:
Z = (SL – μ) / σ
Z = (31 – 25) / 2 = 3
Therefore, the process is at a 3sigma quality level. In order to bring the process to the golden Six Sigma quality level, the process sigma would have to be reduced to 1.
Z = (31 – 25) / 1 = 6
In general, the Z formula can be rearranged to calculate the maximum allowable process sigma, or standard deviation, for any sigma level.
Z = (x – μ) / σ
σ = (x – μ) / Z
For example, given a mean of 25 seconds and SL of 31 seconds, for a Six Sigma quality level, the required process sigma is calculated as:
σ = (31 – 25) / 6 = 1
Similarly, for a 3sigma quality level, the process sigma must be:
σ = (31 – 25) / 3 = 2
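The rearrangement above can be wrapped in a small helper to find the maximum allowable process sigma for any target quality level:

```python
def required_sigma(sl, mu, z):
    """Maximum allowable process standard deviation so that the
    specification limit sits z sigmas from the mean."""
    return (sl - mu) / z

# From the example: mean 25 seconds, specification limit 31 seconds
print(required_sigma(31, 25, 6))  # 1.0 -> Six Sigma quality level
print(required_sigma(31, 25, 3))  # 2.0 -> 3-sigma quality level
```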
Referring back to the short- and long-term behavior of the process mean, there are 2 values for Z: short-term Z, or Z_{st}, and long-term Z, or Z_{lt}.
Z_{lt} = Z_{st} – 1.5
Z_{st} = 6
Z_{lt} = Z_{st} – 1.5 = 4.5
Sometimes the term process sigma is used instead of process sigma level, which may cause confusion. Process sigma indicates the process variation (i.e., standard deviation) and is measured in data units (such as seconds or millimeters), while the process sigma level, Z, is a count of standard deviations with no unit of measure.
Another measure of process quality is process capability, or C_{p}, which is the specification width (distance between the specification limits) divided by 6 times the standard deviation.
C_{p} = (Upper SL – Lower SL) / 6σ
The recommended minimum or acceptable value of C_{p} is 1.33. In terms of Six Sigma, this process capability is equivalent to a sigma level of 4 and longterm defect rate of 6,210 PPM. Process capability for a Six Sigma process is 2.
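The C_{p} formula is a one-liner; with two-sided limits six sigma from the mean on each side (a Six Sigma process), it evaluates to 2:

```python
def process_capability(usl, lsl, sigma):
    """Cp = specification width divided by six standard deviations."""
    return (usl - lsl) / (6 * sigma)

# A Six Sigma process: limits at mean ± 6 sigma, e.g. mean 25, sigma 1
print(process_capability(31, 19, 1))  # 2.0
```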