Depending on the environment, the organization’s maturity, and the people and processes involved, LSS practitioners may find themselves in situations that prevent them from following a textbook project. These situations may include:
If one or more of the above elements are true, running a project can be a challenge. Not only can everything one has learned seem like wasted potential and time, but a project may also look too difficult to attempt. A simple way to turn these challenges into an opportunity is to extract the maximum from the existing situation instead of fighting against it. A couple of examples demonstrate that using individual tools where they fit the purpose can be as rewarding as applying the whole framework – by the book.
Every LSS training or guideline instructs practitioners to start a project by analyzing the voice of the customer (VOC). But what if there is no project and the practitioner does not interact with the end-product customer? There is a trick to adjust the VOC tool to help improve the organization.
Treat the SMEs as the customers. Let’s take the simplest scenario where one LSS expert is assigned to a team of SMEs attempting to improve their own processes. The objective here is threefold:
When it comes to implementing these principles, as with any customer feedback, the key is to establish a structured method for gathering, storing and reviewing the improvement ideas. The following are some tips that can assist in building a simple mechanism to manage a team’s improvement ideas:
With numerous “customers,” this process is more complicated; however, given team discipline and collaboration, it can become a success. The end goal should be for the team to become self-sufficient in improving its processes when the LSS expert is no longer available.
VOC is not the only instrument that teams can use by themselves. Other examples of useful tools that every team can use in everyday work include project management documents that bring structure to every initiative.
Even if LSS is not used in everyday operations, a smart expert can still smuggle a few useful tools into the workplace. This is because every organization runs projects; all projects typically bring change and opportunities for improvement. As these tools are simple and universal, no matter what methodology an organization uses, LSS best practices around project management documentation can often be the first big win. This can apply to any initiative, starting with a local team project and finishing with a global organizational change. Here are some examples of tools that each person running a project should befriend:
What if the LSS expert is assigned to a team that does not run any projects? One might argue that the space for improving their operations is limited. There is a tool, though, that can be applied in any circumstances and implemented by the team independently – 5S (sort, set in order, shine, standardize, sustain) only sounds like one tool, but it is by far one of the most helpful. Apart from the visible improvements 5S offers in each of the five phases of DMAIC (Define, Measure, Analyze, Improve, Control), it offers plenty of opportunities to embed the continuous improvement mindset quickly and effectively. (Again, if it is not possible to use all the elements at once, fit-for-purpose is the most sensible approach to follow.)
In operations like human resources, finance or outsourcing, some 5S techniques can be applied as successfully as in manufacturing. Good analogies for a service environment relate to virtual workplaces. Some examples include setting up document repositories and shared locations, standardizing service inputs or outputs, and keeping the PC workplace tidy.
The previous examples demonstrate how to improve an existing process with little effort. But what if the process is not there yet? In such instances, the LSS expert might be asked to design and implement an activity that was not previously performed.
When establishing a new function, going through reorganization or simply starting a new activity, a couple of LSS tools can be utilized to help define and document the change taking place.
When the reality is different from what was taught during training, the choices are to give up or to adjust one’s approach. Simplifying the tools with a fit-for-purpose approach makes them easier to remember and, therefore, encourages staff to use them more often. Many small improvements have a good chance of translating into a continuous improvement culture for the whole organization.
Press Contact
Diane Tilley
(888) 744-6295
Kitchener, Ontario, February 27, 2017 – SigmaXL Inc., a leading provider of user-friendly Excel add-ins for statistical and graphical analysis, announces the release of SigmaXL Version 8.
“SigmaXL was designed from the ground up to be a cost-effective, powerful, but easy-to-use tool that enables users to measure, analyze, improve and control their service, transactional, and manufacturing processes. As an add-in to the already familiar Microsoft Excel, SigmaXL is ideal for Lean Six Sigma training or use in a college statistics course. Our slogan for Version 8 is Multiple Comparisons Made Easy,” said John Noguera, CTO, SigmaXL.
New features in Version 8 include:
Said Dr. Peter Wludyka, coauthor of the book The Analysis of Means: A Graphical Method for Comparing Means, Rates, and Proportions: “I am happy to endorse the ANOM charts introduced in SigmaXL Version 8. They are easy to use and accurately handle balanced and unbalanced data. We collaborated to extend multiway slicing to binomial and Poisson data, and these are included in the Two-Way charts, where SigmaXL automatically recommends Slice Charts when the interaction is significant.”

A free 30-day trial version is available for download from the SigmaXL website at www.SigmaXL.com.
About SigmaXL Inc.
SigmaXL is a leading provider of user-friendly Excel add-ins for Lean Six Sigma tools and Monte Carlo simulation. SigmaXL customers include market leaders like Agilent, Diebold, FedEx, Microsoft, Motorola, and Shell. SigmaXL software is also used by numerous colleges, universities and government agencies.
For more information, visit http://www.SigmaXL.com or call 1-888-SigmaXL (888-744-6295).
The business is going through a cultural transformation in all of its plants. It is implementing a corporate strategy to support common continuous improvement thinking and language across the enterprise – laying the groundwork required for a sustainable continuous improvement culture. The business is using four phases in its continuous improvement rollout, as shown in Figure 1 below.
I. Foundation and the organizational alignment
II. Expansion and discipline
III. Integration and reinforcement
IV. Sustaining momentum
The Kansas City plant, my plant, has completed Phase II and is working its way through Phase III.
The company sent all employees through a simulated work environment (SWE) where they assembled and disassembled wood cars on a real line using real tools and bolts. (This concept was taken from Caterpillar, which went through the same cultural transformation years ago.) This teaches everyone how to use some Lean tools and shows everyone how the team leads will be used in the new environment. During a two-day training, employees eliminated waste between the different runs and watched their quality and delivery rates improve after each run. After SWE training, the team leads train the employees on what Lean tools are to be used and how to use them. The Lean tools being taught are: 5S, cyclical and non-cyclical standard work, total productive maintenance, quick changeover, inventory management, value stream mapping, error proofing, process problem solving, and Kaizen. The company provides an overview of the same tools to all employees so that they understand what team leads are trying to accomplish.
The Kansas City plant has committed more than 30,000 hours to training since starting its LSS journey.
The continuous improvement lead at the Kansas City plant was asked how important Lean Six Sigma is for implementing process improvements in manufacturing. He replied that, as a publicly traded company, its employees work for the shareholders. LSS fits into the strategic pillars: growth, leadership development, continuous improvement and sustainability. The purpose of the LSS program is to develop a continuous improvement culture and mindset through people, processes and systems that enable best-in-class productivity, quality and time to market – all helping to create better value for the customer. The following examples are a look at some of the ways in which my company is implementing process improvements.
Example 1: Continuous Improvement Waste Elimination
At my company, a tactical manufacturing engineer’s job is to work with the team leads and assist them with problem solving and implementing improvements. The engineers teach the team leads the tools they should use and what data to collect to make sure improvements are made. One project was the team lead’s Yellow Belt project – the redesign of the fender support cell. For his project he had to complete an A3 (a Lean tool for problem solving), which included a problem statement, future state conditions, current conditions and an implementation plan.
The team’s problem statement was that the fender support subassembly had 27.36 seconds of walk time in each cycle, which added up to 9.12 hours of walk time a week (474.24 hours annually), costing the company approximately $10,883 annually in non-value-added tasks. The team’s objective was to reduce walk time by 66 percent (18 seconds a cycle), freeing up more time for value-added tasks without negatively impacting quality. The team determined what area(s) to focus on by creating a Pareto chart of the steps taken per cycle and looking at why the steps were necessary (see Figure 2). The team determined that the locations of parts and materials were the main reasons for the high number of steps.
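As a rough check on the arithmetic, the quoted figures are mutually consistent if the line runs about 1,200 cycles per week. That cycle count is not stated in the article; it is an assumption used here only to show the figures hang together:

```python
# Sanity check of the walk-time figures quoted above.
SECONDS_PER_CYCLE = 27.36
CYCLES_PER_WEEK = 1_200   # assumed value, not stated in the article
WEEKS_PER_YEAR = 52

hours_per_week = round(SECONDS_PER_CYCLE * CYCLES_PER_WEEK / 3600, 2)
hours_per_year = round(hours_per_week * WEEKS_PER_YEAR, 2)
print(hours_per_week)   # 9.12
print(hours_per_year)   # 474.24
```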
As a team, they captured the current state layout and created a spaghetti diagram to show the process flow of the subassembly area. Then they cut out all the material and equipment in the subarea and experimented with a new placement layout for the future state. Improvements generated from this process included a smaller work table with a material rack built on the back of it, a new layout of the material row-packs and a buffer table reduced from 48 pieces to 1 piece. Figure 3 presents the layout of the work station – before and after.
As a result of the new layout, the team reduced the walk time. The buffer reduction resulted in an increase in quality – from an average of 18 defects per day down to nine. The work envelope was smaller, and the Yamazumi board (another Lean tool) reflected an 18-second reduction in walk time. The new layout also opened up 2,000 square inches of free space on the work floor. This was a good process improvement for the team and the line because it showed that substantial improvements can be made with just a little time and teamwork.
Example 2: O_{2} Sensors Causing High Scrap Costs
The first Green Belt project that I mentored was focused on a quality issue: O_{2} sensors causing high scrap costs. From January 2, 2014, to March 31, 2014, the nonconforming material (NCM) rate for O_{2} sensors and exhaust pipes resulting from O_{2} sensors being stripped or cross-threaded was 17,394 parts per million (PPM), which cost the company $29,552 annually. The improvements reduced the NCM rate for O_{2} sensors and exhaust pipes from 17,394 PPM to 2,253 PPM, an 87 percent improvement in performance, saving $26,689 annually.
Following DMAIC (Define, Measure, Analyze, Improve, Control) helped the project stay on track. The root cause of the problem was that the pipe thread quality specification was not being met, which caused cross-threading during assembly. Capability was also better when the operators hand-started the O_{2} sensor before applying the final torque. The Green Belt candidate worked with the supplier to meet the print specification on the threads, which yielded the 87 percent improvement. Figure 4 presents a control chart showing the before-and-after data of O_{2} sensors stripped or cross-threaded.
Example 3: Addressing Ergonomics
Six Sigma can also be followed when trying to reduce ergonomic scores. My company has an ergonomic score for every job performed. From January 24, 2013, to December 12, 2013, the handlebar subassembly stations had ergonomic job measurement system (EJMS) scores with room for improvement; the stations cost the company $131,492 annually. An unused subassembly carousel was repurposed to replace the work tables used to build the handlebars. This eliminated two major handlebar lifts as well as additional handling of the handlebars. EJMS scores were reduced by an average of 27.5 percent, resulting in a cost avoidance of $108,393. Additionally, operator efficiency improved to support a volume increase of 30.4 percent, resulting in a $47,170 cost avoidance for 2014.
Ergonomics is a big part of assembly; any time improvements can be made to reduce ergonomic scores, leadership is on board. These projects can be difficult to implement because equipment can be expensive; repurposing equipment reduces those costs. This change was a big improvement for the line and helped with operator efficiency. Since it was implemented, head count has been reduced from three operators to two, due to the lower volume and the higher operator efficiency. This never would have happened if the conveyor had not been implemented (Figure 5).
Example 4: Improving the Error-Proofing System
Another project addressed by a Green Belt candidate was improving the error-proofing system that the company uses on the assembly line. From January 2, 2014, to January 24, 2014, the average number of assembly verification and information system (AVIS) station bypasses was 166 per day on the lines, which cost the company $77,522 annually. The improvement actions taken were to change and verify the configuration settings for the adjuster-nut AVIS station. The results reduced the number of AVIS bypasses from 43 percent defective (166/day) to 3.7 percent defective (22/day), saving $77,522 annually.
A Pareto chart of all the AVIS stations helped the team focus on the one problem (Figure 6). After following the DMAIC methodology, the Green Belt candidate determined that the station configuration and the recipe program were incorrect. The candidate fixed these two issues and the problem was eliminated. The remaining challenge for this project is the sustainability of AVIS knowledge in the manufacturing engineering group: only two engineers know how to program the AVIS stations, and it is time consuming to train others on the process.
As asked in The Toyota Way by Jeffrey Liker:
“What do we know about changing a culture?
1. Start from the top – this may require an executive leadership shakeup.
2. Involve from the bottom up.
3. Use middle managers as change agents.
4. It takes time to develop people who really understand and live the philosophy.
5. On a scale of difficulty, it is ‘extremely’ difficult.”
This is what I see at my current company. Enacting a culture of change and improvement takes time; some leadership changes have occurred, the company lacks middle managers as change agents, and change is extremely difficult.
Lean coupled with Six Sigma tools drives decision-making with data and metrics and provides a mechanism to quantify the potential for variation, defects and risk – as well as value-added work and resource optimization – before implementing actual changes.
Leadership must be involved in order to understand and communicate the importance of LSS and its deployment. Leadership must support the transformation and lead employees in the change. If projects are completed and the results are shared with the “nonbelievers,” an LSS implementation will be successful.
Middle management is focused on making improvements and ensuring bottom-line savings, which can lead to a need for lower staffing levels. This is a problem at my company, not only with supervisors but also with manufacturing engineers. They are doing the same amount of work as before, or more, with fewer middle-management employees. Middle managers can become overstressed and overworked, leading to increased turnover.
LSS is the future of the company, and it will be a long journey to fully embed it in the continuous improvement culture; however, LSS process improvements will help the company keep costs low, quality high and customer satisfaction high.
Classical statistical process control (SPC) methods, such as individuals and moving range (I-MR) and X-bar and R charts, were developed in the era of mass production of identical parts. Production runs lasted for weeks, months and even years, and many SPC rules of thumb were created for this environment (as noted in The Six Sigma Handbook by Thomas Pyzdek). This may not have been a problem in low-mix, high-volume production, but it is impractical or impossible in today’s high-mix, low-volume production, where an entire production run can consist of fewer parts than are required to start a standard control chart. Standard SPC methods can, however, be modified slightly to work with small runs.
As a rule of thumb, if at least 10 different values occur and repeat values make up no more than 20 percent of the data set, the data can be considered variable. Otherwise the data is considered discrete, and attribute control charts should be used.1 There are several approaches for short runs using variable data, but the Z-MR chart is preferred because all the subgroups are used; other methods exclude subgroups. The following explains what the Z-MR chart is and how practitioners can use it.
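The rule of thumb above is easy to automate. The following is a minimal sketch; the function name and the second, deliberately failing example are illustrative only:

```python
from collections import Counter

def is_variable_data(data):
    """Rule of thumb from the text: treat data as variable (continuous)
    if at least 10 distinct values occur and repeat values make up no
    more than 20 percent of the data set; otherwise treat it as
    discrete (attribute) data."""
    counts = Counter(data)
    # Every occurrence of a value beyond its first is a "repeat value."
    repeats = sum(c - 1 for c in counts.values())
    return len(counts) >= 10 and repeats / len(data) <= 0.20

print(is_variable_data([10.34, 9.23, 10.54, 9.84, 10.30,
                        17.56, 19.26, 22.72, 18.45, 21.42]))  # True
print(is_variable_data([1, 1, 2, 2, 3]))                      # False
```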
Statisticians and engineers often use normalizing transformations; sigma level and process capability are two common applications. Sigma level is the same thing as the Z-value – the number of standard deviations between a value of interest and the mean of the data. The Z-value can be used to create control charts that are independent of the units of measure, so several different characteristics can be plotted on the same control chart as long as they are produced by a similar process. Z-charts are independent of the units of measure and can be thought of as true process control charts. A Z-MR chart can be used with short-run processes when there is not enough data in each run to calculate proper control limits. Z-MR charts standardize the measurement data by subtracting the mean to center the data, then dividing by the standard deviation.
Standardizing allows a practitioner to evaluate data from different runs by interpreting a single control chart. The Z-chart option is supported by Minitab (and other statistical software products). The standardized data comes from a population with mean = 0 and standard deviation = 1, so a single plot can be used for the standardized data from different parts or products. The resulting control chart has a center line at 0, an upper limit at +3 and a lower limit at -3.
Example of a Z-Chart
A specialty manufacturer of pick-and-place heads for small parts has a new process for making a vacuum orifice. This process is being used on eight parts with different-sized orifices ranging from 10 microns to 30 microns in diameter. These parts are hard to measure and are run in small batches. There are thus few samples to study, but it still needs to be determined whether the process is stable and in control. (Note that the measurement system has been validated.)
The two parts of Table 1 show the first set of data.
Table 1: Sample Data Set (Part 1)  
Part Numbers  Measurement 
1  10.34 
1  9.23 
1  10.54 
1  9.84 
1  10.30 
2  17.56 
2  19.26 
2  22.72 
2  18.45 
2  21.42 
3  25.08 
3  25.02 
3  24.46 
3  24.80 
3  24.39 
4  20.01 
4  19.93 
4  19.96 
4  19.97 
4  19.89 
5  10.58 
5  9.12 
5  10.67 
5  10.38 
5  10.39 
6  29.37 
6  29.43 
6  30.16 
6  31.56 
6  30.23 
7  29.52 
7  30.56 
7  26.59 
7  27.57 
7  29.66 
8  9.57 
8  9.90 
8  10.20 
8  13.50 
8  12.67 
Table 1: Sample Data Set (Part 2)  
Part Number  Mean  Standard Deviation  Range 
1  10.049  0.523  1.303 
2  19.88  2.136  5.16 
3  24.749  0.315  0.686 
4  19.953  0.0438  0.116 
5  10.229  0.632  1.552 
6  30.149  0.883  2.185 
7  28.782  1.639  3.969 
8  11.168  1.788  3.932 
If this data is put in an individuals and moving range (I-MR) chart, the result has little meaning, as there is not enough data to calculate statistically correct control limits. A Z-chart (Figure 1) can overcome this limitation.
To build the Z-chart using Minitab: choose Stat > Control Charts > Variables Charts for Individuals > Z-MR. In Variables, select Measurement; in Part indicator, select Part Numbers.
Minitab provides four methods for estimating σ, the process standard deviation. Choose an estimation method based on the properties of the particular process or product at hand, or enter a historical value. The data plotted on the Z-chart is Z_{i} = (x_{i} – μ) / σ, where μ and σ are estimated according to the method chosen. Make assumptions about the process variation carefully – this should not be taken lightly, as the estimate of the standard deviation, and therefore the results, will differ between assumptions.
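The default "By runs" standardization can be sketched as follows. This is an illustrative approximation, assuming (as Minitab's individuals charts do) that σ is estimated from the run's average moving range divided by the unbiasing constant d2 ≈ 1.128:

```python
from statistics import mean

D2 = 1.128  # unbiasing constant for moving ranges of span 2

def zmr_by_runs(runs):
    """Standardize each run independently ("By runs" estimation):
    center on the run mean and scale by sigma estimated from the
    run's average moving range (MR-bar / d2).
    `runs` maps a part number to its list of measurements."""
    z = {}
    for part, xs in runs.items():
        mu = mean(xs)
        mr_bar = mean(abs(a - b) for a, b in zip(xs, xs[1:]))
        sigma = mr_bar / D2
        z[part] = [(x - mu) / sigma for x in xs]
    return z

# First two runs from Table 1:
runs = {1: [10.34, 9.23, 10.54, 9.84, 10.30],
        2: [17.56, 19.26, 22.72, 18.45, 21.42]}
for part, zs in zmr_by_runs(runs).items():
    print(part, [round(v, 2) for v in zs])
```

Each run's standardized values then fall on a common chart with center line 0 and limits at ±3.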
Use Table 2 to help select a method of estimation.
Table 2: How to Select a Method of Estimation  
Method Type  Use When  Does This 
Constant (pool all data)  All the output from the process has the same variance – regardless of the size of the measurement  Pools all the data across runs and parts to obtain a common estimate of σ 
Relative to size (pool all data, use log [data])  The variance increases in a fairly constant manner as the size of the measurement increases  Takes the log of the data, then pools the transformed data across runs and parts to estimate σ 
By parts (pool all runs of same part/batch)  All runs of a particular part or product have the same variance  Combines all runs of the same part or product to estimate σ 
By runs (no pooling) * default option  It cannot be assumed that all runs of a particular part or product have the same variance  Estimates σ for each run independently 
Under Z-MR Options, select Estimate and pick “By runs” (the default), as equal variance cannot be assumed, as shown in Figure 3.
Click OK and OK again. The resulting control chart is shown in Figure 4.
The process is stable and in control.
There are two issues with plotting attribute data from short-run processes: attribute charts need relatively large sample sizes to be meaningful, and each part number would traditionally require its own chart.
Because of these difficulties, many people believe that SPC is practical only for long, high-volume runs. This is not necessarily true. In many cases, stabilized attribute charts can eliminate both of these problems. The downsides to stabilized charts are that they are more complicated to develop and they are not standard options in most common statistical software; these charts must be made manually or a macro must be created. They may require more effort, but they can be useful.
Stabilized attribute charts may be used if a process is producing parts or features that are similar. Stabilized (Z) attribute control charts also solve the issue of varying control limits and central lines due to varying sample sizes, making the chart easy to visibly interpret.
Here is a typical scenario: A jobshop welding operation produces small quantities of custom items. The operation, however, always involves joining parts of similar material and similar size. The process control statistic is weld imperfections per 100 inches of weld.
Methods used to create stabilized (Z) attribute control charts are all based on their corresponding classical long-run attribute control chart methods. There are four basic types of control charts involved: p (proportion defective), np (number of defective units), c (number of defects) and u (defects per unit).
All of these charts are based on the following transformation: Z = (sample statistic – process average) / process standard deviation.
Stabilized (Z) attribute charts can also be used for long-run u and p charts with varying sample sizes, eliminating the varying and confusing control limits.
For example, 10 part numbers are run in separate small runs. The parts are similar but not identical. The number of defective units has been recorded, and it is desired to determine whether the process is in control. Table 4 displays the application of the formulae above. The calculated Z-scores can then be plotted and compared to ±3 standard deviations. As all the values fall within ±3, the process is in statistical control for defective units.
Table 4: Small-Run Results  
Part Number  Sample Size  Defectives (np)  p  Z  UCL  LCL 
1  10  1  0.1  0.942809  3  -3 
2  15  2  0.133333  1.06066  3  -3 
3  20  2  0.1  1.06066  3  -3 
4  15  1  0.066667  0  3  -3 
5  8  1  0.125  0  3  -3 
6  10  1  0.1  0  3  -3 
7  12  1  0.083333  0  3  -3 
8  15  0  0  -1.06066  3  -3 
9  10  0  0  -1.06066  3  -3 
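The stabilized Z computation for defective units can be sketched as below. This is a minimal illustration assuming a pooled estimate of the process average p-bar; the Z values it produces depend on how the process average and standard deviation are estimated, so they will not necessarily reproduce the table's figures, but the in-control conclusion is the same:

```python
from math import sqrt

def stabilized_p_z(sample_sizes, defectives):
    """Stabilized (Z) p-chart sketch: pool all runs to estimate the
    process average p-bar, then standardize each run's proportion
    defective so it can be judged against fixed limits of +/-3."""
    p_bar = sum(defectives) / sum(sample_sizes)
    zs = []
    for n, np_ in zip(sample_sizes, defectives):
        p = np_ / n
        sigma_p = sqrt(p_bar * (1 - p_bar) / n)
        zs.append((p - p_bar) / sigma_p)
    return zs

n = [10, 15, 20, 15, 8, 10, 12, 15, 10]
d = [1, 2, 2, 1, 1, 1, 1, 0, 0]
zs = stabilized_p_z(n, d)
print(all(abs(z) < 3 for z in zs))  # True -> in statistical control
```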
SPC can be used for short production runs and may be helpful in any operation. At a minimum, these charts are more tools to include in the continuous improvement toolbox.
Avoid Two Common Mistakes in Measurement System Analysis
Learn two of the common mistakes made during measurement system analysis and how to avoid them.
Run Charts: A Simple and Powerful Tool for Process Improvement
Among other benefits, a run chart is used to determine whether the central tendency of a process is changing. Learn stepbystep how to create and then interpret a run chart.
Six Lessons for Deploying a BPM Workflow Product in a Transactional Environment
With limited visibility of tasks and plenty of room for nonstandardized methods, transactional processes can be a challenge to manage. Here are six lessons for building a BPM workflow to improve a transactional operation.
Six Steps to Effectively Plan for Lean Six Sigma Efforts
Follow these six steps to successfully launch – and maintain – a continuous improvement program.
Using Monte Carlo Simulation to Manage Schedule Risk
Monte Carlo simulation provides a probabilistic analysis of a project schedule, helping project managers make the best decisions to keep work on track.
In a set of data, mean (μ) and standard deviation (σ) are defined as:
μ = (x_{1} + x_{2} + x_{3} + … + x_{n}) / n

Where x_{1}, x_{2}, …, x_{n} are data values and n is the number of data elements, and

σ = √(((x_{1} – μ)^{2} + (x_{2} – μ)^{2} + … + (x_{n} – μ)^{2}) / n)
Standard deviation shows the extent of variation or spread of the data: a larger standard deviation indicates a wider spread around the mean. Process data often approximates a normal distribution. The distance from the mean μ to a data value can be measured in data units; for example, a data point with a value of x = 31 seconds is 6 seconds away from a mean of 25 seconds. The same distance can also be measured by counting the number of standard deviations it spans. If the standard deviation is 2 seconds, that point is 6/2 = 3 standard deviations away from the mean. This count is the sigma level, Z, also known as the Z-score, as shown below.
Z = (x – μ) / σ
Z = (31 – 25) / 2 = 3
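The calculation above can be expressed directly; the function name here is illustrative:

```python
def sigma_level(x, mu, sigma):
    """Number of standard deviations between a value and the mean (the Z-score)."""
    return (x - mu) / sigma

print(sigma_level(31, 25, 2))  # 3.0
```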
In a process, deviations from the target or mean are accepted to a certain value defined by the specification limits (SL) around the mean. Any value beyond the specification limit indicates a defect or unacceptable result. The farther the specification limits are from the mean, the lower the chance of defects.
A Six Sigma process has specification limits that are 6 times its sigma (standard deviation) away from its mean. Therefore, a process data point can be as much as 6 standard deviations from the mean and still be acceptable. (See Figure 1.)
In a stable process, the mean naturally shifts as much as 1.5 sigma in the long term on either side of its short-term value. The red lines in Figure 2 (below) show the extreme case of a 1.5-sigma mean shift to the right. The right specification limit is at 4.5 sigma from the mean with a defect rate of 3.4 parts per million (PPM). The left specification limit is at 7.5 sigma from the mean with a defect rate of 0 PPM. The overall defect rate, therefore, is 3.4 PPM. A similar argument applies to the extreme case of a 1.5-sigma shift to the left. A Six Sigma process is actually 4.5 sigma in the long term, and the 3.4 PPM defect rate is the 1-sided probability of having a data value beyond 4.5 sigma measured from the short-term mean.
The 1.5-sigma shift makes defects approach 0 on the opposite side of the shift, even at lower sigma levels. The one-sided defect rate is applicable to any capable process with 1-sided or 2-sided SLs, even at a 3-sigma level.
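The 3.4 PPM figure can be verified from the standard normal distribution. A minimal sketch using only the standard library (the function names are illustrative):

```python
from math import erf, sqrt

def norm_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def defect_ppm(z_short_term, shift=1.5):
    """One-sided long-term defect rate in PPM for a given short-term
    sigma level, assuming the conventional 1.5-sigma mean shift."""
    z_long_term = z_short_term - shift
    return (1 - norm_cdf(z_long_term)) * 1e6

print(round(defect_ppm(6), 1))  # 3.4 PPM for a Six Sigma process
print(round(defect_ppm(4)))     # 6210 PPM for a 4-sigma process
```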
Given the specification limit, SL, the process sigma level, or process Z, is:
Z = (x – μ) / σ = (SL – μ) / σ
In this example, the process sigma level for a specification limit of 31 seconds is:
Z = (SL – μ) / σ
Z = (31 – 25) / 2 = 3
Therefore, the process is at a 3sigma quality level. In order to bring the process to the golden Six Sigma quality level, the process sigma would have to be reduced to 1.
Z = (31 – 25) / 1 = 6
In general, the Z formula can be rearranged to calculate the maximum allowable process sigma, or standard deviation, for any sigma level.
Z = (x – μ) / σ
σ = (x – μ ) / Z
For example, given a mean of 25 seconds and SL of 31 seconds, for a Six Sigma quality level, the required process sigma is calculated as:
σ = (31 – 25) / 6 = 1
Similarly, for a 3sigma quality level, the process sigma must be:
σ = (31 – 25) / 3 = 2
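The rearranged formula is a one-liner in code; the function name is illustrative:

```python
def required_sigma(sl, mu, z):
    """Maximum allowable process standard deviation for a target sigma
    level z, given the mean and the specification limit."""
    return (sl - mu) / z

print(required_sigma(31, 25, 6))  # 1.0 -> Six Sigma quality
print(required_sigma(31, 25, 3))  # 2.0 -> 3-sigma quality
```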
Referring back to the short- and long-term behavior of the process mean, there are 2 values for Z: short-term Z, or Z_{st}, and long-term Z, or Z_{lt}.
Z_{lt} = Z_{st} – 1.5
Z_{st} = 6
Z_{lt} = Z_{st} – 1.5 = 4.5
Sometimes the term process sigma is used instead of process sigma level, which may cause confusion. Process sigma indicates the process variation (i.e., standard deviation) and is measured in data units (such as seconds or millimeters), while the process sigma level, Z, is a count with no unit of measure.
Another measure of process quality is process capability, or C_{p}, which is the specification width (distance between the specification limits) divided by 6 times the standard deviation.
C_{p} = (Upper SL – Lower SL) / 6σ
The recommended minimum or acceptable value of C_{p} is 1.33. In terms of Six Sigma, this process capability is equivalent to a sigma level of 4 and a long-term defect rate of 6,210 PPM. The process capability of a Six Sigma process is 2.
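The capability calculation follows directly from the formula above. A minimal sketch, with illustrative numbers chosen so the limits sit 6 and 4 standard deviations from a centered mean:

```python
def cp(usl, lsl, sigma):
    """Process capability: specification width over six standard deviations."""
    return (usl - lsl) / (6 * sigma)

# Two-sided limits at mean 25 +/- 6 with sigma = 1 (a Six Sigma process):
print(cp(31, 19, 1))           # 2.0
# Limits at 4 sigma from the mean give the minimum recommended Cp:
print(round(cp(4, -4, 1), 2))  # 1.33
```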
Most well-run companies spend a significant proportion of management time in planning. Budgets, production, new products and other important elements of the business plan are well thought through and tracked. Unfortunately, many of these same companies do not apply this same discipline to their continuous improvement activities. Green Belts (GBs) and Black Belts (BBs) are trained to employ the plan-do-check-act (PDCA) cycle, but the organization then pushes the Belts to do projects that are not well planned. Change is undertaken without a clear understanding of how these projects fit into the long-term business plan. The result is reactive problem solving that may not have a lasting and significant impact on the long-term health of the organization. Effective management of continuous improvement, however, is primarily driven by good planning. Failing to integrate continuous improvement into the overall business planning cycle is a leading cause of failed deployments.
There are many reasons why organizations do not fully integrate their improvement programs into the business planning cycle. Sometimes it is a confidence problem – when the program is new, leadership may not believe it will deliver the promised results. Sometimes it is a trust issue – if the program is being run by an outside consultant, leadership may be hesitant to share strategic plans. Most often, however, it is an oversight – LSS is not considered a core part of the operation and, thus, is not fully integrated into the plan.
Regardless of the reason, failing to incorporate improvement initiatives into the plans used to run the business results in dysfunctional behavior. These dysfunctions sometimes look like solid management of the LSS program. Most, in fact, are good components of a proper portfolio plan, but since the program is incomplete the components tend to drive the wrong behavior. The distinction is whether the components are proactive and long-lived within the organization.
For example, many companies charge one functional area (such as finance or operations) with accountability for LSS projects. This is important from a validation perspective. Unfortunately, when continuous improvement is responsible to only one function, the needs of that function tend to dominate and drive all project selection. For example, when finance is accountable for all LSS projects, priority is given to projects that result in cost reduction even when this is not the most important issue on the strategic plan. Likewise, when the leadership of the continuous improvement program is responsible for project selection, projects tend to be aligned to GB and BB training.
Narrowly governed programs generally have a one- to five-year lifespan within the organization and are then supplanted by other programs focused on the needs of the functional areas not driving the LSS agenda. Programs aligned with the strategic plan and governed by a scorecard approach in which the needs of all functional areas are represented tend to be sustainable in the long term and create business impact. To do this, however, the organization must actively plan the execution of LSS, not just run projects.
What follows is a simple process and set of proven tools that have been used in several industries to help senior leadership teams to:
1. Establish a set of goals and objectives for the business to achieve. Most companies do this as part of their employee performance and business planning cycles. These same goals must be the goals of the LSS program. When the LSS program pursues goals not directly related to how the business runs (particularly if its employees are incentivized toward these other goals), the program becomes an add-on activity. Relegating LSS to an additional activity within the employees’ responsibilities ensures that continuous improvement will always be a lower priority than the “real” goals for which they are paid. The role of LSS should be to ensure delivery of those real goals; additional goals serve only to distract the team from the strategic agenda. [Tools for this step include: employee goals and weighting, annual business plan, strategic business plan, and mission statements.]
2. Identify the needs of the business and core competencies. Some of a company’s goals will be easily achieved, and the activities required to achieve them are well-known; for other goals, the path to success is less clear. By focusing the LSS activities on the business goals that are at risk, efforts are aligned to the areas where the greatest effects will be felt. Furthermore, since LSS is helping key leaders meet difficult goals, buy-in for the continuous improvement program is generated. The goals of the business are the LSS goals. When the business goals are met, the organization is propelled forward and creates value. [Tools for this step include: quality function deployment (QFD), failure mode and effects analysis (FMEA), key process indicator (KPI) scorecards, and system capability scorecards.]
3. Make plans to close the gap. This is, of course, where most organizations want to start. Without baseline data showing where and to what extent the LSS projects will improve the real needs of the business, however, the process is likely to create an arbitrary list of projects that may or may not support long-term goals. It is vital that this step be undertaken only after the real needs of the business are understood.
4. Match team members to projects with problems they have an interest in solving. Many organizations assign their LSS leaders to projects regardless of those leaders’ backgrounds, education and experience. This model works well enough if the company also dedicates those LSS resources (i.e., GBs and BBs) full time to improvement projects, but this is not the reality for most GBs. Most GBs undertake improvement projects in addition to other responsibilities. Failure to keep this arrangement in mind when assigning teams frequently means people will choose their “day jobs” over project work, resulting in project delays. In an ideal case, GBs who work in non-LSS functions should be assigned to projects that directly affect that non-LSS function. This minimizes the conflict between project work and day jobs. [Tools for this step include: skills matrix, organizational charts and training of new LSS practitioners.]
5. Plan project execution. It is a mistake to identify projects and just hand them off to project teams. The success of an effective project portfolio rests on creating a steady, predictable stream of results. To achieve this, projects must be actively sequenced, resourced and managed. Nothing should be left to chance.
6. Execute the plan. This is the point where all the work in planning pays off. If planning has been done properly, then this stage is simply managing to the plan and dealing with any deviations. The key to success is to make the work visible.
When all this is done correctly (see “Critical Outputs from Good LSS Portfolio Planning” sidebar), there will be a clear vision of how a company’s continuous improvement efforts will drive success and project teams will have a clear path forward.
Mathematically, total variance is the sum of the true variance and the measurement system error. Measurement system error should be zero but, practically speaking, this is often not the case because of factors such as worn or non-calibrated gauges, inconsistency of an appraiser, and different knowledge levels among appraisers. Ideally, total variation should arise only from differences in the parts being measured. It is important to keep measurement system error as low as possible.
Variance (total) = variance (true) + variance (measurement error)
To consider a measurement system as adequate, there are set rules based on the data type being used. For continuous data, 1) gage R&R has to be within 10 percent (10 percent to 30 percent allowed if the process is not critical) of the total study variation, and 2) the number of distinct categories has to be greater than four. (For discrete data where attribute agreement analysis is used, kappa value has to be at least 0.7 for nominal and ordinal data, and Kendall’s correlation coefficient [with a known standard] has to be at least 0.9 for ordinal data.)
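For continuous data, both checks can be computed directly from the variance components of a gage R&R study. The sketch below uses hypothetical variance numbers and the common 1.41 × (part SD / gage SD) convention for the number of distinct categories:

```python
import math

def grr_metrics(var_part: float, var_grr: float) -> tuple[float, float]:
    """Percent study variation and number of distinct categories (ndc)
    from the part-to-part and gage R&R variance components."""
    sd_part, sd_grr = math.sqrt(var_part), math.sqrt(var_grr)
    sd_total = math.sqrt(var_part + var_grr)
    pct_study_variation = 100 * sd_grr / sd_total
    ndc = int(1.41 * sd_part / sd_grr)  # common convention: truncate to an integer
    return pct_study_variation, ndc

# Hypothetical variance components for illustration:
pct, ndc = grr_metrics(var_part=9.0, var_grr=0.0625)
print(f"%GRR = {pct:.1f}%, ndc = {ndc}")  # 8.3% and 16: acceptable by both rules
```

Here the gage R&R consumes less than 10 percent of the total study variation and the ndc exceeds four, so this hypothetical measurement system would pass both criteria.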
The process of conducting an MSA study is similar for continuous and discrete data. Take 10 to 20 samples for the study, provide them to two or three appraisers for the first trial, and then rerun the study. The main difference is that for continuous data the appraisers use a gauge to measure the part, while for discrete data it is left to the knowledge of the appraisers to judge whether the transaction is defective.
One common challenge in an MSA study of discrete data concerns the two trials: How can bias be removed when appraisers are given the same samples for both trials through an email? When provided the same sample twice at the same time, the appraisers will almost certainly return the same results for Trials 1 and 2; thus, no repeatability issues will be detected when the study is done in this manner. Additionally, if the two appraisers are aware that the study is being run, the reproducibility component results will be biased. The following example highlights such a mistake being made during an MSA study.
Example: Compliance Project in Banking
A project leader at a financial institution was asked to do an MSA study to confirm that the measurement system was adequate. He ran the study for a week, put 10 samples in a spreadsheet and sent them to the two appraisers. The study was completed and the data was shared with the Black Belt (BB). The BB completed the study in a statistical analysis program and found that there was no issue in repeatability. There were, however, some mismatches between the two appraisers. Curious, the BB asked the project leader how the study was conducted.
The project leader explained that he documented 10 samples in a spreadsheet and sent them to the two appraisers through separate emails. For the second trial, the project leader again sent the 10 samples in a spreadsheet via email. The BB pointed out that while the project leader had ensured that the two appraisers did not know the study was being conducted with two different individuals, a repeatability bias was involved in the process. The BB suggested that the project leader instead follow a revised procedure to ensure that neither repeatability nor reproducibility bias would be involved in the study.
The project leader took a new set of 10 samples and provided them to the SMEs following the newly documented method. This time there were differences within appraisers, but the kappa value was within the permissible limit. By using this process, the repeatability bias was removed and the true measurement system error was determined.
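For attribute data like this, the kappa statistic can be computed directly from the two appraisers' judgments. A minimal sketch, with made-up transaction labels for illustration:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa: agreement between two raters beyond what chance predicts."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Hypothetical results: each appraiser classifies the same 10 transactions.
a = ["ok", "ok", "def", "ok", "def", "ok", "ok", "def", "ok", "ok"]
b = ["ok", "ok", "def", "ok", "ok",  "ok", "ok", "def", "ok", "ok"]
print(round(cohens_kappa(a, b), 2))  # 0.74, above the 0.7 threshold
```

With one disagreement in ten transactions, the observed agreement is 0.9 and the chance-corrected kappa works out to roughly 0.74, which clears the 0.7 acceptance rule cited earlier.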
Another common challenge is observed when an MSA study is done for continuous data: How should a sample be selected when the manufacturing process runs on a number of machines that produce varying product sizes? Can that influence the MSA study?
Example: Multiple Machines in Manufacturing
A supervisor was conducting an MSA study for the thickness parameter of a grinding wheel. Parts were produced on different presses and varied from 5 mm to 200 mm in thickness (categorized into large, medium and small thickness wheels). The supervisor thought that one study of 10 samples done with two appraisers would be good enough.
She met with the Six Sigma expert in the organization and asked if she was using the right approach to conduct the study. The Six Sigma expert asked her how she would ensure that no measurement error was introduced, taking linearity into consideration. The expert recommended that the supervisor ensure the gauge is linear across the entire range of measurements (the varying range of thicknesses).
The supervisor then took another set of 10 samples each for the small, medium and large thickness wheels and repeated the gage R&R study for each range to check the linearity of the gauges. This way the supervisor ensured that both accuracy- and precision-related measurement errors were correctly addressed during the study.
While conducting MSA studies, be aware of these practical challenges and how to address them in order to avoid measurement errors.
The first step in understanding these two different processes is to understand what a model is. A model, whether mathematical, simulation or physical, is a representation of a real-world process. The model can be used for studying, experimenting or making a prediction about the real-world event without directly observing or making changes to the real-world process.
A model is created in order to understand the relationships among independent variables or inputs (Xs) and the dependent variable or outcome (Ys). Examples of mathematical models well-known in the Lean Six Sigma (LSS) world are Little’s Law and other queuing models. Simulation models can be built using computer software. A physical model is not common in Lean applications but is frequently used for experimental purposes in engineering, architectural and science applications.
British statistician George E.P. Box said, “Essentially, all models are wrong, but some are useful,” which reminds the practitioner that a model is not the real-world process and cannot fully represent it. The question of how good a model can be is answered using verification and validation. The first pitfall that many LSS practitioners fall into is using a model they created without both verifying and validating it. The second pitfall is completing one of the two and assuming that is all that is necessary. Either leads to unrealistic predictions, misguided results and a loss of the integrity of the model.
Verification is the process that ensures the model is producing or predicting the right outcomes based on the relationships of input variables and output variables that are built into the model. The verification process does not rely on, or compare to, the real-world process. Its purpose is to confirm that the model is doing exactly what the modeler “thinks” it should do when it was created. For example, if the model is supposed to return the rounded-up integer value of X1 divided by X2, does it always return 1 when X1 = 3 and X2 = 4 are entered? Or does it return 0.75?
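That verification check can be expressed as a handful of assertions against the model code itself, with no real-world data involved. A minimal sketch:

```python
import math

def model_output(x1: int, x2: int) -> int:
    """The model is supposed to return the rounded-up integer of x1 / x2."""
    return math.ceil(x1 / x2)

# Verification: confirm the model does what the modeler intended.
assert model_output(3, 4) == 1  # rounded up, not 0.75
assert model_output(8, 4) == 2  # exact division is unchanged
assert model_output(9, 4) == 3  # any remainder bumps the result up
print("verification passed")
```

If the modeler had written plain integer division instead of a ceiling, the first assertion would fail immediately, which is exactly the kind of error verification is meant to catch.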
Validation is the process that ensures the model represents the real world as closely as possible. The validation process helps a modeler be certain the correct model was built. It relies heavily on the data collected from the real world and on the modeler’s perception and understanding of the process. The validation process ensures that the model is doing what the real process is doing. (See Figure 1.)
Consider a modeler building a model to represent a queuing system at an ice cream stand. He observes an arrival profile of customers and the service rate of the server. He finds that the server serves each customer at a constant rate of three minutes per customer. He builds a model to predict the waiting time (W) when a customer arrives at the stand and finds that there are customers (X) waiting in the system. He decides to use a mathematical model of W = 3X.
The modeler verifies that he built the model correctly by entering X = 1, 2, 5, 10 and 20 into his equation; the model returns the values of W as 3, 6, 15, 30 and 60 minutes respectively. In this verification process, the model calculates the result correctly based on the modeler’s perception of the linear relationship between W and X.
To validate this model, the modeler would conduct a time study when a customer, Jessica, arrives at the stand. For five different instances, the modeler observes there are 1, 2, 5, 10 and 20 customers in the line. The real system may return different waiting times for Jessica since some customers that are already in the line may decide to leave when the waiting time exceeds their tolerance limits. As a result, Jessica’s actual waiting time becomes shorter and thus does not consistently follow the linear relationship of W = 3X. In this case, even though the model passed the verification process, it does not represent behavior of the real system and fails the validation process.
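A small simulation makes the failure concrete. The balking rule below, in which each waiting customer leaves if their own wait would exceed a random tolerance, is a hypothetical stand-in for the real behavior, not something from the study itself:

```python
import random

SERVICE_MIN = 3  # constant service time, minutes per customer

def predicted_wait(x: int) -> int:
    """The verified model: W = 3X."""
    return SERVICE_MIN * x

def observed_wait(x: int, seed: int = 42) -> float:
    """Hypothetical real-world queue: each customer ahead of Jessica
    leaves the line if their own wait would exceed a random tolerance."""
    random.seed(seed)
    wait = 0.0
    for _ in range(x):
        tolerance = random.uniform(5, 20)  # minutes this customer will put up with
        if wait + SERVICE_MIN > tolerance:
            continue  # this customer balks; the line gets shorter
        wait += SERVICE_MIN
    return wait

for x in (1, 2, 5, 10, 20):
    print(x, predicted_wait(x), observed_wait(x))
```

As the line grows, the observed wait falls further and further below the W = 3X prediction, which is the verified-but-invalid behavior described above.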
Why are both verification and validation of a model needed? Consider another example: creating a simulation model for a distribution center consisting of four product-sorting machines. At each step, a machine sorts product to its destinations. Figure 2 shows the schematic of the distribution center.
An LSS team collects data on cycle time and processing step at each machine. After that, the team builds a model using simulation software. Based on the data that was collected and statistically analyzed, the team found that the processing time of Machine A is normally distributed with a mean of 5 minutes and a standard deviation of 1 minute, Machine B has a constant processing time of 1.5 minutes and Machine C has a constant processing time of 10 minutes. Products B and C arrive with equal distribution at Machine A every 5 minutes.
After the model was created, the team ran the model until reaching a steady state and found that there is an excessive queue in front of Machine B, but none in front of Machine C. Based on the assumption of the processing time at these three machines, and the arrival profile of products B and C, the team realizes that there could be an error in the model code or parameters. The team ensures that all parameters have been entered correctly, including breaks and lunch times, processing time and distribution types, staffing and time available in a day.
Eventually, the team found a mistake in the processing time parameter at Machine B – 15 minutes was entered instead of 1.5 minutes. This error-checking process is a verification process. By ensuring that the model is producing what it should be producing, the modeler verifies that the model is error-free. Based on the assumption that Machine B sorts products faster than Machine A, there should not be any physical queue in front of it. Without proper verification, this model would have led to misguided results.
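The team's sanity check can be reduced to a deterministic single-server queue. The function below is an illustrative sketch, not the team's actual simulation: with products split equally between the two routes, Machine B effectively receives a product every 10 minutes, so a 1.5-minute service time should produce no waiting at all, while the mistyped 15 minutes produces an ever-growing backlog.

```python
def max_wait(interarrival: float, service: float, n: int = 200) -> float:
    """Deterministic single-server queue: the longest time any of the
    first n jobs waits before its service begins."""
    finish = 0.0  # time at which the server frees up
    worst = 0.0
    for k in range(n):
        arrival = k * interarrival
        start = max(arrival, finish)       # wait if the server is still busy
        worst = max(worst, start - arrival)
        finish = start + service
    return worst

print(max_wait(10, 1.5))  # 0.0 -> no queue at Machine B, as expected
print(max_wait(10, 15))   # 995.0 -> the backlog caused by the 15-minute typo
```

Each extra job adds 5 minutes of backlog in the mistyped case (15-minute service against 10-minute arrivals), so the queue grows without bound, exactly the symptom the team observed in the model.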
Consider the same distribution center and a corrected model. The team decides to use the model to predict the behavior of the process during a peak demand period. What is the best way to validate the model and ensure the model acts as close to the real process as possible? For an existing process for which the data is available, the process is simple. The team may use data from the previous peak period (such as work in progress, queue length and queue time from the last known period). They can use the known data as input variables and compare the results of output variables to the last known data collected to adjust the model. This way the team can ensure that the model acts similarly to the realworld process. Validating the model is not as easy when the process did not previously exist or data is not available. The team can only assume the most likely behavior of the process based on the relationships between input and output variables.
The validation process should be performed after the verification process has been completed. The validation process normally involves real data, which can consume more of a team’s resources than the verification process. The table below suggests some validation methods for each modeling scenario.
Modeling Scenarios With Corresponding Validation Methods
1. Model of an existing process, data available: Test the model in several different cases during normal and extreme periods using the last known data, and compare the model outputs to the last known outcomes.
2. Model of an existing process, data NOT available: Observe the behavior of the real-world process and compare it to the behavior of the model.
3. Model of a nonexistent process, relationships of variables known: Use correlation analysis to analyze the relationship between the outcome of the model and the input variables, and compare it to the known relationship of the variables.
There is no single verification or validation process that fits all scenarios. A modeler should be aware of the available methods. Both verification and validation should be completed at the earliest stage in the project – and as thoroughly as possible. The key question for verification is whether the model was built correctly. After verification, the model should be error-free. The key question for validation, on the other hand, is whether the correct model was built. After validation, it should be clear that the model acts similarly to the real-world process so a team can be confident in using it to predict the behavior of the process.
But permutations and combinations cause a lot of confusion: “Which one is which?” and “Which one do I use?” are common questions.
If I purchase a salad for lunch, it may be a mix of lettuce, tomatoes, carrots and radishes. I don’t really care what order the vegetables are when they are placed in the bowl. All that I care about is that I have a salad that contains lettuce, tomatoes, carrots and radishes. The salad could consist of “carrots, tomatoes, radishes and lettuce” or “radishes, tomatoes, carrots and lettuce.” It’s still the same salad to me.
How about the PIN for my bank account? “The PIN to my account is 8910.” If I want to access my bank account through the ATM, I do need to care about the order of those numbers. “1098” would not access my account. Neither would “9108.” It has to be exactly 8910. The order is important.
Details matter for permutations – every little detail. To a permutation, red/yellow/green is different from green/yellow/red. Order is important and absolutely must be preserved.
Combinations are much easier to get along with – details don’t matter so much. To a combination, red/yellow/green looks the same as green/yellow/red.
Permutations are for lists (where order matters) and combinations are for groups (where order doesn’t matter). In other words: A permutation is an ordered combination.
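Python's itertools makes the distinction visible directly: picking two of three colors gives six ordered lists but only three unordered groups.

```python
from itertools import combinations, permutations

colors = ["red", "yellow", "green"]
print(len(list(permutations(colors, 2))))  # 6 ordered pairs: order matters
print(len(list(combinations(colors, 2))))  # 3 unordered pairs: order doesn't
```

Every combination of two colors corresponds to exactly 2! = 2 permutations, which is why the second count is half the first.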
Note: A “combination” lock should really be called a “permutation” lock because the order in which you enter the numbers matters. A true “combination” lock would open using either 10-17-23 or 23-17-10. Actually, any ordering of 10, 17 and 23 would open a true “combination” lock.
Permutations are all possible ways of arranging the elements of a set. We’re going to be concerned about every last detail, including the order of each item. Permutations see differently ordered arrangements as different answers.
Let’s say that we have five people in a barbecue contest: Andy, Bob, Charlie, David and Eric.
How many ways can we award the first, second and third place ribbons (blue, red and yellow) among the 5 contestants?
Since the order in which ribbons are awarded is important, we need to use permutations.
Here’s a breakdown:
For this example, we picked certain people to win, but that doesn’t really matter. All that matters is that we understand that we had five choices at first, then four and then three. The total number of options was 5 × 4 × 3 = 60. We had to order three people out of five. To do this, we started with all five options then took them away one at a time (four, then three, etc.) until we ran out of ribbons.
Five-factorial (written 5!) is: 5! = 5 × 4 × 3 × 2 × 1 = 120.
But 120 is too big! It would work if we had five ribbons. But we don’t; we have three ribbons. We only want 5 × 4 × 3 (the total number of options). How do we get the factorial to “stop” at 3? We need to get rid of the 2 × 1. What do we call 2 × 1? 2-factorial! This is what is left over after we pick three winners from five contestants.
If we divide 5! by 2!, we get: 5! / 2! = (5 × 4 × 3 × 2 × 1) / (2 × 1) = 5 × 4 × 3 = 60 (because the 2 × 1 in the numerator and denominator will cancel each other out).
A better (simpler) way to write this would be: 5! / (5 – 3)!
This is saying, “use the first three numbers of 5!”
In more general terms, if we have n items total and want to pick k in a certain order, we get:
n! / (n – k)!
And this is the permutation formula: The number of ways k items can be ordered from n items:
P(n,k) = n! / (n – k)!
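The formula can be checked against a brute-force enumeration of the barbecue contest. A minimal sketch:

```python
import math
from itertools import permutations

def perm_count(n: int, k: int) -> int:
    """Ways to order k items chosen from n: n! / (n - k)!"""
    return math.factorial(n) // math.factorial(n - k)

contestants = ["Andy", "Bob", "Charlie", "David", "Eric"]
print(perm_count(5, 3))                         # 60
print(len(list(permutations(contestants, 3))))  # 60, confirmed by brute force
```

Both approaches give the 60 ways of awarding the blue, red and yellow ribbons computed above.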
We all have a relative that is laid back. Nothing bothers them. They just go about their lives as if nothing really matters.
Combinations are the happygolucky cousins of permutations. Order doesn’t matter to them. You can mix the order up, and they’re still happy. The world looks the same to them whether it’s ordered or not ordered.
Let’s say that instead of awarding blue, red and yellow ribbons for our barbecue contest, we award the top three with “participant” ribbons. How many ways can I award three participant ribbons to five people?
In this case, the order doesn’t matter. If I give a participant ribbon to Andy, Bob and Charlie, it’s the same as giving a participant ribbon to Charlie, Andy and Bob. Either way, they’re all going to be equally disappointed.
Wait a minute! Andy/Bob/Charlie = Charlie/Bob/Andy. We’ve got some repeats here. Ignore that for a moment; let’s just figure out how many ways we can rearrange three people.
We have three choices for the first person, two for the second and only one for the last. So we have 3 × 2 × 1 ways to rearrange three people.
But this looks like a permutation – it is! If you have n people and you want to know how many arrangements there are for all of them, it’s just n-factorial, or n!
So, if we have three participant ribbons to give away, there are 3! = 6 redundant orderings for every group we pick. If we want to figure out how many combinations we have, we create all of the permutations and divide by all of the redundancies. In our case, we get 60 permutations (5 × 4 × 3) and divide by the six redundancies for each group: 60 / 6 = 10.
Therefore, the general formula for a combination is:
C(n,k) = P(n,k) / k!
This means: “Find all the ways to pick k people from n, and divide by the k! variants.” Writing this out, we get our combination formula, or the number of ways to combine k items from a set of n:
C(n,k) = n! / ((n – k)! × k!)
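The same brute-force check works for combinations: with five contestants and three identical participant ribbons, both the formula and direct enumeration give 10.

```python
import math
from itertools import combinations

def comb_count(n: int, k: int) -> int:
    """Ways to choose k items from n, order ignored: n! / ((n - k)! * k!)"""
    return math.factorial(n) // (math.factorial(n - k) * math.factorial(k))

contestants = ["Andy", "Bob", "Charlie", "David", "Eric"]
print(comb_count(5, 3))                         # 10
print(len(list(combinations(contestants, 3))))  # 10, confirmed by brute force
```

Dividing the 60 permutations by the 3! = 6 redundant orderings per group gives the same 10 groups the enumeration produces.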
It’s more important to understand why permutations and combinations work than it is to memorize the formulas. You can always look up the formulas if you forget them.
Just remember: