Control charts have two general uses in an improvement project. The most common application is as a tool to monitor process stability and control. A less common, although some might argue more powerful, use of control charts is as an analysis tool. The descriptions below provide an overview of the different types of control charts to help practitioners identify the best chart for any monitoring situation, followed by a description of the method for using control charts for analysis.
When a process is stable and in control, it displays common cause variation, the variation inherent to the process. A process is in control when past experience allows its future variation to be predicted within limits. If the process is unstable, it displays special cause variation, nonrandom variation arising from external factors.
Control charts are simple, robust tools for understanding process variability.
Processes fall into one of four states: 1) the ideal, 2) the threshold, 3) the brink of chaos and 4) the state of chaos (Figure 1).^{3}
When a process operates in the ideal state, that process is in statistical control and produces 100 percent conformance. This process has proven stability and target performance over time. This process is predictable and its output meets customer expectations.
A process that is in the threshold state is characterized by being in statistical control but still producing the occasional nonconformance. This type of process will produce a constant level of nonconformances and exhibits low capability. Although predictable, this process does not consistently meet customer needs.
The brink of chaos state reflects a process that is not in statistical control, but also is not producing defects. In other words, the process is unpredictable, but the outputs of the process still meet customer requirements. The lack of defects leads to a false sense of security, however, as such a process can produce nonconformances at any moment. It is only a matter of time.
The fourth process state is the state of chaos. Here, the process is not in statistical control and produces unpredictable levels of nonconformance.
Every process falls into one of these states at any given time, but will not remain in that state. All processes will migrate toward the state of chaos. Companies typically begin some type of improvement effort when a process reaches the state of chaos (although arguably they would be better served to initiate improvement plans at the brink of chaos or threshold state). Control charts are robust and effective tools to use as part of the strategy used to detect this natural process degradation (Figure 2).^{3}
There are three main elements of a control chart as shown in Figure 3.
Control limits (CLs) ensure time is not wasted looking for unnecessary trouble – the goal of any process improvement practitioner should be to only take action when warranted. Control limits are calculated by:
1. Estimating the standard deviation, σ, of the sample data
2. Multiplying that number by three
3. Adding (3 x σ) to the average for the UCL and subtracting (3 x σ) from the average for the LCL
Mathematically, the calculation of control limits looks like: CL = average ± 3 x σ̂
(Note: The hat over the sigma symbol indicates that this is an estimate of standard deviation, not the true population standard deviation.)
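As an illustrative sketch (the data values below are hypothetical), the control limit calculation for an individuals chart can be written out in Python; here sigma is estimated from the average moving range divided by the constant d2 = 1.128:

```python
# Hypothetical process measurements, in time order
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7]

# Estimate sigma from the average moving range (d2 = 1.128 for n = 2)
moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
sigma_hat = (sum(moving_ranges) / len(moving_ranges)) / 1.128

# Control limits: average plus/minus three estimated standard deviations
average = sum(data) / len(data)
ucl = average + 3 * sigma_hat
lcl = average - 3 * sigma_hat
```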
Because control limits are calculated from process data, they are independent of customer expectations or specification limits.
Control rules take advantage of the normal curve in which 68.26 percent of all data is within plus or minus one standard deviation from the average, 95.44 percent of all data is within plus or minus two standard deviations from the average, and 99.73 percent of data will be within plus or minus three standard deviations from the average. As such, data should be normally distributed (or transformed) when using control charts, or the chart may signal an unexpectedly high rate of false alarms.
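The quoted percentages follow directly from the standard normal distribution and can be checked with the error function, since P(|Z| &lt;= k) = erf(k/sqrt(2)):

```python
import math

# Fraction of a normal population within k standard deviations of the mean
coverage = {k: math.erf(k / math.sqrt(2)) for k in (1, 2, 3)}

print(round(coverage[1] * 100, 2))  # 68.27
print(round(coverage[2] * 100, 2))  # 95.45
print(round(coverage[3] * 100, 2))  # 99.73
```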
Controlled variation is characterized by a stable and consistent pattern of variation over time, and is associated with common causes. A process operating with controlled variation has an outcome that is predictable within the bounds of the control limits.
Uncontrolled variation is characterized by variation that changes over time and is associated with special causes. The outcomes of this process are unpredictable; a customer may be satisfied or unsatisfied given this unpredictability.
Please note: process control and process capability are two different things. A process should be stable and in control before process capability is assessed.
Individuals and Moving Range Chart
The individuals and moving range (IMR) chart is one of the most commonly used control charts for continuous data; it is applicable when one data point is collected at each point in time. The IMR control chart is actually two charts used in tandem (Figure 7). Together they monitor the process average as well as process variation. With time-based x-axes, the charts show a history of the process.
The I chart is used to detect trends and shifts in the data, and thus in the process. The individuals chart must have the data time-ordered; that is, the data must be entered in the sequence in which it was generated. If data is not correctly tracked, trends or shifts in the process may not be detected and may be incorrectly attributed to random (common cause) variation. There are advanced control chart analysis techniques that forgo the detection of shifts and trends, but before applying these advanced methods, the data should be plotted and analyzed in time sequence.
The MR chart shows short-term variability in a process – an assessment of the stability of process variation. The moving range is the difference between consecutive observations. It is expected that the difference between consecutive points is predictable. Points outside the control limits indicate instability. If there are any out-of-control points, the special causes must be eliminated.
Once the effect of any out-of-control points is removed from the MR chart, look at the I chart. Be sure to remove the point by correcting the process – not by simply erasing the data point.
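A minimal sketch of the MR chart calculation (hypothetical data; for a moving range of size 2, D4 = 3.268 and there is no lower control limit):

```python
# Hypothetical measurements in time order
data = [5.1, 5.3, 4.9, 5.0, 5.4, 5.2, 5.1, 4.8]

# Moving ranges: absolute difference between consecutive observations
mr = [abs(b - a) for a, b in zip(data, data[1:])]
mr_bar = sum(mr) / len(mr)

# MR chart upper limit: D4 * average moving range (D4 = 3.268 for n = 2)
ucl_mr = 3.268 * mr_bar
out_of_control = [i for i, r in enumerate(mr, start=1) if r > ucl_mr]
```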
The IMR chart is best used when:
1. The natural subgroup size is unknown.
2. The integrity of the data prevents a clear picture of a logical subgroup.
3. The data is scarce.
4. The natural subgroup needing to be assessed is not yet defined.
Another commonly used control chart for continuous data is the Xbar and range (Xbar-R) chart (Figure 8). Like the IMR chart, it comprises two charts used in tandem. The Xbar-R chart is used when you can rationally collect measurements in subgroups of between two and 10 observations. Each subgroup is a snapshot of the process at a given point in time. The chart’s x-axes are time based, so that the chart shows a history of the process. For this reason, it is important that the data is in time order.
The Xbar chart is used to evaluate consistency of process averages by plotting the average of each subgroup. It is efficient at detecting relatively large shifts (typically plus or minus 1.5 sigma or larger) in the process average.
The R chart, on the other hand, plots the ranges of each subgroup. The R chart is used to evaluate the consistency of process variation. Look at the R chart first; if the R chart is out of control, then the control limits on the Xbar chart are meaningless.
Table 1 shows the formulas for calculating control limits. Many software packages do these calculations without much user effort. (Note: For an IMR chart, use a sample size, n, of 2.) Notice that the control limits are a function of the average range (Rbar). This is the technical reason why the R chart needs to be in control before further analysis. If the range is unstable, the control limits will be inflated, which could cause an errant analysis and subsequent work in the wrong area of the process.
Table 2: Constants for Calculating Control Limits  
n (Sample Size)  d_{2}  D_{3}  D_{4} 
2  1.128  –  3.268 
3  1.693  –  2.574 
4  2.059  –  2.282 
5  2.326  –  2.114 
6  2.534  –  2.004 
7  2.704  0.076  1.924 
8  2.847  0.136  1.864 
9  2.970  0.184  1.816 
10  3.078  0.223  1.777 
11  3.173  0.256  1.744 
12  3.258  0.283  1.717 
13  3.336  0.307  1.693 
14  3.407  0.328  1.672 
15  3.472  0.347  1.653 
Can these constants be calculated? Yes. They are based on d_{2}, a control chart constant that depends on subgroup size.
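As a sketch of how the Table 2 constants are used (hypothetical subgroups of size 5, so d2 = 2.326 and D4 = 2.114, with no D3 below n = 7; the factor A2 = 3/(d2 x sqrt(n)) folds the whole Xbar limit calculation into one constant):

```python
import math

subgroups = [  # hypothetical measurements, n = 5 per subgroup
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.3, 9.7, 10.1, 9.9],
    [9.8, 10.2, 10.0, 10.1, 10.0],
]
n, d2, D4 = 5, 2.326, 2.114

xbars = [sum(g) / n for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbar_bar = sum(xbars) / len(xbars)   # grand average
r_bar = sum(ranges) / len(ranges)    # average range

# Xbar chart limits: grand average +/- A2 * Rbar
a2 = 3 / (d2 * math.sqrt(n))
ucl_x = xbar_bar + a2 * r_bar
lcl_x = xbar_bar - a2 * r_bar

# R chart: UCL = D4 * Rbar; no LCL for n < 7
ucl_r = D4 * r_bar
```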
The IMR and Xbar-R charts use the relationship of Rbar/d_{2} as the estimate for standard deviation. For sample sizes less than 10, that estimate is more accurate than the sum of squares estimate. The constant, d_{2}, is dependent on sample size. For this reason, most software packages automatically change from Xbar-R to Xbar-S charts around sample sizes of 10. The difference between these two charts is simply the estimate of standard deviation.
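To make the two estimates concrete, here is a small sketch (hypothetical subgroups of size 5) putting Rbar/d2 next to the Sbar/c4 estimate that underlies the Xbar-S chart; d2 = 2.326 and c4 = 0.9400 come from published constant tables:

```python
import statistics

subgroups = [  # hypothetical measurements, n = 5 per subgroup
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.3, 9.7, 10.1, 9.9],
    [9.8, 10.2, 10.0, 10.1, 10.0],
]

r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)
s_bar = sum(statistics.stdev(g) for g in subgroups) / len(subgroups)

sigma_from_r = r_bar / 2.326    # Rbar / d2 (Xbar-R basis)
sigma_from_s = s_bar / 0.9400   # Sbar / c4 (Xbar-S basis)
```

With well-behaved data the two estimates land close together; the practical difference only appears at larger subgroup sizes.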
c-Chart
Used when identifying the total count of defects per unit (c) that occurred during the sampling period, the c-chart allows the practitioner to assign each sample more than one defect. This chart is used when the number of samples of each sampling period is essentially the same.
u-Chart
Similar to a c-chart, the u-chart is used to track the total count of defects per unit (u) that occur during the sampling period and can track a sample having more than one defect. However, unlike a c-chart, a u-chart is used when the number of samples of each sampling period may vary significantly.
np-Chart
Use an np-chart when identifying the total count of defective units (the unit may have one or more defects) with a constant sampling size.
p-Chart
Used when each unit can be considered pass or fail – no matter the number of defects – a p-chart shows the number of tracked failures (np) divided by the number of total units (n).
Notice that no discrete control charts have corresponding range charts as with the variable charts. The standard deviation is estimated from the parameter itself (p, u or c); therefore, a range is not required.
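For example, p-chart limits can be sketched as follows (hypothetical counts; the standard deviation comes straight from pbar, and a negative lower limit is conventionally clamped to zero):

```python
import math

defectives = [12, 15, 8, 20, 10]          # hypothetical failed units
sample_sizes = [200, 250, 180, 300, 220]  # units inspected per period

p_bar = sum(defectives) / sum(sample_sizes)

# Limits vary with each period's sample size: pbar +/- 3*sqrt(pbar*(1-pbar)/n)
limits = []
for n in sample_sizes:
    half_width = 3 * math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(0.0, p_bar - half_width)  # clamp a negative LCL to zero
    limits.append((lcl, p_bar + half_width))
```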
Although this article describes a plethora of control charts, there are simple questions a practitioner can ask to find the appropriate chart for any given use. Figure 13 walks through these questions and directs the user to the appropriate chart.
A number of points may be taken into consideration when identifying the type of control chart to use, such as:
Subgrouping is the method for using control charts as an analysis tool. The concept of subgrouping is one of the most important components of the control chart method. The technique organizes data from the process to show the greatest similarity among the data in each subgroup and the greatest difference among the data in different subgroups.
The aim of subgrouping is to include only common causes of variation within subgroups and to have all special causes of variation occur among subgroups. When the within-group and between-group variation is understood, the number of potential variables – that is, the number of potential sources of unacceptable variation – is reduced considerably, and where to expend improvement efforts can more easily be determined.
Withinsubgroup Variation
For each subgroup, the within variation is represented by the range.
The R chart displays change in the within-subgroup dispersion of the process and answers the question: Is the variation within subgroups consistent? If the range chart is out of control, the system is not stable, and you need to look for the source of the instability, such as poor measurement repeatability. Analytically it is important because the control limits in the Xbar chart are a function of Rbar. If the range chart is out of control, then Rbar is inflated, as are the control limits. This could increase the likelihood of calling between-subgroup variation within-subgroup variation and send you off working on the wrong area of the process.
Within variation is consistent when the R chart – and thus the process it represents – is in control. The R chart must be in control to draw the Xbar chart.
Between Subgroup Variation
Betweensubgroup variation is represented by the difference in subgroup averages.
Xbar Chart, Take Two
The Xbar chart shows any changes in the average value of the process and answers the question: Is the variation between the averages of the subgroups more than the variation within the subgroup?
If the Xbar chart is in control, the variation “between” is lower than the variation “within.” If the Xbar chart is not in control, the variation “between” is greater than the variation “within.”
This is close to being a graphical analysis of variance (ANOVA). The between and within analyses provide a helpful graphical representation while also providing the ability to assess stability that ANOVA lacks. Using this analysis along with ANOVA is a powerful combination.
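A toy sketch of this between-versus-within comparison (hypothetical subgroups; the third is deliberately shifted, so the subgroup averages vary far more than the within-subgroup ranges predict):

```python
import math

subgroups = [
    [4.9, 5.1, 5.0, 5.2, 4.8],
    [5.0, 5.1, 4.9, 5.0, 5.2],
    [6.0, 6.2, 5.9, 6.1, 6.0],  # shifted subgroup
]
n, d2 = 5, 2.326  # d2 for subgroup size 5

means = [sum(g) / n for g in subgroups]
grand = sum(means) / len(means)
r_bar = sum(max(g) - min(g) for g in subgroups) / len(subgroups)

# Xbar limits are derived from the *within*-subgroup variation only
half = 3 * r_bar / (d2 * math.sqrt(n))
signals = [i for i, m in enumerate(means) if abs(m - grand) > half]
```

Any subgroup average landing outside `grand +/- half` says the between-subgroup variation exceeds what the within-subgroup variation can explain.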
Knowing which control chart to use in a given situation will assure accurate monitoring of process stability. It will eliminate erroneous results and wasted effort, focusing attention on the true opportunities for meaningful improvement.


Comments
Carl, it was great…
Carl,
This was a nice summary of control chart construction. Just wanted to share a couple of thoughts that I end up having to emphasize when introducing SPC.
1) The four points mentioned for the use of the ImR chart (natural subgroup size is unknown, integrity of the data prevents a clear picture of a logical subgroup, data is scarce, natural subgroup needing to be assessed is not yet defined) do not limit its use to continuous data. Yes, when the conditions for discrete data are present, the discrete charts are preferred. When the conditions are not met, the ImR will handle the load, so I am a fan of “or ImR” at the end of each selection path for the discrete charts.
2) I agree the control limits for the Averages (might) be inflated if a Range is out of control, but if there are still signals on the Average chart, then those signals will be even greater if the limits were not inflated. Even with a Range out of control, the Average chart can and should be plotted, with actions to investigate the out-of-control Ranges.
3) Fortunately Shewhart did the math for us and we can refer to A2 (3/d2) rather than x+3(Rbar/d2).
4) Understanding “Area of Opportunity” for the defect to occur is as important as understanding sample size.
Thanks again. Great article.
Pete
Completely agree.
Thank you
Hi Carl,
compliments! A great contribution to clarify some basic concepts in Control Charts.
Good summary.
Four comments.
A. Regarding your statements: “Control rules take advantage of the normal curve in which 68.26 percent of all data is within plus or minus one standard deviation from the average, 95.44 percent of all data is within plus or minus two standard deviations from the average, and 99.73 percent of data will be within plus or minus three standard deviations from the average. As such, data should be normally distributed (or transformed) when using control charts, or the chart may signal an unexpectedly high rate of false alarms.”
Just as you were specific in describing several aspects of control charting and distinguishing between the different types, you should be specific about which charts “use” the normal distribution and which don’t.
First, the limits for attribute control charts are based on discrete probability distributions – which, as you know, cannot be normal (the normal distribution is continuous). Thus, no attribute control chart depends on normality.
Second, the range and standard deviation do not follow a normal distribution, but the constants are based on the observations coming from a normal distribution. Your statement could apply to the MR, R, and S charts. There is evidence of the robustness (as you say) of these charts.
Third, the Xbar chart easily relies on the central limit theorem without transformation to be approximately normal for many distributions of the observations.
Fourth, even for the Ichart, for many roughly symmetrical or unimodal distributions, the limits are rather robust–as you said.
B. “For sample sizes less than 10, that estimate is more accurate than the sum of squares estimate.” The d2 factor removes the bias of the Rbar conversion, as does the c4 factor when using the S-chart, so both are unbiased (if that is what you meant by accurate). On the other hand, R/d2 has more variation than s/c4.
I would use the S-chart over the R-chart regardless of the subgroup size – except possibly if the charts are constructed manually. The reason is that the R-chart is less efficient (less powerful) than the S-chart. In addition, as you indicated, the limits are constructed by converting Rbar into an estimate of the standard deviation by dividing by d2. Why estimate it indirectly – especially if software is doing the calculations?
C. “A central line (X) is added as a visual reference for detecting shifts or trends – this is also referred to as the process location.”
As with my point (A), this statement depends on the control chart. For the I and Xbar charts, the center line is the process location. For all other charts, it is not (or I am misunderstanding what you mean by process location).
A better way of understanding the center line on the chart is to recognize that each type of chart monitors a statistic of a subgroup: Xbar monitors averages, R monitors ranges, S monitors standard deviations, c monitors counts, etc. The center line is the average of this statistic across all subgroups.
Now it should be clearer that, for example, the center line of the R-chart cannot be the process location – it is the average range. Similarly for the S, MR, and all the attribute charts.
D. “1. Estimating the standard deviation, σ, of the sample data
2. Multiplying that number by three
3. Adding (3 x σ) to the average for the UCL and subtracting (3 x σ) from the average for the LCL
Mathematically, the calculation of control limits looks like: CL = average ± 3 x σ̂”
Again, to be clearer, the average in this formula (if applied generically to all control charts) is the average of the statistic that is plotted on the chart. It could be the average of means, the average of ranges, the average of counts, etc. The σ that is used in the control limits is not an estimate of the population standard deviation. It is the standard error of the statistic that is plotted. That is, it is the standard deviation of averages in the Xbar chart, the standard deviation of counts in the c-chart, the standard deviation of standard deviations in the S-chart, and so on.
There is a specific way to get this σ. Because of the lack of clarity in the formula, manual construction of charts is often done incorrectly. This is why it is recommended that you use software.
Just a couple of things:
1. The ImR (or XmR) chart is usually actually a BETTER chart to use for discrete data; the reason is that the limits for the discrete charts are not very robust to violations of the assumptions for the binomial or Poisson distributions. The XmR actually uses the empirical point-to-point variation to derive limits.
2. Once you get to a subgroup size of 8-10, the limits based on subgroup ranges are less precise than limits based on subgroup standard deviations. This is because of the limitations caused by the lack of information inherent in the ranges. This limitation has little impact when you have small subgroups, because the loss of information is less of a factor. Until you get to size 10, though, the differences between limits for Rbar and Sbar charts are too small to be of practical significance. Using Sbar, by the way, still requires the use of a bias correction factor, c4. The average of subgroup standard deviations divided by c4, by the way, is still just an estimate of the process variation. It’s no more “real” than the average of subgroup ranges divided by d2.
Read Wheeler’s “Understanding Statistical Process Control” for more, or “Advanced Topics in Statistical Process Control” for a more detailed explanation.
Control charts are very robust and not sensitive to nonnormality. The percentage of data falling within ± 1 sigma is impacted the most, but at ± 3 sigma the impact is much less. Wheeler discusses this in the book referenced. The original manuscript was 22 pages but had to be edited down for publication. Perhaps someone (even me) could do a follow-on article in the future. Control charts are so very useful well beyond traditional SPC, and thanks to software, so very easy to construct.
Figure 1 was interesting.
Let’s also not forget to remind people to react to Out of Control indications immediately.
IMO no one should be using Rbar/d2 these days.
And if they do, think about what the subgrouping assumptions really are.
d2 for sample size of 2 is near 1, while for 9 is near 3.
But what if those samples are correlated, not independent?
Then your limits can be off by 2 or 3x.
Where is the discussion of correlated subgroup samples and autocorrelated averages for Xbar charts?
If you are ASQ member, check JQT article by Woodall around 2000, with comments from all the gurus, on Issues with SPC. Montgomery deals with many of the issues in his textbook on SPC. But don’t wait to plot the dots and trend the data, just do not assume that the simple textbook methods for setting limits (and rules) are valid for your data source.
I think we need to motivate the appropriate use of SPC charts beyond “monitoring” and “analysis.” To me, the use of SPC charts, first and foremost, is to continually *improve* processes – over time. To successfully do that, we must, with high confidence, distinguish between Common Cause and Special Cause variation. We must do *that* because the *actions* we take to deal with each *are different* – and if we confuse the two we make the process’s performance worse.
SPC helps us make good decisions in our continual improvement efforts.
Wayne,
If I understand you correctly, I think we are in agreement. We as practitioners need to understand what is included in our subgroups from a process point of view, so that we can practically discern the process variation contained in the between- and within-subgroup components. Control charts are powerful tools beyond SPC. It is all about improvement, not just making pretty graphs.
To Wayne G. Fischer.
Isn’t an Out of Control indication by definition a special cause? How would you separate a special cause from the potential common cause variation indicated by the statistical uncertainty? I find your comment confusing and difficult to apply practically.
I disagree with the assertion that;
“Control rules take advantage of the normal curve in which 68.26 percent of all data is within plus or minus one standard deviation from the average, 95.44 percent of all data is within plus or minus two standard deviations from the average, and 99.73 percent of data will be within plus or minus three standard deviations from the average. As such, data should be normally distributed (or transformed) when using control charts, or the chart may signal an unexpectedly high rate of false alarms.”
As Understanding Statistical Process Control, by Wheeler and Chambers is used as a reference by the author, it is worth noting that this same text makes it clear that:
“Myth One: it has been said that the data must be normally distributed before they can be placed on the control chart.”
“Myth Two: It has been said the control charts works because of the central limit theorem.”
The last thing anyone should do when using control charts is testing for normality or transforming the data. These are robust tools for describing real world behavior, not exercises in calculating probabilities. Why remove the very things you are looking for?
Lloyd Provost and Sandy Murray discuss this idea in their book The Health Care Data Guide:
“Shewhart charts are very robust to a variety of distributions of data. Some authors have mistakenly assumed that a Shewhart chart is based on a normal distribution and suggest testing data for fit to a normal distribution prior to developing a Shewhart chart. As discussed in Chapter Four, this approach is the opposite of Dr. Shewhart’s intention to first establish the stability of data using the Shewhart chart, and then consider statistical inferences of interest (such as a capability analysis as described in Chapter Five). So the complexity in this chapter of introducing transformations is not to achieve a normal distribution. But sometimes a transformation is useful when Shewhart’s methods lead to a chart that is not helpful for learning.”
This problem is especially true with the I chart where data are severely skewed and the lower limit does not exist. Cliff
Reference: Murray, Sandra; Provost, Lloyd P. (2011-08-26). The Health Care Data Guide: Learning from Data for Improvement (Kindle Locations 6592-6597). John Wiley and Sons. Kindle Edition.
To Chris Seider,
I hope the answer lies in a broader interpretation of SPC charts that’s beyond control charts. To check for special cause presence, a run chart would always be referred to. A run chart will indicate special cause existence by way of trend, oscillation, mixture and cluster (indicated by p-value) in the data. Once the run chart confirms process stability, control charts may be leveraged to spot random cause variations and take necessary control measures.
To Chris Seider,
It seems I’m not quite right in saying that control charts would just be meant to monitor common cause variation. While a run chart will definitely highlight process stability (and special cause existence, if any), even control charts can help distinguish between common cause and special cause variation. There are rules suggested by Western Electric and Walter Shewhart to distinguish between the two causes of variation. Some of them, to identify special causes, are: i) any point outside the control limits; ii) nine points in a row at Mean +/- 1 sigma or beyond (all on one side); iii) six points in a row, all increasing or decreasing; iv) two out of three points in a row at Mean +/- 1 sigma or beyond, to name a few; the larger list is anyway there in tools like Minitab. Apologies for the inconvenience.
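As an illustrative sketch, a couple of these rules are easy to code directly (the point values, mean and sigma below are hypothetical):

```python
def rule_beyond_3sigma(points, mean, sigma):
    """Rule: any point outside mean +/- 3 sigma."""
    return [i for i, x in enumerate(points) if abs(x - mean) > 3 * sigma]

def rule_six_trending(points):
    """Rule: six points in a row, all increasing or all decreasing."""
    hits = []
    for i in range(len(points) - 5):
        window = points[i:i + 6]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            hits.append(i)
    return hits

pts = [0.1, 0.2, 0.4, 0.5, 0.7, 0.9, 1.1, 0.3, 3.6]
print(rule_beyond_3sigma(pts, mean=0.5, sigma=1.0))  # [8]
print(rule_six_trending(pts))                        # [0, 1]
```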
Can the IMR chart be used to determine an out-of-trend stability test result during the course of a drug product’s shelf life?
Kindly appreciate your help on this topic.
Yes, ImR charts can be used to detect trends. You would have to consider sampling strategy and measurement system capabilities.
Hi,
Thanks for a great post!
Could you please provide advice on the following. Every week my team and I complete x number of tasks. Over time we would like to make improvements and increase the average number of tasks that we complete.
In most uses, a control chart seems to help to keep a consistent average. Is that true?
What kind of chart could we use to show a gradual increase in the average and also show the upper/lower control limits?
Thanks, Daryn
I would start with an ImR chart. Do not recalculate the control limits after you observe stability in the first 10 or more samples. Then use the Shewhart rules to tell if there is a trend.
Hi.
I have what feels like a stupid question, but thought I would ask anyway.
When using an IMR chart (in Minitab) to track process results each week in a service environment, the average moving range and therefore control limits on the Ichart are recalculated, meaning that historical results can fall in and out of conformance as the ranges decrease/increase based on new data points being added. This feels inappropriate as how can you argue that a process is in control if you keep “moving the goalposts”. I’m no statistician and so I apologise for insulting anybody if this is a terrible question to ask!
Thanks
Matt
You should not recalculate control limits without a reason, such as a process change.
I’m interested in tracking production data over time, with an 8 hour sample size. This is discrete data. Which control chart is correct?
Dear sir,
I also learned about the Xbar chart at my university. We want to calculate the UCL and LCL, but I have a question: in the formula used to calculate these figures, is the A2 value a constant or not? (UCL = Xbar + A2(Rbar))
What are these constants? Can they be calculated? Yes, based on d2, where d2 is a control chart constant that depends on subgroup size. The d2 constant is the result of an infinite integral. Thankfully, tables out to sample size 1000 are published. L.H.C. Tippett was the first to calculate them, in 1925.
Send me your Email and I will reply with the formula. This site will not allow me to paste an equation.
this is great. It has really helped me understand this concept better.
I am working on a p-chart. My LCL is showing as negative but no data falls below zero. How does that affect the mean?
Really nice summary!
I have a question about the control limits. Why do we use +/- 3 sigma as the UCL/LCL to detect special-cause variation when we know that the process mean may shift +/- 1.5 sigma over time? The limits in the control chart must be set when the process is in statistical control. However, the amount of data used for this may still be too small to account for natural shifts in the mean. Why not use 4.5 sigma instead? It’s expensive to stop production.
Please refer to Shewhart’s original works: Economic Control of Quality of Manufactured Product and Statistical Method from the Viewpoint of Quality Control.
Dear Carl,
It is a good effort. I learned more about control charts.
I found small variation as follows:
As per the np-chart statement: the unit may have one or more defects.
As per the flow chart, “one defect per unit” is noted for the np-chart.
Kindly clarify.
Also some practical examples will provide much more clarity in real use.
Thanks,
Sathish Rosario
sathishrosario@gmail.com
Very lucid explanation. Keep writing on such topics.
If all points in the Xbar and R charts lie within the UCL and LCL, can all parts be accepted, or could defective parts still be present? Can the Six Sigma method be used to decide whether or not defective parts are present?
Control limits of a process control chart have no relationship to specification limits.
If the process is stable and in control, then process capability can be determined. Process capability is a different topic. It basically determines if, and how reliably, your process variation fits within your customer’s specification limits.
Hi Carl!
I want to ask about the np control chart for attribute data. There’s a point that lies below the LCL. Why is the point considered “out of control”? Isn’t it that the smaller the defect number, the better? Thank you.
Cheers,
Prameswari
Hi,
This summary helped me a lot but I still have questions. If I’m working in an assembly with two stations (A->B) and I’m having defectives at station A that are still reworkable and can still proceed to station B, should I plot those defectives from station A in my p-chart?
Thanks for any answers!
They have given just the number of errors and asked to calculate a c-chart.
Thank you for the good article. I have a question about when there is seasonality in the data, the trends are expected to happen and if fixed means and control limits for the entire time period are used, they will indicate false out of control alarms.
What is the best approach to build a control chart for this kind of data, can you please recommend a reference.
Thank you.
Very good!! Excellent write up.
I found difficulty in interpreting proportion of defect in this kind of data;
I have 10 subgroups, each with a different sample size. The object being inspected is a chair and there are 4 observed components per chair. I have been told that the control chart to use in this case is a p-chart with the proportion of each subgroup being total defective components/(number of chairs*4). This is what I’m confused about: what defect proportion is that? Is it the proportion of defective chairs or the proportion of defective components?
The IMR and Xbar-R charts use the relationship of Rbar/d2 as the estimate for standard deviation. What do Xbar-S charts use to estimate standard deviation? Can you please provide me the equation to calculate the UCL and LCL for Xbar-S charts using the d constants?
Thank you,
Wil
If the unbiasing constant is used (default), the formula is as follows:
UCL = c4 (ni)s + kc5 (ni)s
where:
c4 and c5 = values from a table
ni = size of the ith subgroup
k = the parameter that is specified for Test 1 of the tests for special causes, 1 point > K standard deviations from center line. By default, k =3.
s = the estimated standard deviation using sum of squares method
Dear Carl,
I am new here, your topics are really informative.I’ve been working in the quality for almost 10 years and want to pursue a career in Quality Engineering. I tried making a control chart but have doubt about it. Example: I have a KEY Diameter of 1.200 ±.001 and want to have a control chart for it. What could be the UCL and LCL?
Hi Jigs,
Think about your key diameter manufacture as merely a ‘process’.
Firstly you need to gather the experts to determine an appropriate sample size and frequency to take that is representative of the process.
You can then start plotting points of your process as a learning phase. As a rough guide, take 25 points during this phase. Once the 25 points are taken, you can calculate all of your control limits off that. For now, take no notice of the tolerance limits. They will be considered for capability after you have a stable process.
Once you have your limits, lock them in place, continue to run your process using these limits and look out of assignable causes. If an assignable cause arises, you need to understand why.
You’ll find that perhaps the process changes mean or spread, either by an improvement or something has changed that is detrimental to your process.
To recalculate limits, you must know the answer to these four questions:
1. Is there evidence of assignable cause?
2. Is the cause known?
3. Is the change in process improvement desirable?
4. Is the change sustained?
I hope this helps.
What is the rationale for selecting six points for a trend and eight for a shift? Is there any reason behind these tests?
The likelihood of that happening. You can assure yourself by assuming a normal or near-normal distribution and using binomial probability.
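For instance, the chance that eight independent points in a row all fall on the same side of the center line of a stable, symmetric process is easy to work out:

```python
# Each point falls above or below the center line with probability 1/2;
# "all above" and "all below" are the two ways the run can happen.
p_same_side_8 = 2 * (0.5 ** 8)
print(p_same_side_8)  # 0.0078125
```

A false alarm rate under one percent is roughly comparable to that of the 3 sigma rule itself, which is the usual justification for run lengths in this range.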
Can you help me with this question? How to solve it?
Company X produces a lot of boxes of caramel candies and other assorted sweets that are sampled each hour. To set control limits that include 95.5% of the sample means, 30 boxes are randomly selected and weighed. The standard deviation of the overall production of boxes is estimated, through analysis of old records, to be 4 ounces. The average mean of all samples taken is 15 ounces. Calculate control limits for an Xbar chart.
You are not describing a control chart. Control limits are +/- 3s (I am a purist). It seems that you are describing a bastardized chart that will cause you to mistake common cause for special cause. This will lead to tampering – adjusting without special cause – and increasing variation. Look up Deming’s tampering rules.
What you might really want is a mean with a 95.5% confidence interval. This requires the assumption of normality.
I want to ask this question; please explain it to me.
If all values of Xbar are close to the central line and none are near the 3 sigma limits – in fact, when you draw one sigma limits, all the points fall within those narrow limits – this is called hugging.
Would such a chart make you suspicious that something was wrong? Why?
What possible explanations occur to you that might account for an Xbar chart of this type?
That condition is unlikely to happen naturally. Something caused a change to the system behaviour.
Very concise and complete explanation. Thanks Carl.