Chip Hewette

Forum Replies Created

Viewing 49 posts - 1 through 49 (of 49 total)
• #83294

Chip Hewette
Participant

Thank you for the kind words.
No.
It is not always simple.  I felt it best for the original questioner to think about the customer view of a delivery.  Although 36 / (7200 * 8) sounds like a great performance, in my view any defect is a complete failure of one delivery.  There is only one opportunity to satisfy the customer, when there are many opportunities to make a mistake.
Perhaps my less-wordy response here will suffice to minimize the convolution.

0
#83289

Chip Hewette
Participant

One may interpret DPMO as defects divided by (events x opportunities) but this can understate the rate.
If one is making a delivery and (a) the product puts a hole in the owner’s wall on the way down the hall and (b) is dropped by the delivery team at the living room, how many defects exist?  There may be eight opportunities for a defect to exist on one delivery, but in this case you have lost (damaged) the product as well as damaging the customer’s home.
I suggest that you calculate the DPMO based on the Boolean “OR” premise.  Any defect during a delivery means the delivery is defective!  36 defects divided by 7,200 deliveries, times 1 million, would be the calculation if each defect occurred alone.  If multiple defects occur on the same delivery, the DPMO would diminish, but not by the factor of the 8 opportunities per delivery.
I suggest that if multiple defects exist these are great candidates for a chi-square analysis by product, delivery team, region, or whatever, in which you compare the count of “really bad” deliveries by appropriate subgroups.  From the available data you don’t have enough to do serious “A”nalysis, but maybe after you calculate the baseline DPMO you can collect or use more historical data and see if there are reasons for such occurrences.
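A minimal sketch of the two calculations, using only the numbers quoted in the thread (36 defects, 7,200 deliveries, 8 opportunities per delivery):

```python
# DPMO two ways, using the thread's numbers. Values are illustrative only.
defects = 36
deliveries = 7200
opportunities_per_delivery = 8

# Per-opportunity DPMO: understates severity when any defect ruins a delivery.
dpmo_per_opportunity = defects / (deliveries * opportunities_per_delivery) * 1_000_000

# "Boolean OR" view: a delivery with any defect counts as one defective delivery.
# If each of the 36 defects occurred on a different delivery:
defective_deliveries = 36
dpmo_per_delivery = defective_deliveries / deliveries * 1_000_000

print(dpmo_per_opportunity)  # 625.0
print(dpmo_per_delivery)     # 5000.0
```

The gap between the two numbers is the point of the post: per-opportunity accounting makes the process look eight times better than the customer experiences it.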

0
#83257

Chip Hewette
Participant

I assume the results are something like this:

y1 = ax1 + bx2 + cx3 + 0*x4
y2 = 0*x1 + bx2 + cx3 + 0*x4
y3 = 0*x1 + 0*x2 + cx3 + dx4

Please keep in mind that for a factor “x” to be significant does not automatically mean that it has a massive effect on the output.  If the measure y can be evaluated closely, and you have done enough replication, you may have some significant terms that have large coefficients and some that have small coefficients.  Evaluate the impact of the factors within each equation first.

Now, consider interactive effects.  If you have two factors that are significant, can they interact?  Have you tested for interactive effects?  Perhaps you have a simple situation where factor a is only important when factor b is at a certain level.  Such interactions will go a long way in setting the better level or concentration of each factor.

With additional info, you can evaluate the second and third y measurement equations.  Perhaps some of the combinations will be defined and your search for optima will be simplified.

Often outside information is brought to bear on troubling questions.  Perhaps the cost of factor A is exorbitant?  Maybe it should be minimized, even though it affects measure y1, so that a cheaper factor b can be used to boost y1 to an acceptable level.
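The interaction point can be illustrated with an ordinary least-squares sketch; the data and coefficient values below are entirely made up:

```python
# Sketch: a factor can have a small main-effect coefficient yet matter a lot
# through an interaction. All data here is simulated, not from the post.
import numpy as np

rng = np.random.default_rng(0)
n = 40
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
# True model: x1 matters mainly when x2 is high (an interaction).
y = 0.2 * x1 + 0.1 * x2 + 1.5 * x1 * x2 + rng.normal(0, 0.05, n)

# Design matrix with intercept, main effects, and the x1*x2 interaction term.
X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Small main-effect coefficients beside a large interaction coefficient say
# the factors matter mostly in combination.
print(dict(zip(["intercept", "x1", "x2", "x1*x2"], coef.round(2))))
```

Fitting with and without the interaction column is a quick way to test whether "factor a is only important when factor b is at a certain level" applies.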

0
#83141

Chip Hewette
Participant

Can you provide us with more info?  What was the proportion of “fail” within the 20 samples?  Do you have adequate inference space to state that the inspectors had opportunity to find “pass” and “fail” parts?
What is the likelihood of part failure in production?  Does its frequency of occurrence mean the inspectors would likely be ‘bored’ and miss the random failure?  Or, do failures occur with some regularity?  How would you structure the measurement system study to consider this?
What variation are you most concerned about?  Due to production requirements do you have many inspectors?  If so, do you fear that one of the nine might have a different interpretation of the true part condition?  If so, keep the AR&R large, and involve all nine inspectors.
If the part is so difficult to inspect that one inspector might call it “pass” one time and “fail” another time, I don’t think two observations by each inspector will find such difficulties.

0
#83066

Chip Hewette
Participant

First, DPMO is merely mathematics, in my view.  Transitioning your company to a nonconformance reporting method of “nonconformances per million opportunities” is a way to keep the fire lit under folks who think that 99% is good.  I don’t understand the $10K cost to calculate DPMO.
Second, this is ambitious.  I would hazard a guess that many processes exist that no one is aware of.  A good starting point is a set of process maps for families of products.
Third, there are some breakthroughs in quality that can occur when processes are identified, mapped, aligned and controlled.  One piece flow is one way to prevent defects from accumulating.  If you can map the processes, streamline them, control them, and create one piece flow you may be heading towards your goal.

0
#83007

Chip Hewette
Participant

First, think through the 4,000 data points.  That’s a lot of info!  Chances are very high that you have many contributing causes to the measured response.  Assigning all 4,000 responses to one proposed factor is inappropriate in my view.
Your thought to simplify the data structure is mathematically suspect, as related by the earlier response to your question.  However, subdividing the data is indeed where I would go…except I would subdivide by additional elements of interest.  What other things are affecting the response?  Day of the week?  Outside temperature?  Shift of the factory?  Operator?
I suggest a team approach to identifying likely causes and a historical data exploration along those lines first.  Based on data type, your analyses could be multiple linear regression, ANOVA, chi-square, or logistic regression.
If there is any interest in performing the simple linear regression on the 4,000 data points one can simply take a random sample of that dataset and look at the scatter plot.  Take 10% of this dataset randomly and see what you see.  Chances are high that the results will be no different, but easier to display to your audience.
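A minimal sketch of the 10% random-sample idea; the data below is an invented stand-in for the 4,000 points:

```python
# Taking a random 10% sample of a large dataset before plotting, as suggested
# above. The (x, y) pairs are simulated placeholders, not real data.
import random

random.seed(1)
data = [(x, 2 * x + random.gauss(0, 5)) for x in range(4000)]

sample = random.sample(data, k=len(data) // 10)  # 400 of the 4,000 points
print(len(sample))  # 400

# 'sample' is now small enough for a readable scatter plot, e.g.:
# plt.scatter([p[0] for p in sample], [p[1] for p in sample])
```

Because the sample is drawn at random, its scatter pattern and fitted slope should match the full dataset closely while being far easier to display.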

0
#83006

Chip Hewette
Participant

First, link your quest to external customer requirements.  Then, think through observed customer-described nonconformances such as customer returns…what reasons exist for customers to return?  Can you pareto these reasons?
Second, consider a fault tree.  Can you logically describe likely failures?
Third, consider a formal FMEA.  Will this identify likely failures?
These tools may expand your list of things that can go wrong.
Now, think through the number of opportunities, keeping in mind that some failures cannot occur downstream of earlier failures.  Don’t double count the opportunities.
I cannot see the logic of only counting things that must go right.  Mathematically this would understate the area of opportunity.  Logically it would seem that all activities must go right to make the assembly.

0
#82980

Chip Hewette
Participant

I would suggest a different approach to your training.  Begin with a brainstorming session using a Post-It note technique.  Ask people to write what they dislike or find ineffective about the way your company solves problems on Post-Its.  Spend about 8 to 10 minutes in writing.  As each person writes a note, have them hold it over their head.  Walk to that note, take it, and read it aloud.  Most of the time others will crumple up a Post-It they were writing as the thought was the same.  Collect all the Post-Its on a large surface, reading them as you collect.
Then, post five large Post-It flip charts on the walls, with the DMAIC information clearly visible.  Each chart should be labeled by the phase letter and simple statements about that phase.  Organize the Post-Its onto the phase of Six Sigma that eliminates that problem.  Talk through how DMAIC attacks all the issues.
In this way, you’ve engaged the audience and learned from them how ineffective efforts have been.  Their own complaints being solved by DMAIC will hopefully guide them to a reasonable conclusion, that Six Sigma can help us.
Issues you may hear include:

No clear direction from the top
Haven’t we worked on this before?
Can’t tell if we’ve solved the problem
Tried solutions but later the problem recurred
Couldn’t figure out what we are doing today anyway
Don’t know who is responsible to fix the problem
Don’t know how to keep the process in control
And on, and on…
Let your audience really paint a gloomy picture of the chaotic problem-solving efforts themselves.  Then, present DMAIC.

0
#82952

Chip Hewette
Participant

I am not confused.  Especially not with any facts! ;)
Your last paragraph mirrors what I tried to say.  Distribution type was immaterial to my thinking.  The original question about use of non-normal data points to a real need to understand what is going on, not the distribution type, as the measurements are clearly at risk.

0
#82939

Chip Hewette
Participant

Why would the data from a gage be abnormally distributed?  Only if there were special causes of variation!  One can choose to ignore data containing special causes, but this needlessly confuses the situation.
How do you know the data is not normal?  Can you state with high confidence that one or more of the measurements are unusual?  Can you identify a cause for this unusual nature?  If so, should this cause not be eliminated prior to using the gage?
Consider the sources of variation within your measurement system.  How the part is secured?  How the gage is zeroed prior to use?  The temperature of the room?  How the operator interprets the numbers?  A difficult dimension to acquire using the gage?  Are any of these reasons likely to be the cause of abnormal data?
Classic GR&R methods require that the data be acquired and evaluated against the specification.  If the variation exceeds a percentage of that specification width, the gage must be improved.  This is a preliminary DMAIC project within the “M” of the original project that will likely identify the special causes to be eliminated.

0
#82900

Chip Hewette
Participant

If you are testing six of method A vs. six of method B, creating six replicates of each factor level, it is better to evaluate each individual measurement for ‘quality.’  What is the range of values for the six observations?  Is this range as expected?  Is one value way different from all the others?  Why?  The MBB is correct in a sense, that the observations must all be of high quality and truly represent the factor.  This is not the same as requiring all six observations to fit on a normal distribution line.

0
#82881

Chip Hewette
Participant

One should always seek to link upstream processes with downstream measures through proper experimentation.
If upstream process A is allowed to vary naturally, determine the bounds of that natural process.  Then, create samples or choose samples at those bounds.  How do components with these upstream values affect downstream measure B?  Use statistical experimentation techniques to identify if these levels create a meaningful difference in measure B.
Without evidence of the long-term process average and range, setting limits can be tricky.

0
#82875

Chip Hewette
Participant

Marc…any chance you worked at Copeland?

0
#82869

Chip Hewette
Participant

It appears that you are reviewing the GR&R output from a statistical package, and looking at the components of variation attributed to (a)repeatability, (b)reproducibility, and (c)part-to-part.  When the failed part is measured at zero volts, and nine other parts are measured at 15 vDC, the variation in the measurement system attributed to part-to-part variation is quite high.  Of course, the variation for R&R then goes down, as all must sum to 100%.  A GR&R at 70% with the failed part removed indicates to me that you have not put the specification for voltage in the software, and that you are seeing only the proportion of observed variation, not the percentage of the specification width absorbed by measurement system variation.
To evaluate the ATE, one must acquire a set of parts having a reasonable part-to-part variation.  If ZERO volts is reasonable, in that your component failure rate is such that the ATE would see ZERO, then you can include this type of failed part in the study.  However, in so doing you are reducing the likelihood of either a repeatability error or a reproducibility error, as most measurement systems can reliably detect a ZERO.  This is not studying the discrimination of the ATE properly.  The ability to detect a failure must be evaluated against a binomial distribution, and not against some variable measurement where the output is far above zero.  Therefore failure measurements (ZERO vDC) should be tested using another method.  A good Quality Manager would have two numbers to describe the ATE.  One would describe the GR&R for the 15 vDC specification.  One would describe the gage’s ability to detect a failure 100% of the time.
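One common way to put a number on the “detect a failure 100% of the time” claim is a zero-failure success-run bound; this framing, and the example numbers, are my suggestion rather than something from the original post:

```python
# Success-run sketch: if the ATE correctly flags n out of n known-failed
# parts, what detection probability can we claim with confidence C?
# With zero misses, the lower one-sided bound solves p**n = 1 - C.
def detection_lower_bound(n: int, confidence: float) -> float:
    """Lower bound on detection probability after n successes in n trials."""
    return (1.0 - confidence) ** (1.0 / n)

# Classic result: 59 consecutive detections support "at least 95% detection
# probability" at 95% confidence.
print(round(detection_lower_bound(59, 0.95), 3))  # 0.95
```

This gives the Quality Manager the second of the two numbers described above: a statistically defensible statement about failure detection, separate from the variable GR&R at 15 vDC.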
If you are following the traditional method of ten good parts evaluated by three operators randomly, then you can plug and chug through the variation estimates.  The percent GR&R reported is only valuable with respect to a specification width, e.g. 15 vDC plus or minus 0.25 vDC.

0
#82815

Chip Hewette
Participant

Consider this free-lance consulting, but don’t do it for free.  People don’t value things that are free.  They value a ‘deal.’
It would be very difficult to do your first project for a company or industry with which you have zero familiarity.  In your job transition, think through your expertise and develop a list of target companies having that need.  Don’t try your first BB project and attempt to learn a new technology.
Once you have your BB wheels, then you can continue the consulting in other industries.

0
#82665

Chip Hewette
Participant

Is there one and only one defect (nonconformance) possible for each item checked?
Is there an automatic gage that determines if an item is defective or not?
Are you checking the quantity of rejects in an off-line collection point?
Are you unaware of the exact production quantity for that number of rejects?

0
#82664

Chip Hewette
Participant

Amen on looking for out-of-control points on the R chart.  No question that the average doesn’t tell it all!

0
#82650

Chip Hewette
Participant

Suggest you look at titles on Bank of America job postings.

0
#82519

Chip Hewette
Participant

Please discuss the risk to production quality with the business leaders before making any changes.  What would happen if the gage calibration interval were expanded, and a gage were found out of calibration after this longer period of time?  What financial impact would this have?  What ‘safety nets’ exist in current gaging practice?  What is the reaction plan for segregating components, subassemblies, or final assemblies?  What lot control do you have to be able to recall items at risk?
Key question is not how to save money, but how to ensure customers are delighted.

0
#82509

Chip Hewette
Participant

First, I hoped that studying the time until stability occurs would allow the process owner to set a simple, factory-proof work instruction that parts prior to this time were to be segregated, inspected, re-inspected, scrapped, or whatever.  Factories are not always the best place to be wishy-washy or unclear with work instructions.
Second, with regard to time to stability, it is often best to transform the data using a logarithmic approach.  Time is of course bounded by zero (unless you watch too much Star Trek).  So, if you have a set of time data it is naturally skewed.  If one calculates a standard deviation based on skewed data, interpretive errors could be made.  The log transform allows the data to be evaluated for normality.  If the data were lognormally distributed, one could infer that, within the realm of sampling, the process was subject to many random events that made it unstable.  If, however, the data showed abnormality, the process owner could seek to make improvements by eliminating special causes.
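A minimal sketch of the log-transform approach on simulated time data (the distribution parameters and the 3-sigma cutoff are assumptions, not real measurements):

```python
# Log-transform time-to-stability data before judging normality, as described
# above. The 30 times here are simulated from a lognormal for illustration.
import math
import random

random.seed(2)
times = [random.lognormvariate(1.0, 0.4) for _ in range(30)]

logs = [math.log(t) for t in times]
mean_log = sum(logs) / len(logs)
sd_log = (sum((v - mean_log) ** 2 for v in logs) / (len(logs) - 1)) ** 0.5

# If the logs look normal, back-transforming mean + 3*sd gives a defensible
# "segregate parts made before this time" cutoff for the work instruction.
cutoff = math.exp(mean_log + 3 * sd_log)
print(round(cutoff, 2))
```

A normality check on `logs` (a normal probability plot, for example) is what separates "many random causes" from "special causes worth a DMAIC project."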
Third, I hoped that DMAIC could provide an answer as to the instability.  If it is unstable at the beginning, why would we assume that it is always stable later?  If we don’t know what makes it unstable, how can anyone say “Oh, we just know it is stable now.”
My recommendation in these situations is to stand in front of a mirror and pretend you are talking to the end customer.  If you can’t explain it to that person, seek to make the process understood.

0
#82508

Chip Hewette
Participant

I was just seeking to clarify the original question with my statement that a value for sigma is not on the X-Bar / Range chart.  It is not calculated from a single subgroup on this type of chart.  I am aware of the two graphs on X-Bar and Range charts.

0
#82499

Chip Hewette
Participant

Could you clarify what you mean by “shift in sigma?”  I’m unaware of a sigma value on the X-Bar/Range Chart.
If you are asking “how do I determine if what I did made a difference?” you may need to use another tool.  Charts are often slow to respond to experimentation.
If you have a long run chart showing X-bar and R-bar, from which you gather that the process sigma is estimated by R-bar / D2, this information can be used to see if you have effected a meaningful change.  However, experiments are best used with replication within the experiment to determine control limits for the effects.
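The R-bar / d2 estimate can be sketched as follows; the d2 constants are the standard tabled values, and the sample numbers are invented:

```python
# Estimating process sigma from the Range chart: sigma ~ R-bar / d2.
# d2 depends on subgroup size; a few standard tabled values are listed here.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def sigma_from_rbar(r_bar: float, subgroup_size: int) -> float:
    """Within-subgroup sigma estimate from the average range."""
    return r_bar / D2[subgroup_size]

# Illustrative numbers: subgroups of 5 with an average range of 4.65.
print(round(sigma_from_rbar(4.65, 5), 3))  # 1.999
```

This estimate reflects only within-subgroup variation, which is why replication inside a designed experiment is the better route for judging whether a deliberate change had an effect.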
If you are asking “how do I know if moving the control knob on my process made a difference using a control chart?” one has to make judgments.  Are the parts within external customer specification?  Are the parts in conformance with contractual requirements?  Can the machine (process) run safely at the new setting?  If so, one can practice an evolutionary approach by slowly changing the process knob.  Do a search on EVOP to learn more about this technique.  You won’t be able to prove a difference occurred within one subgroup, but you will over time.

0
#82477

Chip Hewette
Participant

An unstable process can still operate within customer specifications.  Does yours?  It sounds like you are worried that some parts will not be in specification.
100% inspection is not always effective.  Sampling is not appropriate to find random errors.  Think of the statistics.  If we measure every tenth part, we have a 1 in 10 chance of finding a random event.  We could close our eyes and say “everything in between these two parts must be OK” but I sure wouldn’t want to be the customer!
I would suggest the following:
a.  Measure the time when the process stabilizes for the next thirty processes.  Calculate a mean and a lognormal standard deviation for this time data.  Evaluate.  Is the set of numbers lognormally distributed?  If so, what is the maximum?  Can you infer that parts produced after this maximum time are from a stable process?
b.  Many fundamental laws of the universe may be forcing the initial instability, but consider a DMAIC project on just this facet of production.  What are the likely causes for instability?  Can you control these at all?  Can you reduce the time of instability?
c.  Inspect the parts until stability commences!

0
#82397

Chip Hewette
Participant

I hope you can succeed!
Have you followed DMAIC in this project?  Poka-yokes are great, but have you identified the issues at hand?  Pardon me if you’ve already considered the following…
Missing dimensions are often found when the drawing reaches the fabrication shop.  Who are other customers of the drawing?  What do they complain about?
Be careful not to listen to everyone who says “This error happens all the time.”  Most humans are very good at noticing a “pet” error but also very good at ignoring other errors.
Collect data!
Since you have ONE drawing with multiple potential errors, be sure to use the correct distribution to study the data.

0
#82395

Chip Hewette
Participant

Are you speaking of validation of a device or medicine to be equivalent to a currently used device or medicine?
For example, let’s say a currently used device can detect 10 cases of cancer when there are truly 12 cases of cancer.  Based on this poor performance someone makes a new device to find the 12 true cases of cancer.  Are you looking for information to calculate the minimum sample size to determine if the new device’s ability to find cancer is at least equal to 10 out of 12?
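One way to see why the question needs sharpening: an exact binomial confidence interval around 10 detections in 12 true cases is very wide.  A brute-force sketch (this framing of the problem is mine, not the poster’s):

```python
# Exact (Clopper-Pearson style) 95% interval for 10 detections out of 12
# true cases, found by scanning candidate sensitivities. Shows how little
# 10-of-12 actually pins down the old device's performance.
from math import comb

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

n, k = 12, 10
grid = [i / 10000 for i in range(1, 10000)]
lower = min(p for p in grid if 1 - binom_cdf(k - 1, n, p) >= 0.025)
upper = max(p for p in grid if binom_cdf(k, n, p) >= 0.025)
print(round(lower, 2), round(upper, 2))
```

With the plausible sensitivity of the old device ranging from roughly 52% to 98%, "at least equal to 10 out of 12" has to be turned into a specific hypothesis before a minimum sample size can be computed.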

0
#59727

Chip Hewette
Participant

Are you speaking of validation of a device or medicine to be equivalent to a currently used device or medicine?
For example, let’s say a currently used device can detect 10 cases of cancer when there are truly 12 cases of cancer.  Based on this poor performance someone makes a new device to find the 12 true cases of cancer.  Are you looking for information to calculate the minimum sample size to determine if the new device’s ability to find cancer is at least equal to 10 out of 12?

0
#82266

Chip Hewette
Participant

In the DMAIC flow of events, it may be necessary to present the Gage R&R as something of great importance to the internal or external customer.  To present the results, try to simplify by showing (1) the item being measured, (2) the specified dimensions, and (3) the proportion of the width of the specifications absorbed by the variation of the gage.  Add a reference line showing either 20% or 30% as a boundary, and declare the measurement system “Good” or “Bad.”
If the measurement system is bad, perform a DMAIC project on the measurement system.  Then, return to the first project.

0
#82265

Chip Hewette
Participant

Control charts don’t have to have a single X double bar value.  Control can be monitored as the average moves continuously lower and lower.  The key is to understand the “expected” shift downwards, and alert the operator when the individual value or subgroup average exceeds the control limits based on the expected trend downward.
Understanding the likely causal factors of (a) raw material metallurgy, (b) cutter metallurgy, (c) cutter geometry, (d) part rotational speed, (e) cutter feed, and (f) phase of the moon (ha!) may be of the most benefit.  One or more of these factors may cause a sudden shift away from the expected length dimension.
If you monitor and control these factors, you may be able to simply put a rule in place to change the cutter every xxx parts, or every so many hours.  If you put a tool resharpening program in place (or insert changeover plan), you can create a kanban (container) of ready to go cutting tools so that the line operator is never waiting for tooling.
It would be helpful to have a simple cutting tool length gage on hand, so that the required offset can be read off of a dial indicator.  If you make the gage so that it references one side of the length spec, you may be able to create all positive offset values.  Then, the work instruction would read “Cut xxx parts.  Remove cutting tool.  Obtain new cutting tool.  Gage.  Key in offset value in line 1245 of program Z.  Load tool into holder.  Make five parts.  Gage part for length.  Confirm offset.  Run xxx parts.”

0
#82235

Chip Hewette
Participant

I suggest that a practical review of the engine crankshaft tolerances be performed first.  It is not always possible to have an operating engine with all combinations of part tolerances.  Putting component tolerances in any orthogonal array can be a disaster, in that the engine would self-destruct if certain part dimensions combine.
It makes more sense to me to study the dimensional relationships than to study the part dimensions.  If a given range of rod journal to crank journal clearance is acceptable for noise, performance, and durability, part dimensions and tolerances can be calculated to achieve the range.
By studying the functional relationships, you could also study the effect of having one or more “bad” clearances on a multi-cylinder engine.
If disassembly is not possible, you may need to consider bench tests of the components before investing in the engine build.

0
#82230

Chip Hewette
Participant

Home Depot is a prime example of a full commitment to Six Sigma in retailing.  There are other companies considering Six Sigma.

0
#82153

Chip Hewette
Participant

Although I’ve never read the cited work, I have read Dr. Wheeler’s book Understanding Variation, and sat in a private seminar given by Dr. Wheeler at my employer.  Is this like staying at a Holiday Inn Express?
Perhaps the meaning is simply that converting from Cpk to dpmo for the population is inaccurate unless the underlying distribution of the population is known, not assumed.
I don’t think that Dr. Wheeler believes that the areas under a distribution curve are unknown, but that people make erroneous assumptions about which curve to apply.
For example, time and time again I have seen BBs attempt to use a normal distribution on a time measurement.  The assumption that time follows a normal distribution is in error, but the BBs go merrily on calculating a Z-score and ignoring the obvious.

0
#82148

Chip Hewette
Participant

You might contact Dr. Robert Mees at the University of Tennessee, Knoxville to inquire about master’s theses in continuous improvement.

0
#82132

Chip Hewette
Participant

Maybe…
Develop through background research the set of events / treatments to treat the ‘defect.’
Study the variety in treatments (some doctors use one medicine, some use another, etc.).
Establish, using normal quality chart methods (Pareto, histogram), the ‘normal’ response to the defect.  Obtain buy-in from champion and process owner that this is the ‘normal’ or ‘nominal’ treatment set.  Explain that by learning the normal treatment set you are going to be able to create the best estimate for bottom-line costs and target a meaningful reduction.
Talk to Finance.  Obtain the cost for each treatment, separately.  Aggregate each treatment into a set cost.  Cross-check with the researched amount of $25K to $50K.
Study the ‘defect’ rate historically.  Ensure that the correct distribution assumption is made.  Don’t use a normal distribution if a Poisson is better.
Link the defect rate with the set of treatments and associated costs.  Calculate the estimated annual loss to the health care provider.
Get historical data on the number of each separate treatment per unit time, as some of these may not be related to the ‘defect.’
Normalize the historical data vs. admissions, seasonality, etc.
Make process improvements.
Track the number of each separate treatment in the hospital, keeping in mind the expected values based on admissions, seasonality, etc.
Watch the ‘defect’ rate, as well as the separate treatment occurrences per unit time.  If your improvements are real, those treatments associated with the defect should go down.
Calculate the cost reduction based on the new rates.

0
#82118

Chip Hewette
Participant

If you have a sponsor who would appreciate a structured improvement project within his span of control, ask that person.  A green belt or black belt can uncover potential projects, but should not be the project owner.  He should be the project leader, of course, but not the owner.  What questions does the sponsor face to which there are no apparent easy answers?  What areas within that span of control have historical measures?  What areas can be tied to a financially verifiable measure, or one relating to external customer satisfaction?  Does the sponsor have the ability and authority to move obstacles out of the black belt’s way?
Without a project sponsor or champion, one is swimming upstream!

0
#82083

Chip Hewette
Participant

There is an absolute maximum length of time your company desires to spend on a design project.  As you stated, there is no lower limit, aside from the practical space-time continuum boundary.  Some internal customer, perhaps the VP of Engineering, could set the desired maximum length of time.
Granted, each project may fit into various types, ranging from breaking fundamental laws of the universe to simple modification of an existing product.  For each type of project, there is a maximum time limit.  There are probably three or four types of projects in your department.
For that arbitrary USL, based on the needs of the company and its external customers, one can compare past historical project durations.  Chances are some projects exceed the USL.  What proportion of projects exceed that USL?  This can be stated in DPMO.  Although years ago, no one was attempting to meet some USL, the historical data would be quite useful in establishing a benchmark.
From the DPMO, if desired, one could calculate a sigma level.  In this case, it would seem to be ‘long term,’ and it would also appear that ‘short term’ does not apply.
If one simply reports the DPMO of project durations (by type of project), the team can set reasonable 6S improvement goals.  One can easily quantify the dollar savings based on the DPMO by applying a cost per unit time to the project.  This cost is likely a weighted average of engineer cost, technician cost, overhead cost, etc.
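The DPMO-versus-USL idea can be sketched with invented project durations; the USL, the durations, and the no-shift sigma convention are all my assumptions:

```python
# DPMO of project durations against an arbitrary USL, plus the long-term
# sigma level it implies. All numbers below are invented for illustration.
from statistics import NormalDist

durations = [38, 52, 61, 45, 70, 33, 58, 49, 64, 41]  # weeks, hypothetical
usl = 60  # weeks, the internally agreed maximum

defects = sum(d > usl for d in durations)  # projects exceeding the USL
dpmo = defects / len(durations) * 1_000_000
print(dpmo)  # 300000.0

# Long-term sigma level: the Z value leaving this proportion in the tail.
# (Some practitioners add a 1.5-sigma shift convention; omitted here.)
z_lt = NormalDist().inv_cdf(1 - defects / len(durations))
print(round(z_lt, 2))  # 0.52
```

Multiplying the over-run weeks by a weighted cost per week, as the post suggests, then turns the same data into a dollar figure for the annual review.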
From a DPMO and $ point of view, each year’s review of the department would demonstrate the impact of projects.

0
#82004

Chip Hewette
Participant

Cpk is often used to give people a sense of ‘goodness’ about a production process.  Calculating Cpk based on a single sample from a process is, in my view, an incorrect use of the Cpk calculation.  If one is lucky, and all the parts are similar, the Cpk looks good.  Later, when the process drifts, the customer calls and says “Why are you sending me bad parts with a Cpk of 1.44?”  This is not an enjoyable conversation.
Cpk is derived from the estimate for the standard deviation, and in control chart methods is based on R bar.  One must have enough subgroups to calculate R bar.  36 samples, of themselves, don’t really show the process owner all the sources of variation.
I suggest that you develop a control chart suitable for the process and key measure, and subgroup properly to calculate R bar.  With enough time, you may see special causes that can be fixed prior to presenting a customer with a Cpk value based on a reasonable length production run.
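For reference, the control-chart-based Cpk calculation looks like this; all numbers below are invented:

```python
# Cpk from control-chart statistics: sigma estimated as R-bar / d2, then
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma). Numbers are illustrative.
d2 = 2.326  # tabled constant for subgroups of 5

x_double_bar = 10.2  # grand average across subgroups
r_bar = 0.93         # average subgroup range
lsl, usl = 9.0, 12.0

sigma = r_bar / d2
cpk = min(usl - x_double_bar, x_double_bar - lsl) / (3 * sigma)
print(round(cpk, 2))  # 1.0
```

The point of the post stands in the formula: R-bar only exists once there are enough subgroups over enough time, so a Cpk from one grab of 36 parts has no honest sigma behind it.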

0
#81998

Chip Hewette
Participant

Bank of America has 6S.

0
#81993

Chip Hewette
Participant

In a meeting, I once tossed a coin eight times and got heads every time.  Did not succeed in demonstrating randomness!

0
#81918

Chip Hewette
Participant

1.  What are you measuring within the 1264 accounts?
2.  What do you want to know about that measurement?  The average value for the measurement from a sample within 1264 accounts?
3.  Do you want to detect a 5 percentage point shift from an average historical value to a ‘new’ value?

0
#81916

Chip Hewette
Participant

As usual, it depends.  Diesel efficiency and emissions are affected by the fuel injection system.  Direct and indirect injection systems exist.
Diesels are ignited by extraordinary compression of a ‘perfect’ gas, the temperature of which is raised to the point where atomized fuel ignites.

0
#81904

Chip Hewette
Participant

There is an old Andy Griffith episode where a new deputy takes all the past data and predicts that a certain law would be broken near midnight on a certain date.  The episode pokes fun at science and statistics, with good reason.  It is very difficult to make predictions!
I assume your past data contains information that some would consider ‘predictors’ and ‘responses.’
If you study the responses first, using a time-based view, you can evaluate if the responses have any sort of distribution.  An individuals and moving range chart is a good starting point.  Chances are the responses will be quite random, and have a wider distribution than first imagined.  If necessary, transform the responses to ‘difference from expected’ values to see if the differences are normally distributed. Understand your response data first.
If you have many predictors associated with each response, some modeling could be useful.  However, modeling without good business process knowledge leads to erroneous conclusions.  Some predictors can actually be responses to business conditions, yet when put in the analysis as a predictor they appear very, very important.  (Lack of knowledge about the business leads some analysts to put data in the wrong category.)  Of course they would appear to be very important, as they are essentially the same as the response data.  Be very careful!

0
#81902

Chip Hewette
Participant

Your Gage R&R must encompass all sources of measurement variation.  In a visual test, operator skill and judgment are key.  How many “skilled operators” do you have on staff?  If only one, who is the backup operator?  Who is the engineer that ordered the equipment, or the process engineer of the department using the gage?  These may be suitable operators for the study.  You may use more than three operators.
If a defect is detected at 50 to 80% of the screen height, it seems that the true test of the gage is where each operator determines that boundary.
Is there any way to create gage calibration parts at 40%, 50%, 60%, 70%, and 80% of screen height?  Using a range of values would give the study far more credence.  If, for example, half of the time operators can’t tell the difference between 40% (good) and 50% (bad) the gage is worthless.  Similarly, if half of the time operators call 60% good and half of the time they call 60% bad, the gage is worthless.
In this case, I would prefer five parts of various screen heights measured by at least two operators at least three different times.  I would do a Monday study (early in the morning), Wednesday study, and Friday study (late in the day).  I would NOT mark the parts in a way that makes any link with their supposed ‘true’ value.  I’d attempt to label the parts as if they were production parts, with a production tag, and each submission would have a different part number (or whatever).
The key to a good measurement study is asking the right questions.  It is not about filling in the blanks on the gage R&R form.  What do you really care about?  What are the sources of variation in the measurement?  How can you capture those sources of variation in your study?  How can you use this study to help your customers (internal and external) trust the gage for production use?
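To make the scoring concrete, here is a rough Python sketch of how such a blind, multi-trial study might be tallied.  The blind tags, calls, and reference values are invented for illustration, not real study data:

```python
# Score an attribute gage study: each operator judges each part
# (identified only by a blind production tag) several times.
# We report per-operator repeatability (same call every trial)
# and accuracy (repeatable AND matching the reference 'truth').

from collections import defaultdict

# reference: blind tag -> true call (hypothetical)
reference = {"A1": "good", "B2": "good", "C3": "bad"}

# trials: (operator, tag) -> calls across trials (hypothetical data)
trials = {
    ("op1", "A1"): ["good", "good", "good"],
    ("op1", "C3"): ["bad", "good", "bad"],
    ("op2", "A1"): ["good", "bad", "good"],
    ("op2", "C3"): ["bad", "bad", "bad"],
}

def score(trials, reference):
    stats = defaultdict(lambda: {"repeatable": 0, "accurate": 0, "parts": 0})
    for (op, tag), calls in trials.items():
        s = stats[op]
        s["parts"] += 1
        if len(set(calls)) == 1:            # same call on every trial
            s["repeatable"] += 1
            if calls[0] == reference[tag]:  # and it matches the truth
                s["accurate"] += 1
    return {op: (s["repeatable"] / s["parts"], s["accurate"] / s["parts"])
            for op, s in stats.items()}

print(score(trials, reference))
```

An operator who is repeatable but not accurate is consistently wrong; an operator who is neither makes the gage worthless, as described above.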

0
#81848

Chip Hewette
Participant

Thanks for the additional info on Q-charts.  I guess the 30 subgroup rule comes from that asymptotic approach to limits calculated at an infinite number of subgroups.

0
#81816

Chip Hewette
Participant

First, congratulations on implementing control chart techniques.  These are valuable tools and you are on the right track.
Second, much theoretical discussion aside, if the process Cpk >1.0 and Cpk < 1.33, the process is on the ragged edge.
Third, one should create reaction plans for ensuring product quality based on the sample data.  What would you do if eight sample averages in consecutive order were above the calculated long run process average?  There are many other conditions that should raise the red flag and require a studied response.
Fourth, the sample plan should not be based on the Cpk.  It should be based on an earnest desire to capture variation.  Samples within a subgroup should come from nearly identical production conditions, and the interval between subgroups should be long enough to expose other sources of process variation.  Increasing the number of samples per subgroup would be very helpful in measuring the ‘within group’ standard deviation.  Decreasing the interval between subgroups might be a waste, and could camouflage the sources of real variation.
Fifth, if this is a new process, one should have at least 30 subgroups in hand before calculating any process limits, and at least 300 initial samples measured for this critical characteristic.
Sixth, since you state it is a new process, please question if the historical data has enough variation ‘baked in.’  Are all shifts represented?  All operators?  All process inputs?  Chemicals?  Make sure you don’t use data from a limited viewpoint to make important decisions.
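The ‘eight consecutive sample averages’ trigger in the third point can be checked mechanically.  This is a hypothetical helper of my own, not from any SPC package:

```python
# Flag the classic run rule: eight consecutive subgroup averages
# on the same side of the long-run process average.

def run_of_eight(averages, center):
    """Return the index where a run of 8 same-side points completes, or None."""
    run, side = 0, 0
    for i, x in enumerate(averages):
        s = 1 if x > center else (-1 if x < center else 0)
        if s != 0 and s == side:
            run += 1
        else:
            run, side = (1, s) if s != 0 else (0, 0)
        if run >= 8:
            return i
    return None
```

When the function returns an index rather than None, the reaction plan for that red-flag condition should kick in.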

0
#81814

Chip Hewette
Participant

Think of attribute measurement studies as a way to describe the randomness or error in the attribute inspection.  An old QC saying is “100% inspection is 85% effective.”
One must also consider how many defects or nonconformances generally exist in the process of interest.  If the number of defects is very small relative to the production of goods or services, then the attribute measurement system study should be expanded to ensure that the truth can be measured.  If, for instance, the defect rate is 5,000 dpmo and the attribute measurement system shows error of 4,000 dpmo, there is a barrier to improvement.  Conversely, if the defect rate is 30,000 dpmo, the same measurement system would still allow real improvement to be seen.
Don’t forget if you have products or services with multiple opportunities for nonconformances that the measurement system study must encompass these opportunities.  10 objects with 10 possible mistakes gives the measurer 100 opportunities.
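The arithmetic is simple enough to sketch; the numbers below are made up for illustration:

```python
# DPMO = defects / (units x opportunities per unit) x 1,000,000

def dpmo(defects, units, opportunities_per_unit=1):
    return defects / (units * opportunities_per_unit) * 1_000_000

# 10 objects with 10 possible mistakes each is 100 opportunities;
# if the measurement study missed 3 of them:
print(dpmo(3, 10, 10))  # 30000.0
```

Leaving opportunities out of the denominator when they exist will overstate the measurement error rate, just as the post above warns.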

0
#81772

Chip Hewette
Participant

May I suggest that you study your independent measures first?  Do you know that you have a wide range of the three related measures of ability?  Do you have enough differences in these three measures to at least have one observation at each end of their respective scales?
If you cannot prove that the three ‘different’ groups of individuals are truly different in the ‘predictors,’ why go any further?
If predictors are highly correlated, why not pick the best predictor for analysis?  This would be especially true if the corrective measure for improving reading comprehension is unified (i.e. go to a single class called “Reading”).  How can you improve vocabulary knowledge and not grammar knowledge?  How can you improve vocabulary knowledge and not phonological knowledge?  These measures of the spoken language are by nature correlated.

0
#81771

Chip Hewette
Participant

With various defects in one item, is this not a pass-fail situation?  Should not the inspector find all defects?  If all are not found, is this not a ‘fail?’
One can put 10 to 30 items in front of 3 inspectors and see how repeatable the inspectors are very easily.  There is but one ‘truth’ and with visual defects, a wide range of possible outcomes based on the inspector’s viewpoint.
What is important is testing the pool of inspectors with realistic and complete examples of the various defects.

0
#81770

Chip Hewette
Participant

One must understand the factors of interest first.  There are many, many fundamental factors for internal combustion efficiency.  There is a theoretical maximum efficiency as well.
There are also simple maintenance issues that affect fuel economy.  Sometimes bringing an engine back to its specified original condition will help economy.
It is often a waste of DOE energy to attack a complex system without understanding the basics and verifying condition.
Whatever you do, don’t try any special spark plugs. ;)

0
#81684

Chip Hewette
Participant

Thanks for this excellent technical ‘how-to.’  I hope to find the referenced article for the full write-up.  Non-normal data is common in transactional measurements with a hard boundary condition, such as time=0.  Perhaps this transformation will help the originator of this thread find the truth.

0