Calibration Rules
September 14, 2001 at 4:00 am #68715
If you do a gage R&R study on the instrument, then look at the gage (measurement) linearity over time. This should show you how long the gage remains within whatever LSL and USL are required for your process. You may need to perform a few gage R&R studies; suggest that you work within the current recalibration timeframe.
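A minimal sketch of checking linearity over time, assuming bias readings (measured value minus reference) logged at known days since calibration; the numbers and the allowable bias limit below are purely illustrative:

```python
import numpy as np

# Hypothetical bias measurements (measured value minus reference value)
# taken at known days since the last calibration.
days = np.array([0, 7, 14, 21, 28, 35, 42])
bias = np.array([0.000, 0.002, 0.003, 0.005, 0.006, 0.008, 0.010])

# Illustrative allowable bias derived from the process LSL/USL.
bias_limit = 0.015

# Fit a straight line (linearity over time) and project when the
# drift would reach the allowable bias.
slope, intercept = np.polyfit(days, bias, 1)
if slope > 0:
    days_to_limit = (bias_limit - intercept) / slope
    print(f"Estimated drift rate: {slope:.5f} per day")
    print(f"Projected days until bias reaches the limit: {days_to_limit:.0f}")
```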
September 14, 2001 at 4:00 am #27836
What are some rules for a good gage calibration schedule? I know that if you are making adjustments every time the gage is calibrated, you aren’t doing it often enough. Are there other rules of thumb?
Thanks.

September 15, 2001 at 4:00 am #68719
Ken Myers
JC,
Concerning the calibration of a gage, and making adjustments: Ask yourself a question: when measuring a reference or standard, would you expect to measure the exact value every time if the gage were calibrated?
What considerations have you made for the inherent gage variability when performing the gage calibration? More importantly, if the gage was already calibrated, hypothetically speaking, and you measured a different value than labeled on the standard, is it prudent to make a calibration adjustment to the gage? If you adjust the gage in this instance, would such an adjustment increase or decrease the inherent gage variability? Or would it simply remain the same?
Here’s a thought: construct a Process Behavior Chart (aka SPC Control Chart of Individuals) for most of the past pre-calibration measures for a given gage using the same standard over the last year or so. Step back and look at the run chart. Are all of the values within the behavior limits (aka control limits)? If they are, use these limits as a reference for the next calibration check. When performing the next calibration check, compare the measured value to the behavior limits of the established behavior chart. If the value is inside the limits, then make no adjustments to the gage. If the value is outside the limits, then make and note your change on the behavior chart. After some period of time, sit down with this chart and review the frequency of calibration changes made to the gage. The average frequency of gage calibration changes, less one day, should be your new established calibration frequency. Give this some thought, and forward me back your questions.
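A minimal sketch of this behavior-chart check, assuming an Individuals chart with moving-range limits and made-up pre-calibration readings of the same standard:

```python
import numpy as np

# Made-up pre-calibration readings of the same standard over the past year.
readings = np.array([10.02, 10.00, 10.03, 9.99, 10.01, 10.02, 9.98, 10.00,
                     10.01, 10.03, 9.99, 10.02])

# Individuals-chart limits from the average moving range (d2 = 1.128 for n = 2).
center = readings.mean()
mr_bar = np.abs(np.diff(readings)).mean()
ucl = center + 2.66 * mr_bar   # 2.66 = 3 / 1.128
lcl = center - 2.66 * mr_bar

def needs_adjustment(new_value: float) -> bool:
    """Adjust the gage only if the calibration-check reading falls outside
    the natural behavior limits of its own history."""
    return new_value > ucl or new_value < lcl

print(f"Behavior limits: [{lcl:.3f}, {ucl:.3f}]")
print("Adjust?", needs_adjustment(10.07))
```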
Ken

September 15, 2001 at 4:00 am #68724
Ken,
Thanks for the response. I am working with a company that recently ended its consulting relationship and is attempting to become self-sufficient after training 6 MBBs. My role is to help them personalize the training materials, add company-specific examples, etc. We are taking this opportunity to also address some of the issues that frequently come up in class or projects that aren’t addressed in the material or explained to the level of detail they would like. We are all learning, and I am using this forum for other opinions so I can be sure that the additions/changes to the material are accurate and helpful.
Your posting on capability indices was particularly helpful. I have also consulted Breyfogle’s text for support.
I have worked with BBs who had measurement devices that were entirely automated (no operator variability to speak of) and where one device did all the checking. Under those circumstances, it is difficult to put together a GRR that makes sense. I usually instruct those people to check the calibration schedules and keep them current to avoid drift in their measurements. This raises the question: “What is a good calibration schedule? How often is often enough?” You’ve given me some more food for thought. I will definitely add your suggestions to the speaker notes so the presenter has the additional suggestions at hand. If there is anything else you can think of that might help, I’d be interested.
Thanks again,
Joy ([email protected])

September 16, 2001 at 4:00 am #68725
Ken Myers
Joy,
Glad to be of some help! Take care…
Regards,
Ken

September 17, 2001 at 4:00 am #68761
Jim Parnella
I tend to agree with Ken, with one exception. After your control chart is made and implemented, I’d add a 2-sigma “warning limit” IN ADDITION TO the regular 3-sigma limit. If any point exceeds the 2-sigma limit, I would do a retest. If this retest also exceeded the 2-sigma limit, or even came very close to it, I would do the re-calibration. Reason: two consecutive points at or beyond 2 sigma is an even stronger out-of-control signal than a single 3-sigma signal.
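A rough sketch of this retest rule, assuming the chart’s center line and sigma have already been estimated from the gage’s own history; the values here are illustrative:

```python
# Illustrative center line and sigma estimated from the gage's behavior chart.
CENTER, SIGMA = 10.00, 0.02

def calibration_action(reading, recheck=None):
    """Retest on a single point beyond 2 sigma; recalibrate only if the
    retest also lands beyond the 2-sigma warning limits."""
    beyond_warning = abs(reading - CENTER) > 2 * SIGMA
    if not beyond_warning:
        return "no adjustment"
    if recheck is None:
        return "retest"           # first exceedance: measure again before acting
    if abs(recheck - CENTER) > 2 * SIGMA:
        return "recalibrate"      # two consecutive points beyond 2 sigma
    return "no adjustment"

print(calibration_action(10.05))                  # retest
print(calibration_action(10.05, recheck=10.06))   # recalibrate
print(calibration_action(10.05, recheck=10.01))   # no adjustment
```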
Ken, if you read this, let me know why you take the average number of days, minus one day.
Jim

September 17, 2001 at 4:00 am #68765
Ken Myers
Jim,
Thanks for your input. I am concerned that when we begin adding additional features like 2-sigma warning limits onto a run chart of (in this case) assumed normally distributed measurement data, we could elevate the Type I error. However, your recommendation to perform a recheck will usually minimize the built-in Type I error resulting from this exercise. Therefore, I’m not too concerned with 2-sigma warning limits as long as a recheck is done per your recommendations.
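As a rough check on that concern, assuming independent, normally distributed calibration-check readings from an in-calibration gage, the arithmetic works out roughly as follows:

```python
from statistics import NormalDist

# False-alarm (Type I) probabilities for a single in-control reading.
p_beyond_2sigma = 2 * (1 - NormalDist().cdf(2))   # about 0.046
p_beyond_3sigma = 2 * (1 - NormalDist().cdf(3))   # about 0.0027

# Jim's rule: act only when a check AND its retest both exceed 2 sigma.
p_two_in_a_row = p_beyond_2sigma ** 2              # about 0.002, comparable to 3 sigma

print(f"2-sigma: {p_beyond_2sigma:.4f}, 3-sigma: {p_beyond_3sigma:.4f}, "
      f"two beyond 2-sigma in a row: {p_two_in_a_row:.4f}")
```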
Per your last question on the calibration timing, my thoughts are as follows:
1. My goal on new measurement systems is to capture, as closely as possible, the actual time when the measurement system would naturally drift out of calibration under normal use.
2. I do not know the actual drift timing, because my check period sets the sensitivity for detecting this drift. Ideally, for a new measurement system, I would like to evaluate it on much shorter periods than those set for the other systems.
3. I want to build in some limited assurance of maintaining a continuously calibrated system for continuous manufacturing operations while minimizing the chance of sending out product that is outside the spec.
With the above ground rules in place, to establish the calibration interval for a new measurement system I would evaluate the system routinely throughout the day: if possible, up to 3 times per day over a period of one to two weeks, or until a drift is observed using a run chart with dynamically adjusted control limits. After observing an apparent drift, as indicated on the control chart, I would note the number of days I operated without a system calibration. I subtract one day from that number to provide a cushion against the timing confidence, as I’m sure there will be some error in estimating the drift time. For greater calibration interval assurance, one may use up to 3 days’ difference from the estimated time determined from the run chart. What we don’t want to do is set a calibration interval that is sure to find the system out of calibration for the better part of a day’s manufacturing.
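A minimal sketch of this interval-setting step, assuming a hypothetical log of check times and drift signals from the run chart; the one-to-three-day cushion is applied as described above:

```python
# Hypothetical drift-detection log: one entry per check (up to 3 per day),
# recording elapsed days and whether the reading fell outside the
# dynamically updated behavior limits.
checks = [
    (0.3, False), (0.6, False), (1.3, False), (2.3, False),
    (3.6, False), (4.3, False), (5.6, False), (6.3, True),  # first drift signal
]

def calibration_interval(checks, cushion_days=1):
    """Days of operation before the first drift signal, less a cushion
    (1 to 3 days) against error in estimating the drift time."""
    for elapsed_days, drifted in checks:
        if drifted:
            return max(int(elapsed_days) - cushion_days, 1)
    return None  # no drift observed yet; keep checking

print(calibration_interval(checks))                   # 5 days with a 1-day cushion
print(calibration_interval(checks, cushion_days=3))   # 3 days, more conservative
```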
Jim, let me know if you think this logic makes sense. I’ve found in past work when using this approach that I can achieve what I call “Just in Time Calibration” while minimizing the chance of sending out product that is out of specification. Usually, this method establishes the most economic interval possible.
One last note: all systems wear down, including measurement systems. Therefore, one should consider re-evaluating a given measurement system using the prescribed methods more than once in the lifetime of the measurement system. For instance, I would advocate that a full capability study be done on a pair of calipers once every 1 to 1-1/2 years. Laser micrometers may go out to 2-3 years. The idea here is that you are trusting these systems to give you accurate information each time you use them. A periodic check will minimize the chance of receiving unwelcome surprises.
Ken

September 18, 2001 at 4:00 am #68773
Jim Parnella
Ken, thanks for your response. I don’t like 2-sigma limits either, but in this case (automatic retesting) I think we agree that they are OK, especially when talking about something as important as calibration.
Your logic on point number 3 (the one-day approach) makes sense. Your initial post, however, confused me when you stated the AVERAGE number of days. If it was one day sooner than the average number of days, your recalibration would be late almost 50% of the time. Thanks for clarifying that.
Jim

September 19, 2001 at 4:00 am #68796
Ged Bryant
I suggest the Joe Juran “Quality Control Handbook” (McGraw-Hill). It will walk you through this and a thousand other questions.
September 19, 2001 at 4:00 am #68801
Ken Myers
Jim,
Thanks for the reply and clarification on your question. You are correct, it’s not the average number of days, but rather the total observed days. I believe the word “average” slipped out because I was thinking of the centerline of the control chart. However, I realize that the chart I recommended was an Individuals chart. Therefore, we are looking at all the measured values. Good spot… Keep me honest! Sorry for the confusion…
Ken

September 20, 2001 at 4:00 am #68817
Ulf Christiansen
If the tool is out of tolerance two times in a row, then cut the calibration interval in half.
If it is inside the tolerance 3 times in a row, then double the calibration interval.
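A minimal sketch of this doubling/halving rule, assuming a running history of in-tolerance (True) and out-of-tolerance (False) calibration results:

```python
def adjust_interval(interval_days, history):
    """Halve the interval after two consecutive out-of-tolerance results,
    double it after three consecutive in-tolerance results."""
    if len(history) >= 2 and not any(history[-2:]):
        return max(interval_days // 2, 1)
    if len(history) >= 3 and all(history[-3:]):
        return interval_days * 2
    return interval_days

print(adjust_interval(90, [True, False, False]))  # 45: two failures in a row
print(adjust_interval(90, [True, True, True]))    # 180: three passes in a row
```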
Ulf

September 23, 2001 at 4:00 am #68850
The subject of calibration intervals is covered in depth in NCSL Recommended Practice #1, “Establishing and Adjusting Calibration Intervals”. A number of different methods are discussed and analyzed. Methods range from overly simplistic (changing the interval depending on the results of the most recent calibration) to highly complex mathematical reliability studies. Each method has a detailed analysis of its positive and negative factors, to help you choose one appropriate for the amount of data you can reasonably collect. You can order this from NCSL International ( http://www.ncslinternational.org ).
The biggest problem with determining a “proper” calibration interval for a class of tools as used in your facility is the fact that you are using a small number of samples, obtained at infrequent and extended intervals, out of a small population. You have to balance your desired reliability level and the desire for statistically meaningful numbers against the time and cost of doing the calibration and obtaining the data. It is quite possible to have to make an arbitrary judgement because the time required to obtain statistically significant data far exceeds the useful lifetime of the tool.
Adjustment – the rule where I work is that if the performance during calibration is within the central 50% of the specification band, do not adjust the tool.
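A minimal sketch of that adjustment rule, with purely illustrative LSL/USL values for the parameter checked at calibration:

```python
def adjust(measured, lsl=9.0, usl=11.0):
    """Adjust only if the calibration result falls outside the central 50%
    of the specification band; LSL/USL here are illustrative."""
    center = (lsl + usl) / 2
    half_band = (usl - lsl) / 2
    return abs(measured - center) > 0.5 * half_band

print(adjust(10.3))  # False: within the central 50%, leave the tool alone
print(adjust(10.8))  # True: outside the central 50%, adjust
```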