iSixSigma

Control Limits – Point of Diminishing Return?


This topic contains 40 replies, has 8 voices, and was last updated by  Bill C 12 years, 4 months ago.

  • #47165

    Bill C
    Participant

    If all goes to plan with SPC, you will reduce variation and maintain a process that is on target. As you improve the capability of the process and your cpk climbs to 2 and beyond, is there a point at which you lock in your control limits rather than tightening them?
    I realize that spec limits are the voice of the customer and the control limits are the voice of the process. No need to explain that!
    There must be a point at which you leave the control limits alone and run the risk of type II error, where the type II error does not really present any “consumer’s” risk. Any thoughts or rules of thumb?

    0
    #156938

    Jim Shelor
    Participant

    Bill,
    The methodology for Shewhart control charts is that as soon as you have sufficient data to establish trial control limits (e.g., >25 points for an X-bar,R chart or >30 points for an I,mR chart), you establish the trial limits and lock them in.
    As you continue to chart your process, you leave the control limits locked unless you get an indicator that the process may be out of control and is not returning to normal.  Examples:

    One or more points >UCL or <LCL with no discernible reason requires investigation, but not necessarily revision of the control limits.
    6 points in a row increasing or decreasing may be reason to consider revising control limits, but only if corrective action does not cause the trend to return toward normal, the trend does not return toward normal without corrective action, and the trend appears to be establishing a new, permanent, significantly different mean.
    9 points in a row > or < the current target, but only if….(see 6 points above).
    Otherwise, the control limits should stay locked until one of these, or one of the other control chart tests (whichever you decide to use in your monitoring plan), is triggered.
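    If it helps to see the arithmetic, here is a minimal sketch (invented readings, the standard 2.66 I-mR constant, and only two of the rules above) of establishing trial limits and screening for those signals:

# Sketch: trial limits for an I-mR (individuals / moving range) chart
# and two of the run rules discussed above. The readings are invented.

def imr_limits(data):
    """Trial limits from >=30 individual readings (standard I-mR constants)."""
    n = len(data)
    xbar = sum(data) / n
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, n)]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 = 3 / d2, with d2 = 1.128 for moving ranges of size 2
    return xbar, xbar - 2.66 * mr_bar, xbar + 2.66 * mr_bar

def beyond_limits(data, lcl, ucl):
    """Rule: one or more points outside the control limits."""
    return [i for i, x in enumerate(data) if x > ucl or x < lcl]

def trend(data, length=6):
    """Rule: 'length' points in a row steadily increasing or decreasing."""
    for i in range(len(data) - length + 1):
        window = data[i:i + length]
        if all(a < b for a, b in zip(window, window[1:])) or \
           all(a > b for a, b in zip(window, window[1:])):
            return i
    return None

if __name__ == "__main__":
    readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7, 10.0, 10.4] * 3
    center, lcl, ucl = imr_limits(readings)
    print(f"CL={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
    print("points beyond limits:", beyond_limits(readings, lcl, ucl))
    print("start of 6-point trend:", trend(readings))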
    Grant & Leavenworth suggest that control limits should be reviewed periodically (not defined) to ensure that the limits in use are providing you with accurate indications of a stable/predictable process.
    I hope this helps.
    Jim Shelor

    0
    #156941

    Bill C
    Participant

    Jim,
    Thanks for the feedback. I was actually looking for different information though.
    As a process matures and you go through iterations of reducing variation, tightening control limits, reducing variation, tightening control limits, etc., there must come a point where it is no longer economically feasible to tighten the limits. Assume an extreme case where you have a Cpk of 3.0.  Some would say that SPC has done its job at this point and is no longer needed.
    Any thoughts on this?

    0
    #156942

    Mikel
    Member

    Bad advice. You only change limits when you have evidence the process has improved.

    0
    #156960

    Bill C
    Participant

    Stan,
    Do you ever get to the point where your capability is so high that you refrain from tightening the limits?  You start out with a mediocre Cpk. Over time you make process improvements and tighten your limits a few times. The Cpk eventually becomes 3.0. You evaluate new limits and find that the statistically based limits are extremely tight.
    Do you decide NOT to tighten the limits?

    0
    #156969

    Jim Shelor
    Participant

    Stan,
    I do not believe that you ONLY revise control limits when the process improves.
    That being said, revising the control limits only when the process has improved makes logical and common sense to me, so I am not prepared to dispute that method of operation.  It can certainly be defended.
    Jim Shelor

    0
    #156972

    Jim Shelor
    Participant

    Bill,
    In my opinion, the point of diminishing returns occurs when making a further improvement in the product would have a negative ROI, that is, it would reduce the bottom line.
    In the case of an improvement that produces a negative ROI, the customer should be queried to determine if the customer wants the improvement and whether or not the customer is willing to pay for it (an increase in the price of the product).  As long as the product is meeting or exceeding the customer’s requirements, an improvement beyond that is an economic decision.
    If the company is willing to accept the risk, betting that the improvement will gain market share and improve the bottom line, the company may decide to proceed with the improvement regardless of the customer’s willingness to pay for it.
    When the decision is made not to make continued improvements to the product, by default, you have made the decision to stop tightening the control limits.
    Jim Shelor

    0
    #156973

    Savage
    Participant

    Yes, there is a point of diminishing returns. If you can spend your resources on something else and get larger gains, it obviously makes sense to work on those other projects. If you have done this for all projects and your organization has reached the point at which spending $1.00 to improve a process only leads to $1.00 in savings, why improve?

    0
    #156974

    Mikel
    Member

    The simple answer is yes and I believe it comes before you reach a Cpk of 2. You replace the control chart with simple things like go/no go devices that are not set to spec, but to the known capability. Rules are simple – if you fail the device, stop the line.
    Mistake proofing devices can also eliminate the need for charts.
    This is not about ROI like Mr. Shelor is suggesting. Rules for charting are simple –
    1) Make sure it is one of the few true inputs that affect the few true customer CTQ’s
    2) Make sure there is not a better way to do it, such as mistake proofing, gauging set up to reflect good capability, etc.
    3) Focus the charts on the things known to influence out of control conditions such as set ups, material lot changes, personnel changes, …
    4) Know how often the chart has historically gone out of control and sample only 6 – 10 times more often than that.
    5) Demand root cause investigation for all out of control situations.
    6) Move controls upstream to eliminate downstream detection.
    If I knew I was achieving a Ppk > 1.5, I would not have a chart in place.
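    A minimal sketch of that kind of capability check, assuming an invented spec of 48-52, made-up data, and the 1.5 figure mentioned above as the cut-off:

# Sketch: estimate long-term performance (Pp, Ppk) from recent data and
# decide whether a control chart still earns its keep. Numbers are invented.
import statistics

def pp_ppk(data, lsl, usl):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)          # overall (long-term) standard deviation
    pp = (usl - lsl) / (6 * sigma)
    ppk = min(usl - mu, mu - lsl) / (3 * sigma)
    return pp, ppk

if __name__ == "__main__":
    data = [49.9, 50.1, 50.0, 49.8, 50.2, 50.1, 49.9, 50.0, 50.3, 49.7]
    pp, ppk = pp_ppk(data, lsl=48.0, usl=52.0)
    print(f"Pp={pp:.2f}  Ppk={ppk:.2f}")
    if ppk > 1.5:
        print("Consider simpler controls (mistake proofing, go/no-go gauging).")
    else:
        print("Keep the chart in place.")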
     

    0
    #156975

    Mikel
    Member

    What does a control chart have to do with investing in improvement? These are two distinct issues.

    0
    #156980

    Bill C
    Participant

    Thanks to all for contributing. I think Stan addressed the specific question I was asking. The point about Ppk is well taken.
    If I were to put this in layman’s terms:
    Your doctor wants you to keep your systolic blood pressure at 120 +/- 20.
    You are a nerd (ha ha), so you begin to use an I-mR chart.
    Your phase 1 control limits come out to 120 +/- 15. One day you obtain a reading of 138, so you go to the ER. They say, good job nerd, it was critical that you “shut down your day” and came here. You find that your diet was the root cause of this excursion, and they send you home with some meds.
    Over time, you exercise and improve your diet. You re-evaluate your control limits, and they now calculate at 120 +/- 10. The next day you obtain a reading of 131, so you go to the ER. They say, good job, but you might be a little overcautious. Did you re-measure your BP to make sure it was a valid reading? You say no in embarrassment, and are sent home with no meds.
    You train for an Ironman triathlon and your BP is extremely stable and tight. You re-calculate your limits and they are estimated to be 120 +/- 5. You think of how embarrassing it was the last time, so you decide to leave your reaction limits at +/- 10 and you assure yourself that you will double-check your reading, perhaps wait a few hours and re-measure, etc.
    Had you decided otherwise, you would run to the doctor at every reading above 125, and you would be referred to a doctor who focuses on an organ in between your temples!
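    To put rough numbers on the story (every reading below is invented), a quick sketch of how the recalculated I-mR limits keep tightening across the three phases while the reaction limits stay locked at +/- 10:

# Sketch: recalculated I-mR limits across three phases of the blood-pressure
# example vs. a locked set of reaction limits. All readings are made up.

def imr_limits(readings):
    xbar = sum(readings) / len(readings)
    mrs = [abs(a - b) for a, b in zip(readings, readings[1:])]
    mr_bar = sum(mrs) / len(mrs)
    return xbar - 2.66 * mr_bar, xbar + 2.66 * mr_bar   # 2.66 = 3/d2, d2 = 1.128

phases = {
    "phase 1 (baseline)":    [118, 125, 114, 122, 127, 116, 121, 124, 115, 119],
    "phase 2 (better diet)": [121, 117, 123, 119, 122, 118, 120, 121, 119, 122],
    "phase 3 (triathlete)":  [120, 119, 121, 120, 118, 121, 120, 119, 120, 121],
}
locked_lo, locked_hi = 110, 130   # reaction limits deliberately left at +/- 10

for name, readings in phases.items():
    lo, hi = imr_limits(readings)
    print(f"{name}: calculated limits {lo:.0f}-{hi:.0f}, "
          f"locked reaction limits {locked_lo}-{locked_hi}")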

    0
    #156988

    Jim Shelor
    Participant

    Stan,
    Whether or not to improve the product is clearly an economic issue.  If you are going to lose money by performing an improvement, you should not perform the improvement, assuming you are meeting the customer specifications.
    Control charting is clearly a completely separate issue.  Control charting is a matter of ensuring the process is operating in a stable manner.
    Not control charting a process because you once found that it was performing at a Ppk of 1.5 is a mistake.  How do you know that you are still at 1.5 if you stop monitoring the stability of the process?
    Regards,
    Jim Shelor

    0
    #156989

    Mikel
    Member

    Do you think you have to have a control chart to know how your process is performing?

    0
    #156992

    Jim Shelor
    Participant

    Stan,
    To know whether or not it is stable and whether or not I should be applying corrective action, yes.
    Jim Shelor

    0
    #156995

    Mikel
    Member

    Jim,
    It’s about understanding the logic, not plastering charts on everything that moves. You definitely don’t need charts to know if something is stable or in need of corrective action. Any place I have ever seen with lots of charts did not pay any attention to them. A control chart is a control of last resort, not the control of choice.

    0
    #156996

    Richard
    Member

    Excellent example!
    I will phrase my response in terms of your example. 
    You began your health process analysis with the goal of becoming healthy and used your blood pressure as the Key Indicator of health.  The diagnosis you described (the doctor giving a specification of 135) does not imply that your blood pressure was out of control, just out of specification.  Assuming that finding your blood pressure out of control and out of specification led you to your improvement process, that would be the appropriate use of SPC.
    After proper root cause determination, you found your diet to be the Key Driver of your blood pressure and found that if you maintain your diet you get the improved results (less variability in blood pressure).
    At this point, your reaction (being a REAL nerd, you would have a Control Plan to go with your SPC) is likely to be different.  While you indicate that your blood pressure still goes out of control, the reaction to this may be very different.  You are meeting the spec (135) the doctor has specified as healthy.  Therefore, the response is dependent on what you want out of the new process.
    Let’s skip ahead to the triathlon….you have continued to improve (assuming that performance in the triathlon is now the goal of the process, since you have already established capable health) and now find your process out of control.  Again, your reaction would need to be consistent with your goal for the process.
    Since it is implied that triathlon performance is being missed, your reaction is likely to find a root cause different from diet, and you may find that the improvement needed is not to reduce variation but to shift the “centerline” to get to your new goal.
    If your goal remains “good health” as defined by the specification from the doctor, my experience would be to address the reaction plan accordingly; which you implicitly did by saying that you would recheck the pressure, wait a few minutes and recheck, etc.
    I hope this is clear, understandable, and helps with your decision.
    Richard

    0
    #156997

    Jim Shelor
    Participant

    Stan,
    I disagree.  And I believe Deming and Shewhart disagree as well.
    Deming said “You get what you inspect, not what you expect”.
    Jim Shelor

    0
    #157001

    Mikel
    Member

    Jim,
    They are both dead, so that is a theoretical discussion at best. I only do those with a bottle of Jack Daniels around.
    I have read all of Deming’s writings, seen him multiple times, and talked with him for extended periods twice. I believe you are wrong. He used control charts to demonstrate the thinking process but in no way advocated putting control charts everywhere and forever. I think you will find Wheeler does the same thing. He uses control charts all the time to demonstrate and understand but in no way does he believe the control chart is the primary control method.
    Go look at the best example of Deming’s teachings – Toyota and Nippondenso. You will be hard pressed to find a control chart in the place – but you will find the thinking everywhere.
    One of the biggest fallacies of the way SS has evolved is the overuse of control charts and the underuse of thinking. The original poster is well on his way to thinking; he is asking the right questions and taking the time to understand.

    0
    #157003

    Jim Shelor
    Participant

    Stan,
    I do not advocate posting control charts everywhere either.
    But to stop control charting a process just because it once told you it had a Ppk of 1.5 is a huge mistake.
    Jim Shelor

    0
    #157007

    Mikel
    Member

    I’ve been doing it for at least 17 years. There is no economic justification for charting those things known not to be an issue and there are other simpler controls to replace the chart.

    0
    #157008

    Jim Shelor
    Participant

    Fine.
    I have been doing this for 17 years and I firmly believe in control charting.
    I guess we can agree to disagree.
    Jim Shelor

    0
    #157010

    Jim Shelor
    Participant

    Stan,
    If you would care to discuss the details of your methods I am always willing to learn and discuss with an open mind.
    j.shelor@verizon.net
    Best regards
    Jim Shelor

    0
    #157015

    Bill C
    Participant

    Jim,
    Would you at least refrain from tightening limits when your process capability is over 2.0? At some point, you can reduce variation so much that 3-sigma violations are not a concern in the grand scheme of things.
     

    0
    #157018

    Jim Shelor
    Participant

    Bill,
    I am wondering why I would work very hard to get my process to operate at a Ppk of > 2, but monitor it at the control limits for a Ppk of 2.0.
    If I have worked that hard to improve the process, I prefer to monitor it at the level I attained.
    There was a reason for improving to a level of Ppk = 3, so I don’t want to wait until the Ppk has dropped to 2 before I get a warning that the process is no longer in control.
    I may monitor it at a reduced frequency, but I am going to monitor it, and at control limits that reflect the capability, performance, and stability I am looking for.
    Have a great day,
    Jim Shelor

    0
    #157023

    Bill C
    Participant

    Jim,
    It is a matter of resources and the potential for overadjustment.
    I would opt to keep the SPC in place with wide limits and have the process owner monitor it daily.  I might also switch to a different set of SPC rules for the chart, like ascending/descending trends.
    There is a point where “over-reaction” can be a problem for the bottom line.

    0
    #157029

    Jim Shelor
    Participant

    Bill,
    It takes the same resources to monitor a chart set up with normal limits as it does to monitor a chart set up with artificially wide control limits.
    As far as getting more false alarms and overreacting: if you are monitoring your process with control limits appropriate to a Ppk of 2 and your process is operating at a Ppk of 2, the probability that you will see an out-of-control indicator is the same as if you are monitoring a process operating at a Ppk of 3 with control limits appropriate for a Ppk of 3.
    The potential for over-reaction is the same for a process operating at 2 with control limits for 2 as it is for a process operating at 3 with control limits for 3.
    If you are going to monitor, the economic impact of the monitoring is a wash.
    The only thing you accomplish by monitoring with artificially wide control limits is a delay in receiving the indication that your process may be out of control.  You can have many points above the UCL or below the LCL for the limits for 3 that do not show up as such on a control chart with artificially wide (2) control limits.
    If a Ppk of 3 is where you want to operate, why would you set your chart up to hide the indications of an out of control condition?
    If operating the process at a Ppk of 2 is adequate, then don’t spend the money it takes to get to 3, only to then monitor at the limits for 2.
    If you really need the process to operate at a Ppk of 3, then you must monitor with limits for 3 or you cannot be sure you are actually operating at 3.
    Working for a Ppk of 3 and then monitoring with false control limits equivalent to a Ppk of 2 simply makes no sense to me.
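    A small numerical sketch of the detection-delay point, using only standard normal arithmetic and invented process numbers (target 100, current sigma 1.0, the wide limits built from an old sigma of 1.5):

# Sketch: chance that a single point falls outside the control limits, for
# limits matched to the current process spread vs. artificially wide limits.
# Standard normal arithmetic only; all process numbers are invented.
from statistics import NormalDist

target, sigma_now = 100.0, 1.0          # current (improved) process
sigma_old = 1.5                          # spread the "wide" limits were built from

def p_signal(mean, sigma, lcl, ucl):
    d = NormalDist(mean, sigma)
    return d.cdf(lcl) + (1 - d.cdf(ucl))

limits = {
    "limits from current spread": (target - 3 * sigma_now, target + 3 * sigma_now),
    "artificially wide limits":   (target - 3 * sigma_old, target + 3 * sigma_old),
}

for name, (lcl, ucl) in limits.items():
    in_control = p_signal(target, sigma_now, lcl, ucl)
    shifted = p_signal(target + 2 * sigma_now, sigma_now, lcl, ucl)  # 2-sigma shift
    print(f"{name}: false alarm/pt = {in_control:.4f}, "
          f"signal/pt after 2-sigma shift = {shifted:.3f}, "
          f"avg points to detect ~ {1/shifted:.0f}")

    (With these invented numbers, the matched limits flag a 2-sigma shift within about 6 points on average; the wide limits take on the order of 160.)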
    Best regards,
    Jim Shelor

    0
    #157030

    GrayR
    Participant

    Bill,
    You are only looking at half the picture.  You have a good understanding of VOC and VOP, but capability is only one of the reasons you want to reduce variation.  Capability is not the only characteristic that involves cost and economic decisions.  You can have a process with a 1.5 or 3.0 Cpk, and the process can still involve a lot of unnecessary cost.  I am familiar with processes where we have a 1.50 Cpk, and the process result still costs us a lot of $.  In this case, we don’t eliminate the use of control charts to understand variation, because there is more opportunity. The VOC is needed to define the capability, but just because it is the VOC doesn’t mean that it identifies ALL of the costs associated with the process.
    Missing from this entire discussion is an understanding that someone always ‘pays’ for process variation in some manner — this was Taguchi’s role in the development of quality philosophy.  As long as you have variation (and there will always be variation in every process — if you can’t find it, you aren’t measuring finely enough), there is an associated cost.  The cost may be small in relation to other costs, but it is there.
    The question of ‘when are there diminishing returns?’ is NOT answered entirely by process capability (Cpk).  Even with very high Cpks, reducing variation (and continuing to use control charts) may lead to further cost reduction and/or more product opportunity (e.g., higher profits).
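    A minimal sketch of the Taguchi idea, with a purely hypothetical spec and cost figure: the usual quadratic loss model charges a cost for any deviation from target, not just for out-of-spec units.

# Sketch: Taguchi quadratic loss L(x) = k * (x - target)^2, with k chosen so
# that a unit right at the spec limit carries the full cost of a defect.
# All numbers are hypothetical.
target, spec_half_width = 10.0, 0.5      # spec = 10.0 +/- 0.5
cost_at_spec_limit = 4.00                # $ lost when a unit sits at the spec limit
k = cost_at_spec_limit / spec_half_width ** 2

def loss(x):
    return k * (x - target) ** 2

for x in (10.0, 10.1, 10.25, 10.5):
    print(f"x = {x:5.2f}  ->  loss = ${loss(x):.2f}")
# Average loss over many units ~ k * (sigma^2 + (mean - target)^2), so reducing
# variation keeps paying off even when every unit is comfortably inside the spec.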

    0
    #157044

    Bill C
    Participant

    Jim,
    You obviously have a great passion for SPC. Your insights are appreciated. Just keep in mind that there is nothing magic about 3-sigma limits. I could just as easily have used 3.5 sigma limits from the beginning if I was satisfied with that risk level and the corresponding average run length. Many people use 3 sigma limits, but do not understand how OC curves and ARL apply to their charts.
    Grant & Leavenworth list possible objectives for control charts. The one I want to emphasize is: ” To provide a basis for current decisions during production as to when to hunt for causes of variation and take action intended to correct them, and when to leave the process alone. This is nearly always one of the purposes of any control chart for variables.”
    I strongly believe that if you do enough variation reduction, it is OK to refrain from tightening your limits at some point. Call it phase 5 or whatever you like. Review the alpha and beta risks for the chart based on the 4 sigma limits (or whatever they happen to be), and summarize the risks.
    Another good quote from Grant & Leavenworth is:
    “The idea behind modified control limits is to permit limited shifts in process averages in cases where the difference between the two specification limits is substantially greater than the spread of a controlled process. This is intended to avoid the cost of stopping production to hunt for trouble whenever the shifts in average are not sufficient to cause the production of non-conforming product.”
    This is the same concept as rejecting the null hypothesis in t-testing but not really caring, because the difference between groups is not PRACTICALLY significant.
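    For reference, a minimal sketch of the alpha/beta/ARL arithmetic for k-sigma limits (standard normal calculations only; the 3.5- and 4-sigma values are just examples):

# Sketch: in-control false-alarm rate (alpha) and average run length (ARL0)
# for k-sigma control limits, plus the miss rate (beta) for a 1-sigma shift.
# Pure standard-normal arithmetic; the k values are just examples.
from statistics import NormalDist

z = NormalDist()                          # standard normal
for k in (3.0, 3.5, 4.0):
    alpha = 2 * z.cdf(-k)                 # P(point outside limits | in control)
    arl0 = 1 / alpha                      # average points between false alarms
    # beta: P(point stays inside the k-sigma limits after a 1-sigma mean shift)
    beta = z.cdf(k - 1) - z.cdf(-k - 1)
    print(f"{k}-sigma limits: alpha={alpha:.5f}, ARL0~{arl0:,.0f}, "
          f"beta(1-sigma shift)={beta:.3f}, ARL1~{1/(1-beta):.0f}")

    With 4-sigma limits the false-alarm rate drops by a factor of roughly 40, but a 1-sigma shift now takes hundreds of points to catch on average, which is exactly the alpha/beta trade being described.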
    Best regards,
    Bill C

    0
    #157058

    Jim Shelor
    Participant

    Bill,
    So far, this discussion has been in terms of Cpk and of monitoring a process at the limits appropriate for a Cpk of 2 when the actual Cpk is 3.  That is a huge difference from what you are talking about now.
    In your last post, you introduced the idea of setting your control limits to 4-sigma rather than 3-sigma to reduce the probability of out of control indications that delay processing and cost money to investigate but have essentially no chance of producing an out of specification product.
    Of course I agree that is a good practice.  It is commonly done to achieve the purpose you stated.  But monitoring with 4-sigma rather than 3-sigma limits is considerably different from monitoring at limits appropriate for a Cpk of 2 when your Cpk is actually 3 (a difference of 3 sigma).
    We seem to have broken the first rule of communication here.  Always make sure you are talking in the same terms.
    Best regards,
    Jim Shelor

    0
    #157077

    Bill C
    Participant

    Jim,
    There is absolutely no difference in my context.
    Original Question:
    ———————–
    If all goes to plan with SPC, you will reduce variation and maintain a process that is on target. As you improve the capability of the process and your cpk climbs to 2 and beyond, is there a point at which you lock in your control limits rather than tightening them?
    —————————-
    If I reduce variation, and in phase 5 I decide not to tighten my limits due to the reasons I found in Grant & Leavenworth, that is equivalent to using limits that are beyond 3 sigma.  If I just lock my limits where they are, they will be somewhere beyond 3 sigma. The whole idea behind using “modified” control limits is that your capability has improved (i.e., Cpk greater than 2) such that you don’t need to use the traditional 3-sigma limits.
    This has been the theme of my questions & comments all along.

    0
    #157080

    Bill C
    Participant

    GrayR,
    You are right. I painted half the picture and I am seeking inputs on when to refrain from tightening limits any further. Cpk is a very large part of this decision. I agree that there are some cases where variation reduction results in further cost savings, but there is a point at which you interrupt the steady state flow of the process by shutting it down all the time, which will end up doing the reverse….it will make your variation go up.
    I really have to disagree with the point that someone always pays for variation. Just think of a plane landing on a runway. You have a left spec limit and a right spec limit. If the pilot’s ability to land is highly capable and he/she is centered with a +/- 3 sigma of 1 foot, should we then tighten our process requirements and try to control it to a 3-sigma spread of +/- 4 inches? There is no need to reduce variation even further. There would be costs to reducing variation further, as is usually the case.

    0
    #157082

    Fake Gary Alert
    Participant

    What is the value of Pp if Ppk becomes 3?

    0
    #157083

    Jim Shelor
    Participant

    Bill,
    Fine.
    I alone am responsible for misunderstanding all the questions and all the answers.
    Once I realized that the question was 4-sigma limits instead of 3-sigma limits, rather than limits appropriate to a Cpk of 2 when the actual Cpk was 3, I agreed that using 4-sigma limits was not an issue.
    The use of 4 sigma limits rather than 3 sigma limits is common.  When that should be applied – it depends.
    Jim Shelor

    0
    #157084

    Jim Shelor
    Participant

    >= 3 (Pp can never be less than Ppk; they are equal only when the process is exactly centered).

    0
    #157092

    GrayR
    Participant

    Bill C.
    Yes, there is a cost associated even with your very hypothetical example.  You missed thinking about the landing process as a system.  The ‘payer’ in this case is not the pilot, the airline, or the airplane manufacturer, but the airport.  Every 1″ of runway width x the length of all of the landing runways has a cost associated with landing variability. In your case, the cost would be 4″ x length x $/inch of width — I think it would probably be millions of dollars across all of the runways.  Even if you take your example down to finer decimals, the cost would be comparatively reduced — but still a cost.
    The reality of the situation is that the cost of an airport runway involves variability related to individual landings and also variability in airplane types (wide vs. narrow), and also possible variability in future airplane design (do you think anybody is rebuilding runway widths to handle the variability introduced with the new Airbus?).
    But you also painted another half picture — I said there was a cost . . . I said that you would want to understand the cost . . . however, I did not say that you should focus on reducing variation in flys..t.

    0
    #157094

    lin
    Participant

    Lots of replies already.  I don’t see the most obvious thing I would do.  If you have a high Cpk and your process is really in statistical control – consistent and predictable – reduce your frequency of sampling.  Whether you use a control chart to continue monitoring is your choice.  I would.  It was important to chart at one time, so I would keep it going, just sample less frequently.  I believe it is always best to look at data as a time series chart – with limits wherever possible.  It is the way the process communicates with you.  The correct interpretation will tell you if something has changed or if everything remains the same.

    0
    #157095

    Bill C
    Participant

    GrayR,
    Very insightful again, but……
    The specs are remaining the same. The width of the runway is very adequate to handle today’s process. It does not make sense to make the pilot hit the target within 3 sigma if this amounts to some ridiculous amount like +/- 4 inches. That is the whole basis of this thread.
    You are actually painting the wrong picture in your cost discussion. The runway is staying the same width. No one said anything about making it wider. The cost of hitting the target is a function of the plane design, the pilot’s capabilities, etc. Who said anything about widening the spec? I understand the concept of a system very well; that is why I realize the costliness of reducing variation unnecessarily.
    Didn’t you say that someone always ‘pays’ for process variation in some manner? Didn’t you also say that reducing variation (including the need for using control charts) may lead to further cost reduction? The reality is that you can pay excessively to squeeze out more variability, and the ROI is not there. (And the customer won’t even care)

    0
    #157096

    GrayR
    Participant

    Bill C.
    There is another way to understand the costs associated with variability:
    1.  Using a hypothetical airplane/airport with only one design factor needed for landing strip width — variability in the pilot’s ability.  The landing gear width is 60′. Under the zero-variability condition, you would only need to design the runway at 60′ wide (why would you design it wider if there is no variability? in fact, you could probably eliminate the concrete in the center).  The cost for this runway is $6 million.
    2.  Using your example of +/-4″ at three sigma, you now have to account for the only variable — the pilot’s ability.  Unfortunately, +/- 3 sigma probably isn’t good enough for an airport (especially O’Hare), so you may want to go +/-12 or 20 sigma.  In this case, you add +/-3′ (10%) of width onto the runway.  The added cost is $0.6 million.  The added cost is associated only with variability introduced by the pilot’s ability.
    3. BillC airline represents 99% of the flights into the airport, but Stan’s airline also flies in for the remaining 1%.  Stan’s airline still averages 60′, but can only maintain +/-15′ (at 12 sigma).  Now, if you design the airport for this scenario, you need to add 50% onto the width, or an additional $3 million onto the budget, because of something that happens much, much less than 1% of the time (frequency of 1% x 12 sigma), and the COPQ in this case is astronomical.  The airport is impacted by the extreme variability — and that is the cost to the system.  Add another view — BillC and Stan airlines have to absorb the cost of the runways in landing fees — so BillC is paying for the variation, just not its own.
    In this example, it doesn’t matter that all three processes average 60′; it is variability that adds all of the higher costs. If you budget only for the process average and ignore the variability in the system, you will grossly underestimate the costs. If you think that everyone understands this, then you haven’t been around enough engineers, accountants, and managers who every day use an average to calculate budgets and costs without understanding process variability.
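    A minimal sketch of that budgeting arithmetic, with every width, sigma, and cost figure invented along the lines of the example above:

# Sketch: runway width driven by landing variability. Required width is the
# gear width plus a design margin of k sigmas on each side, and the airport
# has to build for the worst distribution it must accept. All numbers invented.
gear_width_ft = 60.0
cost_per_ft = 100_000          # $ per extra foot of width over the full length
k = 12                         # one-sided design margin, in sigmas

airlines = {
    "BillC Air": (4 / 12) / 3,   # lands within +/-4 in at 3 sigma -> sigma in ft
    "Stan Air":  15 / 12,        # lands within +/-15 ft at 12 sigma -> sigma in ft
}

for name, sigma in airlines.items():
    width = gear_width_ft + 2 * k * sigma
    extra = (width - gear_width_ft) * cost_per_ft
    print(f"{name}: needs {width:.1f} ft of width (extra cost ${extra:,.0f})")

worst = gear_width_ft + 2 * k * max(airlines.values())
print(f"the airport has to build for the worst case: {worst:.1f} ft")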

    0
    #157097

    GrayR
    Participant

    Your example identified the ridiculous amount of +/- 4 inches, so don’t try to turn it back on me.  I was explaining where the costs are.
    To answer your questions:
    Yes — there is a cost.
    Yes — there may be a benefit.
    And yes, the benefit may not outweigh the cost to achieve it; in other words, there would be no benefit to tearing up the runways because we have now improved capability by 8″.  But if YOU were in charge of the next runway construction, I think you would probably take advantage of reducing the runway width by 8″, and I don’t think the savings would only be a couple of pesos . . .

    0
    #157116

    Bill C
    Participant

    GrayR,
    Now we are talking! I agree that I would reduce the width of the next runway because my process capability is so good. Shave 8 inches off the width to save money! It is a good move to tighten the specs if your process is highly capable and if there is some actual benefit like cost savings or customer satisfaction.
    Take, for example, a multilayer lamination process. If your process is extremely tight, why not guarantee a tighter thickness spec if the customer needs it? I decided to move away from the runway scenario!

    0