Forum Replies Created


November 23, 2020 at 9:23 am #250995
Eric Maass (@poetengineer), Participant
Hi Marc68,
You could consider Preventative Maintenance for analytical instruments in the pharmaceutical industry to be a tradeoff between Risk and Cost. The statistical analysis comes into play as a way to balance Risk and Cost.
Taking these to an extreme: if Risk was “high” and PM/Calibration was “free” – that is, if there was high risk and PM/Calibration took zero time and cost no money – then you could do PM/Calibration incredibly frequently: daily, hourly, whatever.
If Risk was pretty much negligible and PM/Calibration was incredibly expensive, then you might move the PM / Calibration frequency to years and years – perhaps the 2 years in your question.
So, my suggestion would be to first create a financial model for the cost and time required. This should be fairly simple.
Then, create a model for the risks – there are two types of risks, Alpha Risk (the “Producer’s Risk”, the risk of overreacting, in this case PM/Calibrating too often) and Beta Risk (the “Consumer’s Risk”, the risk of underreacting, in this case the risk of missing a shift in calibration that impacts your customers).
Your MSA (Measurement System Analysis) could provide information for the risk model.
Lastly, you could combine the financial model and the risk model to propose a frequency of PM/ Calibration.
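As a toy illustration of combining the two models, here is a Python sketch; every number in it is a hypothetical placeholder, not a recommendation – the point is only that PM cost scales with frequency while risk exposure scales with the interval:

```python
# Sketch: pick a PM/calibration interval that minimizes expected annual cost.
# All numbers are hypothetical placeholders for the financial and risk models.

PM_COST = 500.0        # cost of one PM/calibration event ($)
MISS_COST = 50_000.0   # cost of an undetected calibration drift ($)
DRIFTS_PER_YEAR = 0.5  # assumed rate of calibration drifts

def expected_annual_cost(interval_months: float) -> float:
    """PM cost scales with frequency; risk cost scales with the average
    time a drift can go undetected (half the interval, on average)."""
    events_per_year = 12.0 / interval_months
    pm_cost = events_per_year * PM_COST
    # expected exposure: drifts/year * average detection lag (in years)
    risk_cost = DRIFTS_PER_YEAR * (interval_months / 2 / 12) * MISS_COST
    return pm_cost + risk_cost

best = min([1, 2, 3, 6, 12, 24], key=expected_annual_cost)
print(best)
```

With these placeholder numbers a 2-month interval minimizes expected cost; change the assumptions and the answer moves, which is exactly the tradeoff described above.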
Best regards,
Eric
November 23, 2020 at 9:10 am #250994
Eric Maass (Participant)
Hi Anonymous,
Yes, the best way to learn is to learn by doing – applying your new Six Sigma skills.
You could approach some managers at the local manufacturing plants, supermarkets, warehouses in your country and offer to work with them on a voluntary basis.
I can try to mentor you – you can contact me at [email protected]
Best regards,
Eric
June 30, 2020 at 10:19 am #248690
Eric Maass (Participant)
Buenos días, Carlos! Welcome to iSixSigma – I hope you find lots of interesting information here and that you get a chance to exchange ideas with others on similar and diverging journeys!
June 30, 2020 at 7:42 am #248689
Eric Maass (Participant)
Hi Techyanuj,
Here is an isixsigma.com summary that you might find helpful:
https://www.isixsigma.com/newtosixsigma/designforsixsigmadfss/dmaicversusdmadv/
Best regards,
Eric
December 30, 2012 at 6:32 pm #194517
Eric Maass (Participant)
Elena,
I suggest you click on “terms” and get rid of at least one of the highest order interactions, like “ABC” .
When you reduce the terms by taking a term out of the analysis, the degrees of freedom available for the error terms will increase and you should be able to see p values.
Also, you might want to click on the graphs button and select the Pareto diagram of effects.
December 30, 2012 at 4:29 am #194514
Eric Maass (Participant)
Hi Joppie,
The equation, r = ±1.96 sqrt(STDev), seems to use 1.96 for the 95% confidence interval that should include the mean measurement value. The term “repeatability” actually refers to the standard deviation itself.
There are two “sample sizes” involved in estimating repeatability: the number of units to be repeatedly tested, and the number of times you repeat the testing for each unit.
Standard practice is based on the Automotive Industry Action Group (AIAG), and often uses about 10 units or samples that are tested 2 or 3 times repeatedly.
Don Wheeler was very critical of the AIAG approach, so you might want to read his article, “An Honest Gauge R&R Study”: http://spcpress.com/pdf/DJW189.pdf
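For a feel of the mechanics, repeatability can be estimated as the pooled within-part standard deviation from an AIAG-style study. A Python sketch with simulated measurements – the 10 parts × 3 trials layout follows the standard practice above, everything else is made up:

```python
import random
import statistics as stats

random.seed(1)
# Hypothetical gauge study layout: 10 parts, each measured 3 times.
parts = [10.0 + i for i in range(10)]
measurements = [[p + random.gauss(0, 0.2) for _ in range(3)] for p in parts]

# Repeatability: pooled within-part standard deviation of the repeated readings
within_variances = [stats.variance(trials) for trials in measurements]
repeatability = (sum(within_variances) / len(within_variances)) ** 0.5

# ±1.96 * repeatability then brackets roughly 95% of single readings
# around a part's mean (assuming normally distributed measurement error)
half_width = 1.96 * repeatability
```

The simulated measurement error here is 0.2, so the pooled estimate should come out near that value; with only 2 degrees of freedom per part, pooling across parts is what makes the estimate usable.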
Best regards,
Eric

December 30, 2012 at 4:19 am #194513
Eric Maass (Participant)
I skipped a step in sharing the search…
Here is the reference that the anonymous comment said was in error:
Schmidt, S.R. and R.G. Launsby, Understanding Industrial Designed Experiments, Air Academy Press, Colorado Springs, CO, 1997.
You could also get that from amazon.com:
http://www.amazon.com/UnderstandingIndustrialDesignedExperiments4th/dp/1880156032
So, I guess you could also check out that book, and see if the comment about an error was itself in error.
Alternatively – since Schmidt apparently developed the formula for Air Academy, I guess you could contact Air Academy itself and ask for their help on this:
http://www.airacad.com/, 1 (800) 748-1277
December 30, 2012 at 4:09 am #194512
Eric Maass (Participant)
Hi Luis,
Well, I checked out the articles from Quality Progress in 1997
( http://asq.org/qualityprogress/pastissues/index.html?fromYYYY=1997 ) and did not find the referenced article. So, continuing on as a nerdy Sherlock Holmes, I found this comment:
QUALITY DIGEST – SAT, 06/05/2004: “Hi, I noticed that you reference Schmidt & Launsby 1997 on pg 189 of the above text. I believe the reference should be Schmidt et al’s Basic Stats book and not Understanding Industrial DOE?”
I think this is the Basic Statistics book that is mentioned:
Having said that…I have to admit, I still don’t understand why people would use an equation to approximately estimate a sigma level using square roots and natural logarithms, which a person would enter into Excel, when Excel already has functions that will make the calculation directly. I guess I must be missing something.
Anyway, have a very Happy New Year!
Best regards,
Eric

December 26, 2012 at 4:24 pm #194505
Eric Maass (Participant)
I’ve recently seen some good results combining Value Stream Mapping and Discrete Event Simulation using Simul8.
December 26, 2012 at 4:20 pm #194504
Eric Maass (Participant)
Hello, Luis,
I have never used that formula, preferring to use Excel’s built-in functions for cumulative normal distributions…but this is referred to as:
Schmidt S. and Launsby, Quality Progress, 1997
Best regards,
Eric

December 25, 2012 at 1:46 pm #194502
Eric Maass (Participant)
Cathy,
There is no accrediting organization for six sigma / lean six sigma training providers.
The relevant question might be: after you are certified, will your certification be accepted and will you be considered credible as a certified green belt or black belt or whatever.
Now, I’m a certified Black Belt and Master Black Belt, from the company that originated Six Sigma, and I still have had to answer a series of technical questions in interviews before I was considered credible…so please don’t be surprised if you need to quell doubts and uncertainty about your abilities, no matter where and who and how you are certified.
Good luck in your journey! I believe you will find it worthwhile…and have a very Merry Christmas!
Best regards,
Eric

December 25, 2012 at 1:37 pm #194501
Eric Maass (Participant)
Valid,
What are you trying to mistake-proof: the tendency to have missing information in the first place, prompting the follow-up…or something in the follow-up procedure?
Best regards,
Eric

August 12, 2012 at 1:04 pm #193901
Eric Maass (Participant)
Hi Roger,
It sounds like you were given a set of individual cycle time measurements…and they want you to estimate the cycle time 80% of the time…which is a rather vague request.
If they wanted you to tell them a value that 80% of the cycle time values are less than, then you would want to sort the data, and find the value that 80% are less than.
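If it’s the percentile interpretation, the calculation is a one-liner; here is a quick Python sketch with hypothetical cycle times, showing both the plain empirical percentile and, for comparison, a normal-theory estimate:

```python
import statistics as stats

# Hypothetical cycle times in minutes (not Roger's actual data)
times = [12, 15, 9, 22, 18, 14, 11, 25, 16, 13,
         19, 17, 10, 21, 15, 14, 20, 12, 16, 18]

# Interpretation 1: the empirical value that 80% of observations fall at or below
empirical_p80 = sorted(times)[int(0.8 * len(times)) - 1]  # simple rank method

# Interpretation 2: assume normality and use mean + z * stdev
# (z ≈ 0.8416 is the standard normal 80th percentile)
mean, sd = stats.mean(times), stats.stdev(times)
normal_p80 = mean + 0.8416 * sd
```

With a real data set, comparing the two numbers is also a quick sanity check on the normality assumption: if they disagree badly, the distribution is probably skewed.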
If they wanted you to assume that cycle times follow a normal distribution, you would use this sample of cycle times to come up with a sample mean and sample standard deviation, and use the normal distribution to estimate the value that 80% of the cycle times are less than.

August 12, 2012 at 12:59 pm #193900
Eric Maass (Participant)
Hi Miguel,
Can you clarify your request a bit?
LSC and USC – I assume you mean Lower Spec Limit and Upper Spec Limit?
Are you measuring yield for a continuous parameter? If so, what are the limits for that continuous parameter?
Or, is Yield itself the parameter you want to set spec limits for?

July 6, 2012 at 5:50 pm #193689
Eric Maass (Participant)
Thanks, John! You have an amazing breadth and depth of experience with developing great Excel add-ins and tools for Six Sigma and statistical efforts!
Best regards,
Eric

March 13, 2012 at 10:05 pm #192575
Eric Maass (Participant)
Several aspects, largely financial:
1) According to research, it takes 10 to 20 times as much investment to gain a new customer as to retain an existing customer – it makes business sense to do your best to retain the existing customer base. Research also shows that the major cause of customer defections is loss of confidence in the quality/reliability of the products.
2) Pulling historical information of the stock price or market valuation before and after product recalls or similar events, and coming up with the average loss of stock price/valuation per incident….and then discussing the possibility of permanent loss of the value of the brand name.
3) Contrasting the negative impact of quality/reliability issues with the favorable impact of strong and visible quality and reliability focus, as with some Japanese car companies.
4) Invoking some concepts from Crosby’s “Quality is Free”.
5) Helping the executive(s) to see the impact they will have, and the legacy they will leave behind, if they champion and drive a highly visible, highly successful quality improvement program.
Best regards,
Eric

March 7, 2012 at 7:50 pm #192491
Eric Maass (Participant)
Mike,
Oops!! I missed the sarcasm…I guess you just come across as such a nice guy, it didn’t occur to me that you were using sarcasm. My bad! :)
Best regards,
Eric

March 7, 2012 at 6:31 pm #192488
Eric Maass (Participant)
Mike,
You are correct…
Thomas Edison could not have been a TRIZ user. Edison passed away in 1931. TRIZ was invented in Russia, starting around 1946… shortly thereafter, Josef Stalin sent the inventor of TRIZ (Dr. Altshuller) to Siberia….
Dr. Altshuller’s research of patents, which led to the TRIZ approach, would very likely have included many of Edison’s patents.
Best regards,
Eric

March 6, 2012 at 4:04 am #192452
Eric Maass (Participant)
Ah…I finally was able to log in under my former username – Yea!
Samantha – I think MBBinWi was right about Concurrent Engineering – you can research it, but it likely won’t fit in well with the overall theme of your research paper.
The overlap between DFSS and Systems Engineering starts at the front end – understanding the customer requirements, transforming those customer requirements into measurable requirements, and flowing them down through the system. DFSS methods can help the development team have confidence that the measurable requirements will be met over the range of uncertainties due to variations in manufacturing and usage or application. If you read about Systems Engineering, and then read about DFSS, you can build on this brief summary and develop a clear and strong paper on the topic.
Best regards,
Eric

December 27, 2011 at 11:38 pm #191717
Eric Maass (Participant)
Hi Colin,
I’d suggest trying Excel.
1) Copy and paste the columns of numbers into Excel
2) You can use the Find/Replace command to replace all instances of “+” (perhaps with the space before and after) with nothing (“”)
3) You can use the Find/Replace command to replace all instances of ” (” [space followed by left parenthesis] with nothing “”, and similarly replace all instances of “) ” [right parenthesis followed by space] with nothing “”
4) Review the columns to see if you must make additional corrections
5) You can try sorting each column in ascending or descending order to see the matches and gaps.
6) You can copy/paste into Minitab…but you may need to convert the type of one of the columns from text type to numeric type.
I hope this helps!
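If the Find/Replace steps get tedious in Excel, the same cleanup can be scripted; a small Python sketch with hypothetical pasted lines (the "+" and parenthesis characters match the steps above, the data is made up):

```python
# Hypothetical pasted lines containing stray "+" signs and parentheses
raw = ["1234 + (56)", " (789) + 1011 ", "+ 1213"]

cleaned = []
for line in raw:
    # Steps 2 and 3: strip "+", "(", and ")" characters
    s = line.replace("+", "").replace("(", "").replace(")", "")
    s = " ".join(s.split())  # collapse the leftover runs of spaces
    cleaned.append(s)

# Step 5: pull every number out and sort, to spot matches and gaps
numbers = sorted(int(tok) for s in cleaned for tok in s.split())
print(numbers)
```

The sorted list plays the role of step 5: once the values are numeric and ordered, gaps and mismatches between the two columns stand out immediately.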
Happy New Year!
Best regards,
Eric

November 13, 2011 at 1:16 pm #191708
Eric Maass (Participant)
Mike,
Welcome back!
Thanks for already setting an atmosphere of responsiveness, of asking for the Voice of the Customer!
Best regards,
Eric

June 5, 2011 at 2:05 pm #191549
Eric Maass (Participant)
Hello Gripper,
Thanks for the challenging request.
Some suggestions:
1) First, do your best to clarify with the executives / management what you are really trying to optimize, and how to measure it. How can you measure “effectiveness”?
“Costs” sounds clear, but how will they know that you have optimized “costs”?
Mostly, the type of question you might ask of the primary driver and his/her finance or accounting manager is something like, “One year or two years from now, what financial metrics would you look at to determine if we were successful in this effort?”
2) Try to obtain a clear idea of what the cross-functional teams are intended to accomplish… Are these proposed cross-functional teams temporary, to solve certain problems or accomplish certain short-term goals, or are they groupings for a new organizational structure on a longer-term basis?
3) If you can, start from the objectives/goals, and work backwards – what types of resources are necessary and sufficient to achieve each objective/goal? Determine if these resources are needed full time or part time for achieving each objective/goal.
4) Create a matrix of resource types and objectives. Include the part time/full time aspect in each cell of the matrix.
5a) If you see some common types of resources that are needed part time in support of several objectives, perhaps they should become a centralized group that is called upon.
5b) If you see some common types of resources that are needed full time in support of an objective, then they should probably be considered as the core of a cross functional team.
6) Determine and summarize the resource gaps, and the situations where there aren’t enough resources to support the needs, and provide this summary to the executive along with your suggestions on the organizational structure.
Best regards,
Eric

February 24, 2011 at 10:06 am #191293
Eric Maass (Participant)
Hi Michelle,
One way to handle this is to extract the time for each sequence of 1’s bookended by 0’s before and after. So, for the short set of data you provided, if I understand correctly, it seems that the equipment started working sometime between 9:03 and 9:06 AM, and stopped cutting material sometime between 9:15 AM and 9:18 AM.
In the reliability evaluation world, this would be referred to as interval censored data.
In terms of your efforts, this sequence shows a processing time that is most likely around 12 minutes, but with a range between 9 minutes and 15 minutes.
Can you extract these numbers for the full set of data you have?
If so, you can probably estimate the overall mean from the data, but estimating the standard deviation would probably be best done using a technique from the field of digital signal processing / digital filtering – a method called dithering. If the data is normally distributed, those steps would probably be sufficient. If the data is not normally distributed, you would want to try to identify the type of distribution – I’d probably suggest using Crystal Ball (an Excel add-in) for that purpose.
How about this – if you can extract the most likely times from the data following the example I did at the start of this, and if you end up with at least 50 values that vary a fair amount, you can email me that data, and I’ll show you how to analyze it step by step.
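For anyone wanting to automate the extraction step, here is a small Python sketch; the timestamps follow the 9:03/9:06/9:15/9:18 example above, while the function itself is a hypothetical helper, not anything from the original data set:

```python
from datetime import datetime

def interval_estimate(last_off, first_on, last_on, first_off):
    """Return (min, most likely, max) run duration in minutes for one
    sequence of 1's, given the readings that bracket its start and stop."""
    t = [datetime.strptime(x, "%H:%M")
         for x in (last_off, first_on, last_on, first_off)]
    shortest = (t[2] - t[1]).total_seconds() / 60  # latest start, earliest stop
    longest = (t[3] - t[0]).total_seconds() / 60   # earliest start, latest stop
    return shortest, (shortest + longest) / 2, longest

print(interval_estimate("9:03", "9:06", "9:15", "9:18"))
```

For the example in the post this returns (9.0, 12.0, 15.0): most likely around 12 minutes, with a range of 9 to 15 minutes, exactly the interval-censored reading described above.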
Best regards,
Eric Maass
Master Black Belt and Senior Program Manager, DFSS / DRM
Medtronic
[email protected]

January 17, 2011 at 1:30 pm #191152
Eric Maass (Participant)
Hi Mamoon, and welcome to the iSixSigma Forum!
Isn’t it great to be open to learning new things? I hope you learn many new approaches and ideas and perspectives that help you grow and progress.
Best regards,
Eric

January 17, 2011 at 1:25 pm #191151
Eric Maass (Participant)
Hello Karma,
One of the main intentions of Statistical Control Charts is to detect the difference between common cause variation and special cause variation. As part of that, the control limits and rules are intended to prevent overreaction. In fact, the rules and control limits for many control charts have a relatively low alpha risk (the risk of overreaction) and a corresponding beta risk (the risk of missing a drift) that many find surprisingly high.
An alternative control chart you could consider would be to use an EWMA chart (Exponentially Weighted Moving Average). EWMA uses an approach that is related to time series modeling, in which you use the historical trend to forecast the next point – and see if the process is likely to drift beyond the control limits in the near future.
Here are a few links to learn more about EWMA as a possible alternative:
http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/ewma.htm
http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc324.htm
http://en.wikipedia.org/wiki/EWMA_chart
http://www.qualityamerica.com/knowledgecente/knowctrWhen_to_Use_an_EWMA_Chart.htm
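For readers who want to experiment before committing to software, the EWMA statistic itself is a two-line recursion; a minimal Python sketch (λ = 0.2 is a common default, and the usual control limits sit at ±3σ·sqrt(λ/(2−λ)) around the target – the data below is made up):

```python
def ewma(data, lam=0.2, start=None):
    """EWMA control statistic: z_i = lam * x_i + (1 - lam) * z_(i-1).
    Small lam weights history heavily, which helps flag slow drifts."""
    z = data[0] if start is None else start
    out = []
    for x in data:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

# A gentle upward drift that single-point Shewhart rules might not catch quickly
values = [10.0, 10.1, 10.0, 10.2, 10.3, 10.4, 10.5, 10.6]
stats_seq = ewma(values, lam=0.2, start=10.0)
```

Because each point carries forward a weighted memory of the history, the EWMA statistic climbs steadily on drifting data even when no single raw value looks alarming on its own.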
Best regards,
Eric Maass

January 13, 2011 at 3:00 am #191127
Eric Maass (Participant)
Hi Karma,
Well, one alternative that was developed for group decision making is Accord’s Robust Decision software package:
http://www.robustdecisions.com/decisionmakingsoftware/
Best regards,
Eric
P.S. No, I’m not a salesman for Accord…they demonstrated their product to me several years ago, and I’ve kept it in the back of my mind for situations where group decision making might require a tool beyond the Pugh Matrix approach.
January 1, 2011 at 11:58 am #191102
Eric Maass (Participant)
Hi rkoganti,
Some or all of the simulation software programs you mentioned are for Discrete Event Simulation, which is a bit different from Monte Carlo Simulation.
In Discrete Event Simulation (D.E.S.), you model the process as individual steps linked together, set up a statistical distribution for the arrival rate (how often new material enters the process flow) and statistical distributions for each of the steps in the process. If you enter a capacity limit on a piece of equipment, then the simulation will show material stacking up as WIP before that equipment. You can also model the equipment occasionally going down, either intentionally (scheduled equipment maintenance) or unintentionally (equipment failure, time to repair).
Of the software packages you mentioned, I’ve heard favorable things about Simul8 in terms of both its capabilities and its cost. Here is a website with some information on some of these D.E.S. programs:
http://www.idsia.ch/~andrea/sim/simvis.html
In Monte Carlo Simulation, you set up an equation or series of equations intended to represent the process or situation you would like to model, and you set up statistical distributions for some or all of the inputs into the equation(s). Monte Carlo simulation is more general purpose than D.E.S., but isn’t as easily applied for situations like a process with one or more equipment constraints.
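As a minimal illustration of the Monte Carlo side (not of D.E.S.), here is a sketch in Python; the two input distributions and their parameters are made-up placeholders:

```python
import random

random.seed(42)

# Hypothetical model: total process time = setup + run,
# with each input assigned its own statistical distribution
def one_trial():
    setup = random.uniform(5, 10)  # setup time, minutes
    run = random.gauss(30, 4)      # run time, minutes
    return setup + run

trials = [one_trial() for _ in range(100_000)]
mean_time = sum(trials) / len(trials)
p95 = sorted(trials)[int(0.95 * len(trials))]
```

The output here is a whole distribution of total times, from which any summary (mean, 95th percentile, fraction beyond a limit) can be read off – that is the general-purpose quality mentioned above, with no queueing or capacity logic involved.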
Happy New Year!
Best regards,
Eric Maass

August 14, 2010 at 6:53 pm #190622
Eric Maass (Participant)
Some ideas:
1) Can you switch your frequency from monthly to weekly? Almost magically, that could change your number of values from 7 to more than 25.
2) If you must stay monthly, can you go back further in time – so, about 2 years?
3) If you must stay monthly and must stay with 7 values, you can come up with a baseline number – realizing that it involves considerable uncertainty. With 7 data points, you won’t have much clarity as to the shape of the distribution (normal, lognormal, ….), but you can come up with confidence intervals for the mean and standard deviation. If you use Minitab or SigmaXL or such, just use the Graphical Summary option and it will provide those confidence intervals that you can use as your baseline.
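As an illustration of option 3, here is a Python sketch with seven hypothetical monthly values; the t and chi-square critical values for df = 6 are hard-coded from standard tables rather than computed, so they only apply to n = 7:

```python
import statistics as stats

# Seven hypothetical monthly values (placeholders, not real data)
data = [102, 98, 110, 95, 104, 99, 107]
n = len(data)
mean, sd = stats.mean(data), stats.stdev(data)

# 95% critical values for df = n - 1 = 6, from standard tables:
T_CRIT = 2.447                    # t(0.975, 6)
CHI2_LO, CHI2_HI = 1.237, 14.449  # chi-square(0.025, 6) and chi-square(0.975, 6)

# Confidence interval for the mean
mean_ci = (mean - T_CRIT * sd / n ** 0.5, mean + T_CRIT * sd / n ** 0.5)
# Confidence interval for the standard deviation
sd_ci = (((n - 1) * sd ** 2 / CHI2_HI) ** 0.5,
         ((n - 1) * sd ** 2 / CHI2_LO) ** 0.5)
```

Notice how wide the standard deviation interval comes out with only 6 degrees of freedom – that width is the "considerable uncertainty" referred to above, and it is what the Graphical Summary in Minitab or SigmaXL would report.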
I hope you find this helpful.
Best regards,
Eric Maass, PhD
MBB

January 23, 2010 at 2:04 pm #188613
Eric Maass (Participant)
Mark,
Remi is right. Alternatively, if you are more comfortable with probability theory, you could estimate the yield for each of the 6 characteristics as (1e6 – 600 ppm)/1e6 = 0.9994, and – like Remi said – if you assume they are independent, you could multiply them…or, since each of the 6 characteristics has 600 ppm, you would have [(1e6 – 600 ppm)/1e6]^6 = 0.9994^6 = 0.9964, which corresponds to 3600 ppm (equivalent to Remi’s answer of 600 ppm x 6). There is a slight difference in these two calculations a couple of decimal points further out…but the bigger question is whether the 6 types of failures due to different characteristics are independent.
Best regards,
Eric

January 23, 2010 at 1:48 pm #188612
Eric Maass (Participant)
Madhukar,
I don’t know if this is the problem in your specific case, but it is often difficult to get consistent results with Multiple Linear Regression when the “independent variables” are not independent – that is, when they are correlated. Can you get the matrix of correlation coefficients among the 6 “independent” input parameters?
Best regards,
Eric Maass

September 26, 2009 at 12:34 pm #185751
Eric Maass (Participant)
Hi Smita,
Yield can approach a continuous distribution if the set of units tested is large enough. For example, if you were analyzing yield among 5 units, then the only possible values would be 0%, 20%, 40%, 60%, 80% or 100%. In this instance, you would treat yield as a discrete variable. When I’ve analyzed yield data, each yield value has been among hundreds or thousands of units, and I’ve been able to analyze yield as a continuous variable. In those cases, I’ve found that yield tends to follow a Beta distribution, which is bounded by 0% and 100%.
Best regards,
Eric

September 20, 2009 at 12:59 pm #185555
Eric Maass (Participant)
Eugene,
BTDT gave you good advice, but let me expand on it to help you when you meet with the chemist. If your mixture contains a buffer, then the dependence of the pH on the amount of chemicals like acids or bases in the mixture will be very non-linear. Please visit the url to see the graph:
http://www.files.chem.vt.edu/chem-ed/titration/graphics/titrationstrongacid35ml.gif
You might be able to model the distribution of pH using Monte Carlo simulation, if you get the titration graph as BTDT described from the chemist (perhaps similar to the graph in the url above), and if you make some assumptions about the distributions of the quantity and concentrations of the ingredients to the mixture.
Best regards,
Eric

May 12, 2009 at 2:12 am #184014
Eric Maass (Participant)
DC,
If you can get the sample size up a bit, I think you can optimize using Binary Logistic Regression analysis. You can think of it as moving from 0/1 that you’re seeing now, to analyzing the probability of having a defect, from 0 to 100%. I think you can stay with the CCD (Central Composite Design) for the response surface design, but the analysis will be a bit more involved. You can email me if you’d like some help with trying this approach for the first time: [email protected] .
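To make the idea concrete, here is a rough sketch of fitting a binary logistic model by gradient ascent in plain Python. The data, factor coding, and fitting routine are all hypothetical illustrations, not DC’s actual experiment – in practice Minitab or JMP would do this fitting for you:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=3000):
    """Fit P(defect | x) = 1 / (1 + exp(-(b0 + b1*x))) by gradient ascent."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            b0 += lr * (y - p)
            b1 += lr * (y - p) * x
    return b0, b1

# Hypothetical data collapsed to one coded factor: x = -1 or +1,
# with 2/20 defects at the low setting and 15/20 at the high setting
xs = [-1] * 20 + [1] * 20
ys = [1] * 2 + [0] * 18 + [1] * 15 + [0] * 5

b0, b1 = fit_logistic(xs, ys)
p_low = 1 / (1 + math.exp(-(b0 - b1)))   # predicted P(defect) at x = -1
p_high = 1 / (1 + math.exp(-(b0 + b1)))  # predicted P(defect) at x = +1
```

The fitted curve turns the raw 0/1 outcomes into a smooth defect probability from 0 to 100% as a function of the factor setting, which is exactly what makes the response surface optimizable.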
Best regards,
Eric Maass

April 29, 2009 at 11:13 am #183764
Eric Maass (Participant)
Hi Toby_Mac,
Okay, I’ll try to brainstorm a bit with you…but first, let’s analyze the situation. Poka Yoke / Mistake Proofing methods often involve Contact, Counting, or Motion-Sequence, and can be manual or automatic. It sounds like cost constraints may limit you to considering manual methods only. The contact approaches include sensing – but the only thing you seem to have to sense is color of the thread. So, if this analysis is correct, then you are looking for a manual Poka Yoke method involving sensing color.
Assuming that’s correct, then one suggestion for the second error, wrong colors in the embroidery machine during set up, would be to have the appropriate color on the spindle or head so that the operator needs to visually match the head color with the thread color when placing the colors in the embroidery machine. You could put a small piece or sample of the correct thread on each spindle or head, and have them put the new thread next to that sample, verifying the color match as part of the set up procedure.
In terms of the other cause, the operator entering the wrong thread color number, perhaps you can adopt the “buddy system” that software people use, where a buddy double checks the set up, double checks that the right numbers are entered (and perhaps also double checks that the right color threads are on the right heads) before the operator begins embroidering.
I hope this helps you and your team (or just you if you’re a team of one, or perhaps others in this forum as a virtual team) start off the brainstorming.
Best regards,
Eric Maass, Master Black Belt

April 25, 2009 at 5:23 am #183700
Eric Maass (Participant)
Hi Jorge,
I went ahead and entered your data into Minitab and ran its Linearity and Bias study (open Minitab or download a 30-day free trial from http://www.minitab.com; enter the data in the worksheet; use the pulldown menu Stat / Quality Tools / Gage Study / Gage Linearity and Bias Study). I saved the output graph as a gif file and uploaded it here:
http://www.geocities.com/poetengineer/linearitygraphforjorge.gif
I think the output is selfexplanatory, but feel free to email me at [email protected] if you have some follow up questions.
Best regards,
Eric Maass, Master Black Belt

April 20, 2009 at 9:51 am #183524
Eric Maass (Participant)
Gary,
Thanks! Very helpful additions and clarifications!
Best regards,
Eric

April 19, 2009 at 12:17 pm #183520
Eric Maass (Participant)
Hi Ashok,
Yes, I can send you examples of the P-Diagram, and also an example of an application of Monte Carlo Simulation – please email me at [email protected] and I’ll attach those with my reply.
Have a wonderful weekend!
Best regards,
Eric Maass, PhD, Master Black Belt

April 19, 2009 at 1:03 am #183516
Eric Maass (Participant)
Hello Ashok,
I’m not sure I fully understand your situation or your question, but I’ll try to provide some suggestions, as you have asked.
First off, it sounds like – in your instance – the appropriate target may not be the center of the tolerance band. I would suggest that you establish a new target value. You may be shooting for the higher side of the tolerance, but this target should not be based on someone’s “gut feel” or “best guess”; it should be calculated based on the change you expect to see over the life cycle, based on analysis of historical data (preferably combined with any published equations or models for that change over time).
I would also suggest that you use either a long-term Cpk or a Z-value in addition to the Taguchi Loss Function. Basically, you will want to know what percent of the distribution will lie outside of the acceptable limits short term and longer term – a Cpk or a Z-value can help you with this, although I think that you can do even better by developing a mathematical model and using Monte Carlo Simulation.
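For reference, the Cpk and Z-value calculations themselves are one-liners; a Python sketch with made-up numbers for a dimension deliberately targeted high in the band:

```python
def cpk(mean, sd, lsl, usl):
    """Process capability relative to the nearer spec limit."""
    return min(usl - mean, mean - lsl) / (3 * sd)

def z_value(mean, sd, lsl, usl):
    """Distance from the mean to the nearest spec limit, in standard deviations."""
    return min(usl - mean, mean - lsl) / sd

# Hypothetical machined dimension, deliberately targeted high to allow for wear
mean, sd = 10.6, 0.1
lsl, usl = 10.0, 11.0
print(cpk(mean, sd, lsl, usl), z_value(mean, sd, lsl, usl))
```

Both metrics use only the nearer spec limit, so shifting the target toward the upper limit to buy wear margin directly costs capability on that side – the calculation makes the tradeoff explicit.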
Lastly, I’d also suggest doing a P-diagram for this machining process and this parameter. The wear or change over time would be one of the noise factors – but you may discover other important noise factors to consider, and other control factors that might help you improve the results.
Good luck!
Best regards,
Eric Maass, PhD, Master Black Belt

April 2, 2009 at 2:45 am #183068
Eric Maass (Participant)
Newbie,
I recently was asked a similar question, and gave a presentation on DOE with an attribute response to some Motorola engineers in Malaysia. If you’d like a copy of that presentation, please email me at [email protected] and I’ll email you the pdf file.
Best regards,
Eric Maass, PhD, Master Black Belt

March 31, 2009 at 11:13 am #182995
Eric Maass (Participant)
Hello David,
I’m not sure that I’d refer to it as “pseudo continuous data”, but ordinal data has often been analyzed as if it were continuous. Ordinal data tends to take integer values, but where there is a direction to the values. For example, results from surveys using Likert scales: asking someone how happy they are with their current job, with 7 being thrilled beyond belief, 4 being just okay with it, and 1 being down in the dumps and ready to quit…or something like that. Other examples of ordinal data include drops to failure – the number of times you can drop a cell phone, a watch, or an iPod until it stops functioning to your requirements. Obviously, you can’t drop a cell phone 3.64 times…
Anyway, ordinal data is often analyzed using statistical analyses usually appropriate for continuous data – like using a t-test to compare the number of drops to failure using design A or design B, or using ANOVA to compare the Likert survey results for job satisfaction for people on first shift, second shift and third shift.
I hope this helps.
Best regards,
Eric Maass, PhD, Master Black Belt

March 16, 2009 at 12:07 am #65254
Eric MaassParticipant@poetengineer Include @poetengineer in your post and this person will
be notified via email.Hello Kumar,
Yes, I think Monte Carlo simulation will be very helpful, and I have used it for building models for software projects, including modeling the Software testing processes, improving availability, and meeting latency requirements.
Monte Carlo Simulation was developed as part of the Manhattan Project during World War II. You set up a mathematical model for the situation, perhaps starting with a simple model if you prefer, then allow the input variables to vary, and observe and analyze the distribution of the outputs or responses. You can try it out using Crystal Ball, from Oracle: http://www.oracle.com/crystalball/index.html – they will give you a 30-day free trial. I have some examples and some introductory material on using Crystal Ball for Monte Carlo Simulation, if you’d like to email me at [email protected] .
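If you’d like to experiment before installing a tool like Crystal Ball, the core idea fits in a few lines of Python. This sketch models a software project’s duration as three phases, each drawn from a triangular distribution – all phase names, durations, and the 85-day commitment are invented for illustration:

```python
import random
from statistics import mean, stdev

random.seed(42)  # reproducible runs

def one_trial():
    """One simulated project: sum of three phase durations (days).
    random.triangular takes (low, high, mode)."""
    design  = random.triangular(10, 30, 15)
    coding  = random.triangular(20, 60, 35)
    testing = random.triangular(15, 45, 25)
    return design + coding + testing

totals = [one_trial() for _ in range(10_000)]
print(f"mean = {mean(totals):.1f} days, sd = {stdev(totals):.1f} days")

# Risk of missing a hypothetical 85-day commitment
risk = sum(t > 85 for t in totals) / len(totals)
```

Looking at the whole distribution of `totals` (not just the mean) is exactly what makes Monte Carlo useful for schedule and latency questions.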
Best regards,
Eric Maass, PhD
Master Black Belt

February 8, 2009 at 1:08 pm #180830
Eric Maass, Participant

Les,
Since you chose to address me in an earlier message, I thought I’d respond to you in this same thread of messages.
Many of us come into this forum with the intention of seeing if we can help other people. In my case, if I see that someone else has adequately answered the question or addressed the issue and I have nothing of value to add, I will go on to the next question.
The person who asks the question is generally an intelligent adult, and fully able to determine whether the responses they have received are applicable and helpful, or not.
I enjoy helping other people, and I tend to ignore comments unless I find them value added. I have learned a lot from some of the other people who have posted thoughts and perspectives in this forum.
My one suggestion for you is that you take a moment to contemplate why you feel the need to judge whether other people are being helpful or not, or to make any other judgements, and whether – in so doing – you are adding value. If it benefits you in some way, great.
If not, perhaps you might want to think about the role you want to play in this forum. It seems that you are highly intelligent, and have a wide ranging background, and I would really enjoy hearing some of your thoughts and ideas. I think you have a lot you can contribute.
Best regards,
Eric Maass, PhD and Master Black Belt

February 8, 2009 at 12:22 pm #180828
Eric Maass, Participant

Tig,
This is what is referred to as a two-sided confidence interval, or setting up for a two-sided test.
It could be referred to as a two-sided, 90% confidence interval, in that you have 90% confidence that the population parameter lies within the interval, with the remaining risk split as .05 on the left side and .05 on the right side.
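A quick numeric sketch of such a two-sided 90% interval for a mean, using the large-sample z value of about 1.645 and made-up measurement data:

```python
import math
from statistics import mean, stdev

# Hypothetical measurements, repeated to give a sample of 50
data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3, 10.1, 9.9] * 5

z = 1.645  # z for 5% risk in each tail -> 90% two-sided confidence
m, s, n = mean(data), stdev(data), len(data)
half_width = z * s / math.sqrt(n)
lo, hi = m - half_width, m + half_width
print(f"90% CI: ({lo:.3f}, {hi:.3f})")
```

With a small sample you would swap the z value for the corresponding t value, which widens the interval a bit.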
I hope that helps clarify things for you.
Best regards,
Eric Maass, PhD
Master Black Belt

February 7, 2009 at 1:57 pm #180806
Eric Maass, Participant

Hi Susan,
You probably know that Lean is based on approaches at Toyota.
About a year ago, Decisioneering held a conference in Denver, and one of the presentations was on the Strategic Deployment approach used at Toyota.
I have a copy of that presentation, if you’d like to email me at [email protected] .
Best regards,
Eric Maass, PhD
Director and Lead Master Black Belt, DFSS
Motorola

February 3, 2009 at 5:15 am #180499
Eric Maass, Participant

John,
I’m not sure I understand what you are working on – in particular, I am confused by the term “census” in your request.
Anyway, at some risk since I don’t really understand your request, I’ll stick my neck out a bit and say that it sounds like a Pareto Chart might be appropriate for breaking down the census disconnect by reason.
It is rather easy to do a Pareto Chart using Excel, even without a template. But, there are templates you can download, including:
http://www.qimacros.com/qiwizard/paretocharttemplate.html
http://www.scribd.com/doc/2629544/03ParetoCharts
or
http://office.microsoft.com/enus/templates/TC060827611033.aspx?pid=CT101443491033
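If a spreadsheet template isn’t handy, the same Pareto tabulation is a few lines of Python – the disconnect reasons and counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical disconnect records, each tagged by reason
reasons = (["no answer"] * 42 + ["wrong number"] * 21 + ["line busy"] * 13
           + ["dropped call"] * 6 + ["other"] * 3)

counts = Counter(reasons).most_common()   # sorted, largest count first
total = sum(n for _, n in counts)

cum = 0
for reason, n in counts:
    cum += n
    print(f"{reason:15s} {n:4d}  {100 * cum / total:5.1f}% cumulative")
```

The sorted bars plus the cumulative-percent column are all a Pareto chart really is; charting them is then straightforward in any plotting tool.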
With a quick search, I also found an explanation of how to do it with Excel’s built-in tools:

How do I do the pareto charts? – Shane Devenshire
26-Jan-07 04:35:00
The easiest way to create a pareto chart is to select your data and choose
the command Tools, Data Analysis, Histogram, and check the Pareto and Chart
options after you fill out the top of the screen.

The Data Analysis command is part of the Analysis ToolPak; to install that,
choose Tools, Add-ins and check Analysis ToolPak (the VBA one is not necessary).
—
Cheers,
Shane Devenshire
================================================

Best regards,
Eric Maass, PhD
Master Black Belt

February 3, 2009 at 5:02 am #180498
Eric Maass, Participant

Sam,
I have something that might be helpful – not exactly a charter, but a diagram of what is needed, along with a plan of the first 30, 60 and 90 days.
Please feel free to email me at [email protected] .
Best regards,
Eric Maass, PhD
Master Black Belt, Motorola, Inc.

January 29, 2009 at 8:24 pm #180348
Eric Maass, Participant

ACJeronimo,
Do you mean VOC (Voice of Customers)?
If so, yes, we have gathered VOC from corporate customers, including in the mobile telecom business.
Best regards,
Eric

January 29, 2009 at 11:47 am #180313
Eric Maass, Participant

Hello Don,
If you have some statistical software package, you might be able to use some features built in for this.
For example, if you have Minitab, you can pull down the menu Stat > Reliability/Survival > Test Plans > Accelerated Life Testing.
I’d suggest you then click “Help”, and then click on one or two of their examples.
Now, let me provide a few more suggestions:
1) Try to make the Reliability metric continuous or ordinal rather than discrete (Pass/Fail). For example, you could determine the number of cycles to failure, which is an ordinal metric that you might be able to treat analogously to a continuous metric like time to failure.
2) If you use a continuous or ordinal metric, you might be able to use a Weibull Distribution. Your statistical software package might be able to help you identify the distribution – whether it might be better to use a lognormal rather than Weibull distribution, for example.
3) You’ll also need to either have an acceleration factor for your accelerated test, or be able to determine it based on a model for the acceleration (for example, you might use an Arrhenius-based model for accelerated testing at elevated temperature).
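As a sketch of items 2) and 3) combined – assuming the SciPy library is available – you could fit a Weibull to cycles-to-failure data and then scale by an acceleration factor. All numbers here, including the acceleration factor of 8, are hypothetical:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated cycles-to-failure from an accelerated test (Weibull, shape 2)
cycles = stats.weibull_min.rvs(c=2.0, scale=500, size=200, random_state=rng)

# Fit shape and scale, with location fixed at 0 (the usual reliability convention)
shape, loc, scale = stats.weibull_min.fit(cycles, floc=0)
print(f"fitted shape = {shape:.2f}, characteristic life = {scale:.0f} cycles")

# Translate accelerated-test life to use conditions via an acceleration factor
AF = 8.0                      # hypothetical, e.g. from an Arrhenius model
field_scale = scale * AF      # estimated characteristic life in the field
```

Comparing this fit against a lognormal fit (e.g. via log-likelihood or probability plots) addresses the distribution-choice question in item 2).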
If you need more help, you can contact me: [email protected]

Best regards,
Eric Maass, PhD
Lead Master Black Belt, Motorola

January 26, 2009 at 2:23 am #180189
Eric Maass, Participant

Vincent,
No, if the Cpk is less than zero, it means that the mean lies outside of the spec window – that is, either the mean is below the Lower Spec Limit, or above the Upper Spec Limit.
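A small Python illustration of why Cpk goes negative exactly when the mean leaves the spec window (the data values are made up):

```python
from statistics import mean, stdev

def cpk(data, lsl, usl):
    """Cpk = min(USL - mean, mean - LSL) / (3 * sigma)."""
    m, s = mean(data), stdev(data)
    return min(usl - m, m - lsl) / (3 * s)

# Process centered inside the specs (9.0 to 11.0): Cpk is positive
inside = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
# Process mean well above the USL: Cpk goes negative
outside = [12.6, 12.8, 12.4, 12.7, 12.5, 12.6, 12.9, 12.3]

print(cpk(inside, 9.0, 11.0))    # positive
print(cpk(outside, 9.0, 11.0))   # negative: mean is outside the spec window
```

Since the numerator is the distance from the mean to the *nearer* spec limit, it flips sign the moment the mean crosses either limit, while the 3-sigma denominator stays positive.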
Best regards,
Eric Maass
Master Black Belt
Motorola, Inc

January 20, 2009 at 8:18 am #179939
Eric Maass, Participant

Hello, Troubled Mind,
Since you are using @Risk, I’m guessing that you are using it for Monte Carlo simulation in the Excel environment… am I close?

The issue seems to be that you are using a normal distribution – and, since a normal distribution extends toward infinity in both the positive and negative directions, the Monte Carlo simulation is occasionally selecting negative values for the cost. So, yes, one alternative is to use a distribution that does not go negative, and the lognormal distribution might be a very good choice. Do you have historical information on costs? If so, perhaps you can use @Risk’s features to help identify what type of distribution best fits the data – it may well be that the lognormal distribution is an excellent choice.

With @Risk, I believe it is simply a matter of selecting the lognormal distribution rather than the normal… but that may require you to estimate values for the mean and standard deviation of the lognormal distribution, which may differ from the mean and standard deviation assuming a normal distribution, depending a bit on how @Risk handles it. It’s been a few years since I used @Risk, so I don’t fully recall the input screens.
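That conversion between the arithmetic mean/sd of a lognormal variable and the mu/sigma of its underlying normal (which is what many tools ask for) is a standard bit of algebra; here’s a small Python sketch, with a hypothetical cost of mean 100 and sd 30:

```python
import math

def lognormal_params(mean_x, sd_x):
    """Convert the arithmetic mean and sd of a lognormal variable into
    the mu and sigma of the underlying normal distribution."""
    var_ratio = 1 + (sd_x / mean_x) ** 2
    sigma = math.sqrt(math.log(var_ratio))
    mu = math.log(mean_x) - 0.5 * sigma ** 2
    return mu, sigma

mu, sigma = lognormal_params(100.0, 30.0)   # hypothetical cost figures

# Sanity check: recover the arithmetic mean from mu and sigma
recovered = math.exp(mu + 0.5 * sigma ** 2)
```

Whether your tool wants (mu, sigma) or the arithmetic (mean, sd) directly is worth checking in its documentation, since mixing the two parameterizations is a common mistake.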
I hope this helps.
Best regards,
Eric Maass, PhD
Master Black Belt

January 11, 2009 at 9:42 am #179583
Eric Maass, Participant

Ugendhar,
Where have you heard the term “half fractional full factorial design”? The term seems to have an internal contradiction. It’s almost like saying, “I just ate a full half of a pie”.
Anyway, you can be pretty sure that – wherever you heard the term – they meant a half fraction factorial design.
Best regards,
Eric

January 10, 2009 at 11:22 pm #179578
Eric Maass, Participant

Travis,
Well, in addition to cost savings, how about tracking cycle time for your Lean Six Sigma program? Basically, if you reduce inventory as part of your program, your cycle time should improve, the Lead Time you quote to customers with high confidence should improve, you should be having to rush and expedite less often, and your % On Time Delivery should improve.
So… any subset of those metrics might be very interesting, and together they can paint a picture of how you are doing in satisfying customers in terms of delivery in a reasonable timeframe.
Best regards,
Eric Maass
MBB, Motorola

December 30, 2008 at 1:21 am #179162
Eric Maass, Participant

Laila Tov,
A very helpful and very nice answer – and also, I must say that you chose a most excellent nom de plume…v’erev tov, v’boker tov..
Perhaps Eric Clapton composed his famous song in honor of the night…
Have a very Happy New Year!
Best regards,
Eric

December 29, 2008 at 4:37 am #179134
Eric Maass, Participant

Gabriel – no entiendo su pregunta. ¿Está preguntando lo que puede hacer, una vez que tenga su certificación del cinturón verde?
Sorry, Gabriel – I don’t understand your question. Are you asking what you can do, once you have your Green Belt certification?
Best regards,
Eric Maass

December 29, 2008 at 4:32 am #179133
Eric Maass, Participant

Hi Andy,
Thanks – yes, all is well with me and my family.
However, this has been a tough year for a lot of my friends… let’s all hope that 2009 will be a much better year! (And, being an eternal optimist, I think it will be!)
Best regards,
Eric

December 29, 2008 at 4:30 am #179132
Eric Maass, Participant

Thanks, Mike –
Best wishes for a wonderful New Year!
Best regards,
Eric Maass

December 26, 2008 at 11:08 pm #179105
Eric Maass, Participant

Maer,
From what you have described, I think I would be comfortable in saying that I agree with your perception that the team went a wee bit overboard in time investment on this PFMEA.
Okay, more than a wee bit.
Yes, as with anything – one can spend many, many hours with PFMEA and get to the point where the Return on Investment is dwarfed by the time investment – as seems to be the case here.
I don’t think I have seen any sort of rule on how much time should be spent, and I’m sure that this – as in most things – falls into the category of “it depends”. However, in my experience, there is diminishing return in spending more than 16 hours on a fairly complex FMEA, with most situations requiring less, perhaps 6-8 hours… so, with 5 people on the team as in your case, between 30 and 80 man-hours. 100 man-hours seems a bit excessive, and is very excessive if the cost considerably outweighed the benefit.
For our Software FMEA’s, we have actually developed a way to estimate the cost/benefit relationship in catching software defects earlier in the process. I’m not sure if you could do the same thing for your situations.
Anyway – the major comment I want to make is that, for your process, this is now a sunk cost… the cost of the 100 hours has already been paid. Now, the question is – can you benefit from it?
Best regards,
Eric Maass, PhD
Motorola, Inc

December 26, 2008 at 10:56 pm #179104
Eric Maass, Participant

Hi Andy,
Merry Christmas / Happy Holidays to you!!
Best wishes,
Eric Maass, PhD

December 26, 2008 at 7:50 am #179084
Eric Maass, Participant

Hi, NoIdea – and Happy Holidays to you!
Actually, while the information you provided is interesting, it is not the information needed to develop a sampling plan. Here are some more relevant questions:
1) Is the inspection destructive in any way? That is, do you damage the tomatoes in the process of inspecting them?
2) What are the practical limitations on inspection? For example, how long does it take to inspect each tomato? I presume that it is impractical to inspect 100% of the tomatoes from 100% of the boxes…but is it?
3) Are you inspecting for defects (good/bad, blemish or no blemish) or measuring something on a continuous scale during inspection (density, color, weight…)?
4) Assuming you are inspecting for defects – what is the historical defect rate, or the standard defect rate that you want to compare your inspection results to?
5) What is the maximum defect rate that you would find acceptable?
6) Are you planning on passing on zero defects, failing on one defect, or some other criteria?
There are tables on sampling plans that can be of help, and there are tools such as the sample size calculator within Minitab. However, you need most of the above information before you can proceed.
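Once you have answers to questions 4) through 6), the operating characteristic of a simple accept-on-zero (or accept-on-c) plan is just a binomial sum. A Python sketch, with an invented sample size of 50 tomatoes and invented defect rates:

```python
from math import comb

def accept_prob(n, p, c=0):
    """Probability that a random sample of n items from a process with
    defect rate p contains at most c defectives (lot accepted)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Accept-on-zero plan with a sample of 50:
print(accept_prob(50, 0.01))   # ~0.605 acceptance at a 1% defect rate
print(accept_prob(50, 0.05))   # ~0.077 acceptance at a 5% defect rate
```

Plotting `accept_prob` across defect rates gives the plan’s OC curve, which is how you check that your alpha and beta risks are where you want them.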
Best regards,
Eric Maass
Motorola, Inc
http://www.motorola.com/mu

December 23, 2008 at 1:05 am #179024
Eric Maass, Participant

Maer,
Okay, I’ll reply…
Yes, you can get value from doing a PFMEA on a current process. You can anticipate what can go wrong, in both a theoretical sense and in a very practical sense based upon actual experience, and you can base your “Likelihood” values on actual, historical failure data. The risk that I’ve seen is that many people treat a PFMEA on a current process as something they do, then file away… and in that instance (which happens surprisingly often), I would consider it a waste of time.

If, on the other hand, you use the prioritized list of failure modes and causes as a “call for action”, and use and follow up on specific actions such as Poka Yoke (mistake-proofing) methods for the failure modes with high RPN numbers, then you can use PFMEA to drive real process improvement.

Now, as for the other part of your question – should other work be used to better derive the value of this activity – well, there are alternatives, but I think you should follow through on your gut feel: use PFMEA effectively, follow up, and you will make progress.
One warning, that you probably already realize: there are a lot of methods that are fun for the team – Brainstorming is fun! Developing Fishbone Diagrams is sort of fun! But, FMEA’s…are not fun. They just aren’t.
Give the team lots of breaks, perhaps break the PFMEA session in a series of 2 hour slots…bring in snacks…and go out and celebrate when it’s all over and they’ve done a good job.
Best regards,
Eric

December 23, 2008 at 12:56 am #179023
Eric Maass, Participant

Robert,
Thank you for your kind remarks!
I have replied to your email address (although I had to resend…I think you may have had an extra “i” in the address).
Best wishes for a Happy Holiday season, and a very Happy New Year!
Best regards,
Eric Maass

December 20, 2008 at 1:16 pm #178932
Eric Maass, Participant

Vee,
My advice would be to align your interest in Six Sigma with your current responsibilities, in the short term – this could open up all sorts of possibilities. Let me explain:
As the training manager for a 500-employee organization, your goals probably support what your organization needs in order to be very successful.
The leadership team for your organization has probably set up goals for the short term and long term.
Imagine if you were to use the Six Sigma method to align your organization’s goals with the training roadmap, and as your own Six Sigma project?
Imagine if you were able to help your organization succeed in meeting or exceeding expectations for next year…and in the process, develop a clear training and development roadmap that energizes, trains, and recognizes the employees, developing a vibrant and enthusiastic workforce…and get certified based on your own successful Six Sigma deployment strategy?
I see this as an exciting opportunity for you and for your teams.
Best regards,
Eric Maass, PhD
Lead Master Black Belt, DFSS
Motorola, Inc

December 20, 2008 at 1:06 pm #178931
Eric Maass, Participant

Hello Vimala,
Your educational background is already quite strong, and should open some possible career directions for you already. What Six Sigma might bring is a disciplined approach, or set of approaches, for solving problems, improving business processes, and developing new services. If you feel comfortable that your educational background and experience will enable you to meet these sorts of challenges, then perhaps Six Sigma will not help at this time.

If, on the other hand, you feel that you would be more confident in your ability to take on these types of challenges if you had a disciplined problem-solving process and a set of methods for gathering customer inputs, making the problems measurable, and gathering and analyzing data to help you drive decisions based on data rather than “gut feel”, then perhaps Six Sigma will help you succeed and stand out in your career… and that might help with your career growth.

I commend you on thinking about your longer-term perspective so shortly after starting your career, and for asking your question in this forum.
Best regards,
Eric Maass, PhD
Master Black Belt, Motorola, Inc

December 20, 2008 at 12:59 pm #178930
Eric Maass, Participant

Hello Pramod,
Here are a couple of websites that may answer your question about the Anderson Darling normality test:
http://www.itl.nist.gov/div898/handbook/eda/section3/eda35e.htm
http://en.wikipedia.org/wiki/AndersonDarling_test
Regarding other tests for Normality – here are some websites you might find helpful, that discuss the Kolmogorov-Smirnov test and the Shapiro-Wilk test:
http://www.itl.nist.gov/div898/handbook/prc/section2/prc213.htm
http://en.wikipedia.org/wiki/KolmogorovSmirnov_test
http://en.wikipedia.org/wiki/ShapiroWilk_test

I think you will find that these three – the Anderson-Darling, Kolmogorov-Smirnov and Shapiro-Wilk tests – will likely be very helpful for what you need.
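All three tests are also available in Python’s SciPy library, if you have access to it; a minimal sketch on simulated (truly normal) data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=50, scale=5, size=100)  # simulated normal data

# Anderson-Darling: compare the statistic to the tabulated critical values
ad = stats.anderson(sample, dist='norm')

# Shapiro-Wilk: a small p-value would indicate non-normality
w, p_sw = stats.shapiro(sample)

# Kolmogorov-Smirnov against a normal fitted to the sample
p_ks = stats.kstest(sample, 'norm', args=(sample.mean(), sample.std())).pvalue
```

One caveat: fitting the normal’s parameters from the same sample (as in the KS line above) makes that test optimistic; the Lilliefors correction addresses this, which is one reason Anderson-Darling is often preferred in practice.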
Best regards,
Eric Maass, PhD
Master Black Belt
Motorola, Inc

November 30, 2008 at 2:58 pm #178174
Eric Maass, Participant

Hello Sandra,
It sounds like you are starting off a Six Sigma program, and would like to set up the organizational structure to support a successful deployment. I can give you a quick answer here, to start the thinking process, but Six Sigma deployment is more like a chapter in a book (or perhaps a book in itself) than a short message posting.

Anyway, Six Sigma deployment often involves a Sponsor (usually a member of Senior Management who is supportive of the program), a Champion (a leader/manager who is willing to invest time, perhaps on a weekly or monthly basis, to meet with the team, review progress and issues, and help knock down barriers that are beyond the Six Sigma team’s ability to handle), and one or more Master Black Belts who either solid-line or dotted-line manage or coach several Black Belts who lead projects.
Again, I can provide more details and insight if you’d like…but it would be helpful to have more information on the organization where Six Sigma is being deployed, to better align the structure with the organization, if you see what I mean.
Best regards,
Eric Maass
Master Black Belt, [email protected]

November 30, 2008 at 2:52 pm #178173
Eric Maass, Participant

Hi Navin,
Okay, I’m going to go out on a limb here… Yes, it probably would be better to use an attribute control chart, but any control chart – individuals or attribute – will show you that this data is not in statistical control.
You have an upward trend of increasing count of Activity Management…. it is obvious from the data, and will be obvious from an Ichart, a cchart, time series forecasting…any approach.
Best regards,
Eric Maass

November 30, 2008 at 2:44 pm #178172
Eric Maass, Participant

Hello Goshi,
It looks like you did a good job on coming up with 16 potential causes…but I’d suggest that, rather than jump directly to solutions, this might be a good point to gather and analyze data.
Even though the team gave each of these potential causes almost equal weighting – that is their perception, which may or may not coincide with the most important root causes for Mislabeling.
I’d suggest that you and your team gather some historical information on these potential causes, and do a Pareto Analysis of the data. If the Pareto rule of thumb holds, most (perhaps about 80%) of the mislabeling issues will be caused by a small number (perhaps about 20%) of the potential causes.
Then, after you have the data and Pareto Analysis summary, you might want to develop corrective actions to deal with these root causes.
You might want to consider using Poka Yoke (Mistake Proofing) methods to prevent Mislabeling; you can look up Poka Yoke on the web, or you can email me and I’ll try to send you some information.
Best regards,
Eric Maass, PhD
Master Black Belt, [email protected]

November 26, 2008 at 12:09 pm #178107
Eric Maass, Participant

Hi Doug,
My suggestion would be to think of Process Stability and Process Capability as two separate, but related, topics.
If the Xbar R chart shows that your process is in control, then keep it going…
Meanwhile, perhaps you should start an effort on improving the Process Capability:
1) Is the process (using individual data, not Xbars) normally distributed? You can check with a test like Anderson-Darling, or using a Normal Probability plot of the data. If it isn’t normal, you have at least 3 alternative approaches to consider – one is to use a transformation to normality (like the Box-Cox transformation).

2) If it is normally distributed (or is normal after a transformation) – how does the Cp look? If the Cp is also low, let’s say less than 1, then you have a variability problem to focus on first. If your Cp is reasonably high, say 1.5 or so, then you might want to see if you can find a way to recenter the distribution so the mean is in the middle of the spec window.

You say the process runs best at around 1.2-1.5, and yet the middle of the spec window is 2.0… so, if the problem really is where your process is centered, and if you would get a very good Cpk by recentering, then I’d suggest you spend some time with the team thinking about why the process runs best between 1.2 and 1.5…
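To put the Cp-versus-Cpk distinction into numbers – here’s a Python sketch with invented data centered near 1.35 against hypothetical specs of 1.0 to 3.0 (so a spec midpoint of 2.0, like your case):

```python
from statistics import mean, stdev

def cp(data, lsl, usl):
    """Potential capability: spec width over 6 sigma, ignores centering."""
    return (usl - lsl) / (6 * stdev(data))

def cpk(data, lsl, usl):
    """Actual capability: distance to the nearer spec limit over 3 sigma."""
    m, s = mean(data), stdev(data)
    return min(usl - m, m - lsl) / (3 * s)

# Hypothetical process running near 1.35, well below the 2.0 spec midpoint
data = [1.3, 1.4, 1.35, 1.45, 1.25, 1.4, 1.3, 1.35]
print(cp(data, 1.0, 3.0))    # high: spread is small relative to spec width
print(cpk(data, 1.0, 3.0))   # much lower: the off-center penalty
```

A large gap between Cp and Cpk, as here, is exactly the signal that recentering (rather than variance reduction) is the bigger opportunity.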
Best regards,
Eric Maass
Motorola

November 26, 2008 at 12:01 pm #178106
Eric Maass, Participant

Andy,
Thank you very much! Looking forward to a few days off!
Happy…(um…let’s see…no Thanksgiving across the pond, in Merry Olde England…ah, I have it!)
Happy Holidays to you!
Best regards,
Eric Maass

May 17, 2008 at 4:21 pm #172056
Eric Maass, Participant

Kumar,
To your coding question – yes, the high level is represented by +1, the low level by -1, and the middle level by 0.
Your proposed design is a 3^2 design. If you wanted to, you could change this to a Central Composite design with 5 levels for each factor.
Here is the 3^2 design:

x1  x2
--  --
-1  -1
-1   0
-1  +1
 0  -1
 0   0
 0  +1
+1  -1
+1   0
+1  +1
Here are 9 runs as a central composite design, setting the star points at +/-1.5 to provide 5 levels for each of the two factors:

x1    x2
--    --
-1    -1
-1    +1
+1    -1
+1    +1
 0     0
-1.5   0
+1.5   0
 0    -1.5
 0    +1.5

(although I’d recommend adding two more centerpoints, for 11 runs)
With this design, you could explore the design space a bit more extensively and develop a good polynomial equation that you could use for optimization.
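For what it’s worth, generating those coded runs programmatically is straightforward; a Python sketch of a two-factor central composite design with star points at +/-1.5 and three centerpoints (matching the 11-run suggestion above):

```python
from itertools import product

def central_composite(k=2, alpha=1.5, center_points=3):
    """Central composite design for k factors, in coded units:
    2^k corner points, 2k star points at +/-alpha, plus centerpoints."""
    corners = list(product([-1, +1], repeat=k))
    stars = []
    for i in range(k):
        for a in (-alpha, +alpha):
            pt = [0] * k
            pt[i] = a
            stars.append(tuple(pt))
    centers = [(0,) * k] * center_points
    return corners + stars + centers

runs = central_composite(k=2, alpha=1.5, center_points=3)
# 4 corners + 4 star points + 3 centerpoints = 11 runs
```

You would still randomize the run order before executing the experiment; the listing above is standard order.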
Best regards,
Eric

May 16, 2008 at 12:46 pm #172028
Eric Maass, Participant

Hi Rick^L,
Yes, I’m Eric C. Maass – I’m glad you enjoyed my website at Research Triangle in Geocities!

Best regards,
Eric

May 14, 2008 at 6:56 am #171956
Eric Maass, Participant

Dear Nicholas,
That is not at all consistent with my recollections.
Best regards,
Eric Maass
Master Black Belt, Motorola

May 12, 2008 at 4:43 am #171895
Eric Maass, Participant

Hello Leanne,
I’m sorry, but I don’t have such a template – an operational definition is more like… a definition… and that doesn’t lend itself to a template. Basically, an operational definition is one of several possible ways to define something – in this case, defining it by telling how you would measure it. For example, if you want to define intelligence, you could specify a certain IQ test (one of the more controversial examples of an operational definition).
Because the result is a measurable definition, it is very closely aligned to the Measure phase of some Six Sigma processes, like DMAIC and DMADV.
I can give you some examples of operational definitions…or, if you’d like, I and some of the others here might try to help you come up with an operational definition if you describe the problem.
Or…maybe someone else has such a template or job aid!
Anyway, I hope you had a pleasant weekend!
Best regards,
Eric

May 12, 2008 at 4:38 am #171894
Eric Maass, Participant

Hello JBE,
Yes, Ordinal Logistic Regression sounds like it might be appropriate for your data. Minitab 15 has this approach, and so do some other key statistical software packages.
Please feel free to call on me if you need any help.
Best regards,
Eric Maass, [email protected]

May 10, 2008 at 2:27 pm #171881
Eric Maass, Participant

Hi Lara,
Well, I’m a bit uncertain as to what you mean by an experiment – is this just comparing the means for two levels of some variable, or are you changing several factors (in what we call a factorial design)?
Assuming that, as a second year psychology student, you are not doing a factorial design, then let me give a quick summary before I try to explain what happens when you change the sample size from 24 to 120:
The tests for comparing two means are generally the t-test or ANOVA. Both of these assume independent samples and individual values that seem to be approximately normally distributed. The conclusion you make based on the statistical test will be either that the means are the same, or that the means are different. So, you have two ways your conclusion could be wrong: you could incorrectly conclude that the means are the same, or you could incorrectly conclude that the means are different.
So, with that background, here is what happens when you change the sample size from 24 to 120 (that is, 5 x larger sample size):
* The assumption of normality becomes less important. This is because of a principle known as the Central Limit Theorem, which basically means that sample means behave more like normally distributed values (even if the individual values are not normally distributed) as your sample size goes up. So, if your data is very nonnormal, it will affect your conclusion with a sample size of 24, but will have much less of an effect on your conclusion with a sample size of 120. So, you can have more confidence that your conclusion is correct with a sample size of 120 if the individual data is nonnormal.
* The risk of incorrectly concluding that the means are different is referred to as the alpha risk. Many people set the alpha risk at 5% when they make statistical conclusions; that level is chosen by the analyst rather than reduced by sample size, but with a sample of only 24, chance variation can still lead you to a wrong conclusion at that level.
On the other hand, with a sample size in the thousands, you can conclude that almost any two means are significantly different at the 5% significance level, even when the difference is trivially small… which brings me to…
* The risk of incorrectly concluding that the means are the same is referred to as the Beta risk, and it includes two aspects – the risk of making the wrong conclusion, and the size of the difference you are trying to detect. As the sample size increases from 24 to 120, you can either reduce your risk of making the wrong conclusion…or reduce the size of the difference that your statistical test can say is really different.
So….with a sample size of 120, you can detect a small difference and conclude, yes, the means are different….even if the difference is so small that it doesn’t really matter. So, you also might bring in a “practicality” aspect to your conclusions with a sample size of 120.
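The beta-risk trade-off can be put into numbers with the usual normal-approximation sample size formula; a Python sketch assuming alpha = 5% two-sided, power = 80%, and a made-up sigma of 10:

```python
import math

def min_detectable_diff(sigma, n, z_alpha=1.96, z_beta=0.84):
    """Approximate smallest mean difference detectable between two groups
    of size n each (alpha = 5% two-sided, power = 80%), using the
    normal approximation: diff = (z_alpha + z_beta) * sigma * sqrt(2/n)."""
    return (z_alpha + z_beta) * sigma * math.sqrt(2 / n)

sigma = 10.0
print(min_detectable_diff(sigma, 24))    # detectable difference at n = 24
print(min_detectable_diff(sigma, 120))   # noticeably smaller at n = 120
```

With these assumed numbers, going from 24 to 120 per group shrinks the detectable difference by a factor of sqrt(5), about 2.2 – which is exactly the "you can now detect differences too small to matter" point above.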
I tried to keep the amount of statistical jargon down in this reply, but I’m not sure I succeeded…so, my apologies if I made this too complicated to follow.
Best of luck in analyzing your data!
Best regards,
Eric Maass

May 10, 2008 at 2:05 pm #171880
Eric Maass, Participant

Hi G,
It would be a bit easier to follow if you had these runs in standard order rather than run order (where the order is randomized), but this looks like a full factorial with 2x3x4 = 24 runs.
I am pretty sure that I could quickly show you how to do this in 12 runs rather than 24 runs…but you would have to analyze it as a custom design. I think I know how to do this in the Minitab environment, but it’s sort of tricky, so I’ll only tell you if you are really interested.
(Okay, scroll down if you are interested…at the risk of perhaps getting confused)
(Basically, it involves designing a fake version of the experiment – with 3 factors at two levels, half fraction = 2^(3-1) = 4 combinations, 3 replicates = 4×3 = 12 runs…then the three replicates become factor C, one of the factors becomes factor B with 2 levels, and Factor A becomes the combination of the other two factors, where -1 -1 becomes A=1, -1 +1 becomes A=2, +1 -1 becomes A=3, and +1 +1 becomes A=4. Then, you bring it back into Minitab for analysis by choosing a custom design.)
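The recoding above can be sketched in a few lines. This is my own illustration of the bookkeeping (the defining relation x3 = x1·x2 for the half fraction is my assumption, and the dictionary keys follow the mapping given in the post):

```python
# Hypothetical sketch of the 12-run recoding described above:
# a 2^(3-1) half fraction (4 runs) x 3 replicates = 12 runs, where
# the replicate index becomes the 3-level factor C, one +/-1 column
# becomes the 2-level factor B, and the other two +/-1 columns are
# recoded into the 4-level factor A.
from itertools import product

# half fraction with defining relation x3 = x1 * x2 (my assumption)
base = [(x1, x2, x1 * x2) for x1, x2 in product((-1, 1), repeat=2)]

# (x1, x2) pair -> 4-level factor A, per the mapping in the post
level_a = {(-1, -1): 1, (-1, 1): 2, (1, -1): 3, (1, 1): 4}

runs = []
for rep in (1, 2, 3):                    # replicates become factor C
    for x1, x2, x3 in base:
        runs.append({"A": level_a[(x1, x2)],
                     "B": 1 if x3 == -1 else 2,  # 2-level factor B
                     "C": rep})

print(len(runs))  # 12 runs instead of the full 2x3x4 = 24
```

As Eric notes, the resulting table has to be analyzed as a custom design, since the factors are no longer a full factorial grid.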
Best regards,
Eric Maass

May 10, 2008 at 1:53 pm #171879
Eric Maass, Participant

Hello G,
Sorry, I can’t find that problem in the RSM book by Montgomery I have, but let me try to help anyway, while you await more ideas from another responder:
For two factors in a Central Composite Design, the minimum number of runs – without replications of corner points or centerpoints – would be 2^2 (corner points) + 2×2 (star points) + 1 (centerpoint) = 9. So, I would guess that the 12 runs in the example come from adding 3 more centerpoints, for a total of 4 centerpoints: 2^2 (corner points) + 2×2 (star points) + 4 (centerpoint, replicated) = 12.
Minitab’s algorithm seems to recommend the number of centerpoints based on a goal of Uniform Precision; however, I generally use 3 centerpoints because I want to use fewer runs, and 3 replicates are generally sufficient to provide an estimate of pure error for the purposes of having good t-test and F-test values for determining which factors have a statistically significant effect on the response.
I generally use only 1 replicate per corner point, to keep the size of the experiment reasonable. Replication of center points adds just one run at a time, whereas replication of corner points adds 2^(n-k) runs at a time.
For a central composite design with two factors, the recommended value of alpha (the coded value for the star points) would be the fourth root of 2^2, which is 1.414. The exact value of alpha is not critical – it is desirable for “rotatability”, but variation from the ideal value for rotatability does not cause major problems in my experience – so I would suggest using a value of 1.5 for alpha to make the values for the star points have just a couple of significant digits.
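The run-count and alpha arithmetic above can be checked with a few lines (a sketch of my own; the function names are mine):

```python
# Sketch of the central composite design (CCD) arithmetic for k
# factors, single replicate of the corner points.
def ccd_runs(k, center_points=1):
    corner = 2 ** k          # factorial (corner) points
    star = 2 * k             # axial (star) points
    return corner + star + center_points

def rotatable_alpha(k):
    # alpha for rotatability is the fourth root of the number of
    # corner points, i.e. (2^k)^(1/4); for k=2 this is sqrt(2)
    return (2 ** k) ** 0.25

print(ccd_runs(2, center_points=1))   # minimum design: 9 runs
print(ccd_runs(2, center_points=4))   # with 4 centerpoints: 12 runs
print(round(rotatable_alpha(2), 3))   # 1.414
```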
My personal recommendation would be to explore the area where the optimum seems to lie by having the centerpoint in the center of where the optimum likely lies, having the corner points touch the edges of where the optimum seems to lie, and having the star points explore just a little further. However, if you are very comfortable that you know where the optimum lies, you might change this and have the star points touch the edges of where the optimum likely exists, which will give you more information (and thus probably more confidence) in locating the optimum point or points within the space explored.
Best regards,
Eric

February 26, 2008 at 7:48 am #169003
Eric Maass, Participant

JeroenM,
What you are describing is similar to MSA for destructive testing.
Minitab suggests an approach using Gage R&R Study (Nested)…I’ll paste the relevant sections from their Help manual:
==============================
Gage R&R Study (Nested)

Gage repeatability and reproducibility studies determine how much of your observed process variation is due to measurement system variation. Use Gage R&R Study (Nested) when each part is measured by only one operator, such as in destructive testing. In destructive testing, the measured characteristic is different after the measurement process than it was at the beginning. Crash testing is an example of destructive testing.

Gage R&R Study (Nested) uses the ANOVA method for assessing repeatability and reproducibility. The ANOVA method goes one step further and breaks down reproducibility into its operator and operator-by-part components.

If you need to use destructive testing, you must be able to assume that all parts within a single batch are identical enough to claim that they are the same part. If you are unable to make that assumption, then part-to-part variation within a batch will mask the measurement system variation.

If you can make that assumption, then choosing between a crossed or nested Gage R&R study for destructive testing depends on how your measurement process is set up. If all operators measure parts from each batch, then use Gage R&R Study (Crossed). If each batch is only measured by a single operator, then you must use Gage R&R Study (Nested). In fact, whenever operators measure unique parts, you have a nested design.

February 26, 2008 at 7:32 am #169002
Eric Maass, Participant

Hi Jodee,
Yes, I think your game plan should work!
Best regards,
Eric

February 24, 2008 at 4:50 pm #168937
Eric Maass, Participant

Hi BJKeith,
I have an idea / suggestion for your consideration:
Can you ask the project lead to obtain three estimates (most likely, practical worst case, practical best case) for each of the steps rather than just one estimate? Then, you could build this into a Monte Carlo Simulation model for the total cycle time, and compare that to the distribution of the actual total cycle time.
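The Monte Carlo idea above can be sketched very simply. This is my own minimal illustration (the step names, estimates, and triangular-distribution choice are made up for the example; the three estimates per step map naturally onto a triangular distribution's low, mode, and high):

```python
# Minimal Monte Carlo sketch: three time estimates per process step
# (best case, most likely, worst case) modeled with a triangular
# distribution; simulate many trials of the total cycle time.
import random

random.seed(42)  # reproducible for the example

# hypothetical (best, most_likely, worst) estimates per step, in days
steps = [(1, 2, 5), (2, 3, 7), (0.5, 1, 2)]

def simulate_total_cycle_time(steps, n_trials=10_000):
    totals = []
    for _ in range(n_trials):
        # random.triangular takes (low, high, mode)
        total = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in steps)
        totals.append(total)
    return totals

totals = sorted(simulate_total_cycle_time(steps))
print(round(totals[len(totals) // 2], 2))         # median total time
print(round(totals[int(0.95 * len(totals))], 2))  # ~95th percentile
```

The simulated distribution of totals can then be compared against the distribution of actual total cycle times, as suggested above.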
Best regards,
Eric

February 24, 2008 at 4:46 pm #168936
Eric Maass, Participant

Hello ATA,
I think you would want to compare the means and the standard deviations.
First you would check the assumptions: if these are weights of different samples taken at different times, independence is probably a reasonable assumption. However, you would want to check that each set of data (earlier process and improved process) can be approximated by a normal distribution.
If so, you could do a two-sample t-test (or ANOVA, if you prefer) to compare the means, and an F-ratio test to compare the standard deviations…but, I can already tell you that the F-ratio test will show that the standard deviations are not significantly different.
If either of the two distributions deviates too far from a normal distribution, then you would want to compare the medians rather than the means using a nonparametric test – the Mann-Whitney test.
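For the normal-distribution case, the two comparisons above can be computed by hand with just the standard library. This is a sketch with made-up data (in practice Minitab or a statistics package does this in one command; the function names are mine):

```python
# Sketch: pooled two-sample t statistic for comparing means, and the
# F-ratio of sample variances for comparing spreads. Data are
# hypothetical weights from an earlier and an improved process.
from statistics import mean, variance
from math import sqrt

before = [50.1, 49.8, 51.2, 50.5, 49.9, 50.7, 50.3, 49.6]  # earlier
after  = [48.9, 49.2, 48.5, 49.4, 48.8, 49.1, 48.6, 49.0]  # improved

def pooled_t(x, y):
    """Two-sample t statistic with pooled variance (equal-variance
    assumption); compare against the t distribution with
    len(x) + len(y) - 2 degrees of freedom."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / nx + 1 / ny))

def f_ratio(x, y):
    """F-ratio of sample variances, larger over smaller (>= 1)."""
    v1, v2 = variance(x), variance(y)
    return max(v1, v2) / min(v1, v2)

print(round(pooled_t(before, after), 2))
print(round(f_ratio(before, after), 2))
```

A large |t| relative to the t distribution's critical value indicates the means differ; an F-ratio near 1 (relative to the F distribution's critical value for the two sample sizes) indicates the standard deviations do not differ significantly.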
Best regards,
Eric

February 24, 2008 at 4:40 pm #168934
Eric Maass, Participant

Dan,
The Minitab Help system explains how Capability Analysis is handled for nonnormal distributions in two parts, which I’ve combined below.
Best regards,
Eric
===============
Capability Analysis (Nonnormal Distribution)

Use to produce a process capability report when your data do not follow a normal distribution. The report includes a capability histogram overlaid with a distribution curve, and a table of overall capability statistics. The report also includes statistics of the process data, such as the mean, distribution parameter estimates, target (if you enter one), and process specifications; the actual overall capability (Pp, Ppk, PPU, and PPL); and the observed and expected overall performance. Minitab bases the calculations on maximum likelihood estimates of the distribution parameters, rather than mean and variance estimates as in the normal case. You can use the report to visually assess the distribution of the process relative to the target, whether the data follow a specified distribution, and whether the process is capable of meeting the specifications consistently.
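As a rough illustration of the fit-then-use-percentiles idea, here is a sketch for a lognormal example. The data, spec limit, and exact formula are my illustrative assumptions, not Minitab's implementation (for the lognormal, the maximum likelihood fit reduces to the mean and standard deviation of the logged data):

```python
# Hedged sketch: nonnormal capability via fitted-distribution
# percentiles. Fit a lognormal by MLE, then scale the distance from
# the median to the upper spec by the distance from the median to
# the 99.865th percentile (the upper "3-sigma-equivalent" tail).
from math import log, exp
from statistics import NormalDist, fmean, stdev
import random

random.seed(1)
data = [random.lognormvariate(0.0, 0.25) for _ in range(500)]
usl = 2.5  # hypothetical upper spec limit

# lognormal fit: location and scale of log(data)
logs = [log(x) for x in data]
mu, sigma = fmean(logs), stdev(logs)

z = NormalDist().inv_cdf(0.99865)   # ~3, the normal 3-sigma point
p50 = exp(mu)                        # fitted median
p99865 = exp(mu + z * sigma)         # fitted 99.865th percentile

ppu = (usl - p50) / (p99865 - p50)   # PPU-style upper capability
print(round(ppu, 2))
```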
==================================
Nonnormal distribution:
Minitab calculates the capability statistics using 0.13th, 50th, and 99.87th percentiles for the distribution used in the analysis. The distance between the 99.87th and 0.13th percentiles is equivalent to the