iSixSigma

newbie

Forum Replies Created

Viewing 100 posts - 1 through 100 (of 174 total)
  • #187276

    newbie
    Participant

    You need to talk to your Six Sigma instructor about your certification. Usually you have to complete a project and pass an exam, but it is your instructor's responsibility to explain what you need in order to certify… it depends on the program.

    0
    #187275

    newbie
    Participant

    Hi Gabriel,
    I don't know where you can get design of experiments material in Spanish online or here, but there are many DOE books in Spanish on Amazon.com, for example:
    Inferencia Estadistica y Diseno de Experimentos (Spanish Edition) by Roberto Mariano Garcia (Paperback – May 2004)

    5 new from $56.62
    2 used from $125.74
     
    Good luck and merry Christmas from the USA!

    0
    #187274

    newbie
    Participant

    An A3 approach works well.  Keep it largely graphical/pictorial and customize it to your audience.  Get as much feedback as possible from the critical stakeholders / sponsor during its creation (and incorporate it where you can, obviously).  Good luck.

    0
    #187016

    newbie
    Participant

    irrelevant

    0
    #187009

    newbie
    Participant

    You want to influence behavior, which can be framed using the following five principles of influence:

    Principle – Do the right thing based on regulation, data, policy, or ethics
    Expert – Those who know the work best should design it
    Referent – People like you… you're a swell human being
    Coercive – You can give people things and take them away
    Legitimate – Your authority is formally recognized
    Your ability to influence will be maximized when you can incorporate as many of the five principles as possible for a given situation, and the principles at the top are more powerful than those at the bottom.  For example:
    Working to effect change because it is the "right thing to do", getting the SMEs to design the change, being likable in the process, giving people something they want or taking away something they don't want, and having the action visibly supported by the level of management closest to the worker (not upper management) puts you in a decent position to effect change.

    0
    #187007

    newbie
    Participant

    Process capability and process stability are two different questions.  Stable processes are those that contain only common-cause variation.  This is evidenced by a control chart in which all data points are randomly dispersed around the mean and within the control limits.  If you are using continuous data, you should be pairing a variability chart (ie an R or S chart) with your Xbar chart.  You want to confirm the variability within your subgroups (the R or S chart) is in control before you can reliably assess the variation between your subgroups (the Xbar chart).  In terms of using a non-standard subgroup size, check into how that might affect your chart limits.  Good luck.
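    For what it's worth, a minimal sketch of how the Xbar-R limits fall out of the subgroup data (Python/NumPy rather than Minitab; the A2/D3/D4 values are the standard table constants):

    import numpy as np

    # Standard control chart constants, keyed by subgroup size n: (A2, D3, D4)
    CONSTANTS = {2: (1.880, 0.0, 3.267), 3: (1.023, 0.0, 2.574),
                 4: (0.729, 0.0, 2.282), 5: (0.577, 0.0, 2.114)}

    def xbar_r_limits(subgroups):
        """subgroups: 2-D array, one row per equal-sized subgroup."""
        data = np.asarray(subgroups, dtype=float)
        a2, d3, d4 = CONSTANTS[data.shape[1]]
        xbar = data.mean(axis=1)                      # subgroup means
        r = data.max(axis=1) - data.min(axis=1)       # subgroup ranges
        xbarbar, rbar = xbar.mean(), r.mean()
        return {"Xbar": (xbarbar - a2 * rbar, xbarbar, xbarbar + a2 * rbar),
                "R":    (d3 * rbar, rbar, d4 * rbar)}

    Check the R limits first; the Xbar limits are built from R-bar, so they aren't trustworthy until the within-subgroup variation is in control.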

    0
    #186689

    newbie
    Participant

    Wow.

    0
    #186492

    newbie
    Participant

    Thanks Doc!!

    0
    #186488

    newbie
    Participant

    Hey Doc,
    So what do you recommend in terms of baselining the capability of a given process?  I have followed the threads as to the weakness of sigma, but don't recall the solution (if one was ever provided)…. What are your recommendations for a metric?  Thanks!!

    0
    #186480

    newbie
    Participant

    That was excellent!  But could you explain the comment: "Assuming normally distributed performance in statistical control, you might as well be randomly handing out $100 bills"?
    Thanks!

    0
    #186479

    newbie
    Participant

    Go back to the charter and clarify what the project effort is about and what the primary metric is.  This will tell you what information is needed and thus the tool suited for that purpose.  If you are looking at reducing time or cost, then a current-state VSM would be a good choice.  If you want to highlight decision points and the people or departments that make them, a swim-lane or activity-based pmap would do the trick, etc.
    Again, I would get back to basics and look in the charter and focus on the primary metric and the resulting forecasted business case.  Good luck.
     

    0
    #186464

    newbie
    Participant

    It is a good question.  I would say you can report process capability in any of the aforementioned forms, and there are numerous tables out there that practitioners use to do just that.  Just be consistent in your methods.
    I think the advantage to a sigma/Z score (if there is one) is that it is used with continuous data, where you aren't counting defects at all; you simply compare the allowed variability (ie the tolerance) to the actual variability (the std dev) in your process to determine capability.  Hence, you simply calculate the mean and std dev and predict future capability from there.  And I would think you would need a much smaller sample size to get the same level of precision than if you used an attribute approach.
    But it will be interesting to hear the opinion of those more experienced.
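    To make the mean/std-dev route concrete, a minimal sketch (Python/SciPy, assuming roughly normal, stable data; the spec limits and data are whatever you plug in):

    import numpy as np
    from scipy.stats import norm

    def z_capability(data, lsl, usl):
        mu, sigma = np.mean(data), np.std(data, ddof=1)
        z_usl = (usl - mu) / sigma                 # distance to the upper spec in std devs
        z_lsl = (mu - lsl) / sigma                 # distance to the lower spec in std devs
        p_out = norm.sf(z_usl) + norm.sf(z_lsl)    # predicted fraction out of spec
        return min(z_usl, z_lsl), p_out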

    0
    #186385

    newbie
    Participant

    You will usually have multiple customers who consume one or more of the process outputs.  One approach: 
    Use your SIPOC/COPIS with “$100 Dollar Exercise” and the Leadership Team to prioritize either:  outputs or customers.  Now pick the one that makes the most sense from a business perspective and work from there.  Good luck.

    0
    #186369

    newbie
    Participant

    It is not a measure that falls into a "pass/fail" classification, but it is useful because it is relative to the process mean and works well as a comparison measure.  Although I have never found any hard information on this, I have heard a rule of thumb that has worked well for me:  a COV above 10% points toward potential issues with process variability.  Again, just a rule of thumb.

    0
    #186344

    newbie
    Participant

    Hey Doc,
    I assume you transformed it and/or ran the ANOVA / Mood's Median for the hell of it…. did anything interesting result?
    With 93% of the data being zero, I assume your data is counting occurrences of some event in a given time period/sample area (ie 93% of the time, there is no occurrence of a given event).
    If so, would it be practical/useful to translate your Poisson data into a continuous format, such as an MTBF or 'Mean Time Between Events' in this case?
    Just a thought…. good luck.

    0
    #186339

    newbie
    Participant

    Hoshin Planning, Balanced Scorecard, Annual Reviews, etc…. Determining through data where the organization is underperforming is where you want to be in terms of project identification and selection….. good luck.

    0
    #186250

    newbie
    Participant

    I believe it is referring to the null hypothesis, which would state that there is no statistical difference (ie the difference is 0.0) between the two proportions, while the alternate hypothesis states there is a statistical difference between the two proportions (ie Alternative: not equal).
    Zero refers to no difference.  Not equal refers to a true difference.
    Good luck.
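    As a hypothetical illustration of that null (Python/statsmodels; the counts are made up):

    from statsmodels.stats.proportion import proportions_ztest

    defectives = [18, 31]        # hypothetical defective counts from the two groups
    samples = [400, 450]         # hypothetical sample sizes
    stat, p = proportions_ztest(count=defectives, nobs=samples,
                                alternative='two-sided')   # H0: difference = 0.0
    print(stat, p)               # small p -> reject H0 and conclude a true difference exists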

    0
    #186185

    newbie
    Participant

    A couple of ideas.  First, running an ANOVA on non-normal data isn't necessarily an incorrect approach.  It is largely robust to this assumption, and depending on the number of distinct categories you find in your data set, an argument could be made for it being a valid approach.  If not, you could always investigate transforming the data (ie Box-Cox) and then running the analysis, or simply use a non-parametric test like Mood's Median Test for statistical difference.  If prediction is of interest, you could fit a regression model, which would give you both the statistical significance of the 3 predictors and the magnitude and direction of their effects on the response.
    What did the statistician tell you as to why he chose that tool?  Why are you, an LSS person, overseeing the stats guy?  Isn't he qualified to act as your statistical SME?
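    A minimal sketch of those three options side by side (Python/SciPy with made-up skewed data, not the statistician's actual analysis):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    groups = [rng.gamma(2.0, 2.0, 30) for _ in range(3)]     # hypothetical skewed samples

    f_stat, p_anova = stats.f_oneway(*groups)                # one-way ANOVA, fairly robust to non-normality
    _, p_mood, _, _ = stats.median_test(*groups)             # non-parametric Mood's median test
    transformed, lam = stats.boxcox(np.concatenate(groups))  # Box-Cox transform (data must be > 0)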
     

    0
    #186171

    newbie
    Participant

    Thanks everybody!  Always appreciated.

    0
    #186152

    newbie
    Participant

    So:
    Let's say I have two product variants in a given value stream – A & B – a takt time of 2 minutes, and a production monument – acting as a bottleneck – that has an effective machine cycle time of 4 minutes.  By adding a supermarket pull system for A and B, I have done the following:

    Enabled the value stream to produce to Takt
    Reduced the exit rate from 4 min to 2 min
    Increased WIP by 2x at the bottleneck
    Potentially altered Lead Time (increased WIP but reduced exit rate)
    Potentially altered process velocity (V = # steps / LT)
    Am I tracking? Comments?

    0
    #186116

    newbie
    Participant

    Check the Blue Bar to your left under Tools and Templates…or google either term….Wikipedia is always a decent place to begin for a general introduction to the idea or model you think might be helpful.

    0
    #186115

    newbie
    Participant

    If using MTB, it is a relatively simple matter of determining the distribution type (> Individual Distribution Identification… I think) and then calculating the Cp metric.  Or you could transform it using a Box-Cox or Johnson transformation, again with a Cp follow-on.  Or you could use DPMO or DPU, which doesn't require normality to be effective.  Good luck.

    0
    #186113

    newbie
    Participant

    I am just spitballing here, but it appears as if you would like to forecast future throughput for a given facility.  If that is the case, then you would need to use a predictive model of some kind.  You could explore the use of regression or Monte Carlo simulation, among others.  Good luck.

    0
    #186087

    newbie
    Participant

    Ok… that is making more sense.  So you have lead time being a function of inventory, and process velocity being a function of lead time, with velocity measured as 'moves per unit of time'.  So reducing inventory reduces lead time and increases process velocity.
    Then how is the added inventory of a supermarket pull system explained?  As a compromise (as contributed by the other poster – thanks!) in achieving a reduced customer response time / customer delivery window?  Thanks!
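    A worked toy example of those relationships (all numbers hypothetical):

    wip = 20                    # units sitting in the value stream
    exit_rate = 0.5             # units completed per minute (one every 2 minutes)
    lead_time = wip / exit_rate              # Little's Law: 20 / 0.5 = 40 minutes
    value_add_steps = 8
    velocity = value_add_steps / lead_time   # 8 / 40 = 0.2 steps per minute
    # Halving WIP to 10 at the same exit rate halves lead time (20 min) and doubles velocity.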

    0
    #186077

    newbie
    Participant

    I was thinking about the idea of pull systems and of controlling inventory levels using supermarkets.  In this case, aren't you building additional inventories in the form of the supermarkets, with the result being that you are able to reduce the lead time it takes to fill a given order?
    And if this is the case, then isn't that contrary to Little's Law?  I know I gotta be wrong on this, but I just can't sort it out… thanks!

    0
    #62489

    newbie
    Participant

    I doubt this question can be answered with any degree of accuracy without specific work study on a given OR and its operations.  But if what you are looking for is some form of IE work study numbers, I imagine you could google that under time and motion studies + OR, etc. 
    Again, I would caution you in using such generic data.

    0
    #185953

    newbie
    Participant

    Running an analysis depends entirely on first determining what type of data you are working with.  Your data set is naturally bounded by zero, which is why it is skewed, which is what you would expect.  If 90% of your data is zero, then it is pretty clear that 90% of something is working or not working, depending on whether your 0s are 'good' or 'bad'.  Graphically, you could pull some stuff together to look at means, trends, etc.
    If using MTB, use the HELP tab to determine what is needed to run which test.  In addition, you can check the BLUEBAR to your left as well as the dictionary above to determine data types.
    If you really want to run a hypothesis test to determine statistical difference, you can do it many ways.  You can transform/not transform the data and run an ANOVA, etc…. You can use a non-parametric like Mann-Whitney, translate the data into categorical form and run a Chi-Sq Test for Independence, etc.
    But if 90% of your data contains a single value, I would consider the big picture – focus on either the 10% or the 90%, depending on what '0' means to your process.
    Best of luck.   Verify this advice…there is always someone with a better idea.
     

    0
    #185928

    newbie
    Participant

    Use a lean approach to capture time elements.  Using a VSM coupled with a work study breakdown or value-add assessment would allow you to break each process down into its individual work elements and value categories, and then assign them time elements.  You could also use a general work study to good effect.  SMED techniques could be applied here as well, and I would suggest you approach this as part of a larger assignment – ie not just "where is the time being spent" but more along the lines of "Reducing Set-up Time" or "Reducing Cycle Time" or "Improving Capacity", etc.  You might as well make a good CI effort out of it from the start.  Good luck.

    0
    #185927

    newbie
    Participant

    You might want to consider formal operational definitions coupled with a MSA of some kind, especially if you are not the only one assigning scores. 
    It might also be useful to take HornJM’s advice a step further and break out individual category scores in your worksheet, which might add to your analysis, as once you build the worksheet, the ‘extra’ analysis costs you very little.
    And don't forget to check the underlying assumptions of your chosen analytics.  For example, if you take HornJM's advice and run ANOVAs, do your data sets approximate normality (not so important), act independently (important), and display equal variance (important)?
    Just be sure your ‘continuous’ data behaves according to the analytics of choice.
    Just my 2 cents

    0
    #185860

    newbie
    Participant

    You can use it wherever you need to assess and quantify the risks of something failing.  It is often useful to combine the FMEA with another tool, such as a process map, that can then serve as the reference point for using the tool.  But again, use it when you need to quantify risk as it pertains to a process, product, service, design, etc.  Good luck.

    0
    #185834

    newbie
    Participant

    Robert stole my answer again…especially the part about the max condition index thingy….I was gonna say that…..

    0
    #185833

    newbie
    Participant

    You pick alpha, and beta is calculated as a result of your other testing parameters (minimum difference, variability, alpha, test type, etc).
    Knowing beta is only relevant to me as it pertains to power (ie Power = 1 – beta).  Knowing the power of your test is critical, especially if you fail to reject the null (ie state there is no difference).
    If you report out that your HT shows no real difference exists, with an 85% probability of detecting one if it did exist (ie Power = .85), that is a whole different idea than reporting out that the HT shows no real difference exists (ie fail to reject the null) with only a 30% chance (ie Power = .3) of detecting it if it did.
    I only care about beta as it pertains to the calculation of the power of my test (just me… could be wrong), and really only care about it if I fail to reject the null (again, just me… could be wrong).
    As a rule of thumb, always run a Power and Sample Size calculation before your analysis so you know exactly what sample size you need for a desired set of conditions, or what power you are stuck with if your sample size is limited.
    Good luck.  Just my two cents.
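    A minimal sketch of running that up-front calculation (Python/statsmodels rather than Minitab's Power and Sample Size menu; the inputs are hypothetical):

    from statsmodels.stats.power import TTestIndPower

    # Detect a 0.5-standard-deviation shift with alpha = 0.05 and power = 0.85
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                              power=0.85, alternative='two-sided')
    print(round(n_per_group))    # sample size needed in each group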

    0
    #185803

    newbie
    Participant

    If using MTB and wanting to determine your beta when running a one-way ANOVA, I suppose you could run a Power and Sample Size test retroactively to determine power, and then algebraically calculate beta from there (Power = 1 – beta).
    There is probably an easier way, but that should work.
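    Outside MTB, the same retroactive calculation can be sketched like this (Python/statsmodels; the effect size and counts are hypothetical):

    from statsmodels.stats.power import FTestAnovaPower

    # 4 groups, 60 total observations, alpha = 0.05, assumed effect size (Cohen's f) = 0.25
    power = FTestAnovaPower().solve_power(effect_size=0.25, nobs=60,
                                          alpha=0.05, k_groups=4)
    beta = 1 - power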

    0
    #62474

    newbie
    Participant

    I think you are getting DOE and the subsequent analysis confused.  If you are taking happenstance data in ordinal form (ie Likert satisfaction scales) and then running it against various factor settings to determine the magnitude and direction of the relationships of various factors to customer satisfaction, then I believe a better description of what you are doing is running either ordinal logistic or linear regression.
    So I think the better question to ask is: can you regress historical customer satisfaction data against various predictors?  Sure… try it and see if it gives you anything interesting FOR FURTHER STUDY.  The key point here is that you can use what you find to indicate potential relationships between factors, but you can't say one thing causes another…. correlation, sure…. influence or causation, nope…. not unless you control the experimental environment, ie run the DOE.
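    A minimal sketch of the ordinal logistic route (Python, assuming a statsmodels version that ships OrderedModel; the factors and Likert responses are simulated, not real survey data):

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 200
    wait_time = rng.uniform(1, 10, n)          # hypothetical factor
    price = rng.uniform(1, 5, n)               # hypothetical factor
    latent = -0.4 * wait_time - 0.5 * price + rng.logistic(size=n)
    satisfaction = pd.cut(latent, bins=5, labels=False) + 1   # 1-5 Likert-style response

    res = OrderedModel(pd.Series(satisfaction),
                       pd.DataFrame({'wait_time': wait_time, 'price': price}),
                       distr='logit').fit(method='bfgs', disp=False)
    print(res.summary())   # signs/magnitudes suggest relationships, not causation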

    0
    #185734

    newbie
    Participant

    Dieter,
    For the last time, I don’t want to touch your monkey……now is the time in the program when we dance……

    0
    #185733

    newbie
    Participant

    I don't think you will find a consensus as to which capability metric is "the best", but there are situations where one might be a better fit than another.
    With DPMO, how you define your opportunities will make all the difference.  DPU is often quite useful in that it is a bit more responsive to effects than DPMO, can be charted (ie a U chart), and RTY can be derived from it (RTY = e^–DPU, or convert the other way around).  Whatever you use, trying to capture every potential failure for your DPMO calculation would be an error…. do you really have the time and resources to validate 1000 different potential failures in a single part?  If it has failed in the past and hasn't been mistake-proofed, include it.  If it has never failed or hasn't failed in the last 12 months (for example), why consume resources to prove what is already known?
    I would say the metric isn't as important as picking one, standardizing its operational definitions and methodology for a given location or process, and then showing improvement in the metric and within that value stream over time.  Good luck.
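    For reference, the DPU / DPMO / RTY arithmetic in a few lines (the counts and the opportunity definition are made up):

    import math

    defects, units, opportunities = 23, 500, 12
    dpu = defects / units                             # defects per unit = 0.046
    dpmo = defects / (units * opportunities) * 1e6    # hinges entirely on how opportunities are defined
    rty = math.exp(-dpu)                              # rolled throughput yield from DPU (Poisson assumption)
    dpu_back = -math.log(rty)                         # and the conversion back the other way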

    0
    #185724

    newbie
    Participant

    It's the kung fu grip

    0
    #185723

    newbie
    Participant

    Productivity is simply a ratio of what is generated from the process relative to its inputs.  Increasing productivity then requires one to increase the output of the process while holding the inputs constant, hold the outputs constant while decreasing the inputs, or increase the outputs while decreasing the inputs.
    So what is your input and output?  Arrests/man-hour?  Reports/man-hour?  etc.  Define this and work from there.
    Also, beware productivity gains that result in no hard cost savings or increased throughput.  These can be captured under cost avoidance, but that is debatable as a financial metric.
    The key is to establish a direct link from the input (ie man-hours) to some meaningful output (ie number of arrests, etc) and then work to move the ratio in a favorable direction.
    Just my 2 cents

    0
    #185722

    newbie
    Participant

    You are lumping together a variety of different pursuits there… Basic project management would involve chartering up the idea, creating the work breakdown structure, and then managing to it.
    If looking for a project charter, google it or check the blue bar to your left… you will find something helpful.  VOC is something else… it should be captured after the chartering process is complete…. A Stakeholder Analysis should be done on or around the time you are validating the VOC.  They are all DEFINE deliverables.
    Again, they are different deliverables and each a subject unto itself.  Good luck.

    0
    #185667

    newbie
    Participant

    Dave T gave a good synopsis of the LSS opinion on this issue.  I would simply add that, practically speaking, I would want to control vendor quality, not methodology.  Partnering with them in an effort to improve their processes is the ideal situation, but this is a long-term solution, and often difficult to implement…. A practical, short-term containment strategy would be to validate your requirements (ie specs) in the service level agreement and bid it out, using vendor quality as one of several metrics (ie a dashboard of metrics) used to assess and select a given vendor.  This solves your quality problem and motivates your previous supplier to make the changes required to win back your business…. Critical to this approach is that you don't make the mistake of confusing price vs cost… your current vendor may be giving you the best price, but what is the true cost to your organization as a result of their poor quality?  Consider the harmful financial effects of having to "inspect out" poor quality, rework, scrap, longer-than-average processing times, cost of capital, unusable inventory, opportunity cost, transportation, handling, etc.  Low-price vendors with poor quality almost always cost you more in the end.  My 2 cents – Good luck.

    0
    #185390

    newbie
    Participant

    Gentlemen, Thanks again for your help. Have a great weekend.

    0
    #185336

    newbie
    Participant

    Darth/Robert,
    You guys are awesome.  Thanks so much for the help.  Robert, one question (as if there is ever just one question):  I would be using MTB 15 as my software… My log reg choices include binary, ordinal, and nominal…. Which of these would fit the Poisson regression definition?   Thanks!

    0
    #185313

    newbie
    Participant

    So keeping the count visible is desirable, got it.  I will await further pearls of wisdom from Robert.  Thanks Darth!

    0
    #185311

    newbie
    Participant

    Doc,
    ok, that’s what I thought…a predictive quality would be preferred. 

    0
    #185309

    newbie
    Participant

    The data is happenstance, and the counts within the data set range from 0–6, with "0" being an operator didn't use the tooling at all for the day, and 1–6 being the number of times the tooling was used by the operator.  Although currently no value exists beyond 6, it is expected that the counts should move steadily upward.
    Is there a preferred method to deal with this upper category?  I was thinking I would categorize it as >6 for now, and then once the numbers moved into the double digits, I could begin treating it as continuous.  ??

    0
    #185304

    newbie
    Participant

    Or would Chi Sq be a simpler approach, whereby I could run two-variable comparisons to determine significance using count data as the Response with two categorical variables?  I would rather run the log reg, as it will give me significance and magnitude…..I think…
    Thanks for the help!
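    For comparison outside MTB, a Poisson regression on count data can be sketched like this (Python/statsmodels; the data and factor names are hypothetical):

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical counts of tooling uses per operator-day, with two categorical factors
    df = pd.DataFrame({'uses':  [0, 2, 1, 4, 0, 6, 3, 1],
                       'shift': ['day', 'day', 'night', 'night', 'day', 'night', 'day', 'night'],
                       'line':  ['A', 'B', 'A', 'B', 'B', 'A', 'A', 'B']})
    res = smf.glm('uses ~ shift + line', data=df,
                  family=sm.families.Poisson()).fit()
    print(res.summary())    # gives both significance and magnitude, on the log-count scale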

    0
    #185303

    newbie
    Participant

    Hi Doc,
    Thanks so much for the response!  I will be working primarily with categorical factors, with a limited number of covariates. 
    Is there a preference in which log reg you use (ie greater precision, etc)?
    THANKS!

    0
    #185110

    newbie
    Participant

    From afar, this looks like a big old mess, as you have a variety of things working against you:

    You are working with Attribute Data
    You are working with summarized Attribute Data
    You are working with summarized Attribute Data whose collection methods are unknown, taken via a measurement system whose validity is untested or unrecorded.
    I would scrap the happenstance, summarized, ordinal data set – and start anew, using continuous data, a sampling plan, a data collection plan, and an MSA.  Good data makes for easy analytics.
    But Darth will steer you right.
    Best of luck.
     

    0
    #185109

    newbie
    Participant

    That brings up a question I have had for some time.  Is not DOE simply a data collection method whose power lies in the control of the design? For example, I can take happenstance data and run it through my desired form of analysis – ANOVA, Regression, etc – and then take experimentally derived data, using the exact same analytical approach, and achieve more powerful results in the form of causation?  In other words, isn’t the devil in the design, not the analytics?  Thanks!

    0
    #184095

    newbie
    Participant

    No idea what that means…. The FMEA is historically useful in MEASURE when combined with the process map inputs (SIPOC) to determine the elements to be used in your data collection plan.  It can be used in a process- or product-centered format, for example asking how each process step can fail or how each part or connection can fail, and then quantifying those failures.  You essentially use the FMEA to identify and quantify the risks to your process, which should then be validated during data collection.
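    The quantification itself is usually just the risk priority number per failure mode (the scores below are hypothetical):

    # Each scored 1-10 against the team's agreed operational definitions
    severity, occurrence, detection = 7, 4, 5
    rpn = severity * occurrence * detection    # risk priority number = 140; rank and attack the highest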

    0
    #184093

    newbie
    Participant

    I read the same article and had the same thought, although I assumed they were referring to using a transformed data set or to the posted distribution types (ie poisson, binary, etc) as they approximated normality.

    0
    #184009

    newbie
    Participant

    Would not a Monte Carlo-esque simulation be a better choice for such a thing?  I would be leery that the SPC would not be precise enough to adequately forecast from… but certainly time-series forecasting is possible… If using MTB, I believe you will find time-series options and explanations for potential financial modeling….. just my 2 cents….

    0
    #183926

    newbie
    Participant

    DC

    0
    #183910

    newbie
    Participant

    Does the discussion hinge on whether we are talking about binomial vs Poisson distributions?  If we are simply counting defectives at the end of a system of operations, then a simple Yfp would work to show our quality as it appears to our customers as defectives (binomial)…. If we are looking at the defects that exist in the individual operations, then a Yrtp (Poisson) calculation is a different animal, is it not?
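    A quick sketch of the two calculations (the yields and counts are made up):

    import math

    step_yields = [0.98, 0.95, 0.99, 0.97]            # hypothetical first-pass yields per operation
    y_rtp = math.prod(step_yields)                     # rolled throughput yield ~ 0.894

    units_in, defectives_out = 1000, 38                # hypothetical end-of-line counts
    y_fp = (units_in - defectives_out) / units_in      # final yield as the customer sees it = 0.962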

    0
    #183908

    newbie
    Participant

    Ron,
    Are we looking to detect the cause of the defective when it occurs or simply the failure mode?  For example, do we want to assuredly detect the defective unit or the cause of that defective unit?  Thanks!

    0
    #183088

    newbie
    Participant

    Thanks everyone!  I will follow up on some of the references mentioned.  I appreciate the help!

    0
    #182193

    newbie
    Participant

    Ok, so you break it out as planned in your Available Work Time calculations for takt, capacity, etc?  I suppose the argument for not breaking it out would be so that it is not “baked in” as a planned event?  Thanks!

    0
    #182067

    newbie
    Participant

    Thanks Darth. 
    And Obiwan, it was my understanding, as I read through prior threads and talked to more experienced practitioners, that the simple distinctions offered in LSS BB courses for continuous vs. discrete data are not always such "cut and dried" issues.  I have heard of 10-point Likert scales being analyzed as continuous data, minimum numbers of distinct categories being used to decide when to use parametric hypothesis testing, etc.  So if I am misinformed, I would appreciate a clarification… thanks!
     

    0
    #182040

    newbie
    Participant

    Thanks guys!

    0
    #62286

    newbie
    Participant

    Rob, I am working on a similar project. Could you please email me the tool at [email protected]

    0
    #181485

    newbie
    Participant

    We are talking about an existing cell that receives product directly from the dock operation, manually processes it, and returns it to the dock for transport.  I don't believe a pull system is applicable here, as there is no upstream or downstream processing, so no WIP, and thus no need to synchronize multiple process steps.  But I could be wrong…
    Darth's earlier advice as to using an imperfect predictive demand model and a triaged buffer stock to absorb demand fluctuations appears to be a good fit… but I welcome any advice… thanks!

    0
    #181447

    newbie
    Participant

    That helps a great deal.  As always, thanks Doc!

    0
    #181439

    newbie
    Participant

    Additional Info:
    The process is one in which product is picked up from various retail outlets and delivered to a processing facility (on a fixed transportation schedule) for sortation for downstream delivery.
    The volume of incoming product is variable across hours in the day, days of the week, and times of year… with daily volume COV running between 50 and 65 percent.  It is this daily variability that is causing the most issues… it results in ops staffing the line to run at full capacity to handle these daily and unpredictable surges, allowing for a great deal of idle time and excessive work hours.
    This is a service operation, so building to inventory is not an option.  In addition, the facility must meet strict departure times, so simply extending the workday (ie OT) to handle high volumes is off the table.
    The idea of structuring the operations to run according to a traditional image of takt time does not appear to be feasible, as the flow of incoming product volume is highly variable and does not appear to be controllable.
    Soooo – my thought was that perhaps by shortening the production window (ie Available Work Time) you could build inventory at the front of the process, and then when you reach some point of critical mass, staff accordingly, and toggle in additional operators to accommodate unexpected volume during the shift, minimizing idle time while still having the ability to meet demand.  We would not be operating according to traditional takt, but with volume changing significantly and uncontrollably on a daily basis, I thought one way to dampen this sine wave would be to actually build inventory at the front and then standardize the downstream processing…. ok.  done.  Sorry for the book.  Thoughts?

    0
    #181423

    newbie
    Participant

    Hi Mike,
    I was looking through some of the iSixSigma resources and found your Excel table for sample sizes.  Is it correct to say one would need to determine the (d/s) ratio and then look across at the applicable alpha / beta and choose the sample size accordingly… is this right?  Thanks!

    0
    #179257

    newbie
    Participant

    Thanks Stan!

    0
    #179256

    newbie
    Participant

    Thanks doc!

    0
    #179253

    newbie
    Participant

    Thanks for the feedback Stan, but I am still not fully tracking:

    I assume you are telling me it is better to take an initial sample and calculate a preliminary p, then base the sample size calculation on this value.
    I am still confused as to how one incorporates an overall sample size calculation into the np chart sample size.
    And for "np should be greater than or equal to 5", what is np?  The number of nonconformances in the sample?  In other words, the sample size in the np chart should be large enough to ensure at least 5 nonconformances per subgroup (see the sketch below)?  Thanks for your time!
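    Here is how I am currently reading that rule (a sketch with a hypothetical preliminary p-bar) – please correct me if I have it wrong:

    import math

    p_bar = 0.04                      # preliminary estimate of the nonconforming rate
    n = math.ceil(5 / p_bar)          # subgroup size so that n * p_bar >= 5  ->  125
    center = n * p_bar                # np chart centerline
    sigma = math.sqrt(n * p_bar * (1 - p_bar))
    ucl, lcl = center + 3 * sigma, max(0.0, center - 3 * sigma)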

    0
    #179246

    newbie
    Participant

    Once you validate the measurement system, I believe your next step is to determine the stability of your process prior to the capability study.  This requires that you preserved the time order of your data collection.  If you have, then you should be able to develop a subgrouping strategy that makes sense (ie minimizes the "within subgroup" variation) using 75 observations.
    I am not aware of any hard and fast rules for determining subgroup size or frequency, although with continuous data and your limited data set, I would think n=3 and k=25 would be appropriate with an Xbar-R chart.
    If stable, you can conduct your capability study based on the distribution type and specification characteristics.
    75 continuous measures are sufficient to hang your statistical hat on, I believe, and if reporting short-term capability, it should work as your initial baseline measure.
    But double-check this info.  Best of luck.

    0
    #179245

    newbie
    Participant

    Have you already addressed the accuracy and precision of your measurement system and determined stability and distribution type?

    0
    #179080

    newbie
    Participant

    It simply answers the question of how reliable my measurement system is.  Is the variation I am seeing resulting primarily from the parts I am measuring, or is it being introduced via the system I am using to gather the data?
    All measurement systems will introduce variability into your findings; the question is simply, how much, and is that acceptable to me as the investigator?
    And this is no "chicken and egg" scenario.  You are using a known standard to determine accuracy and calibration, and then precision is determined by simple comparisons to prior measurements (ie repeatability and reproducibility).
    That's my take.  You gotta find out how reliable the tachometer is before you start calculating MPG.
    Happy holidays everyone.

    0
    #179076

    newbie
    Participant

    Got it.  Thanks Doc, and Merry Xmas to you too!

    0
    #178461

    newbie
    Participant

    Per usual, thanks a million doc.  I have already been to amazon and will read up on it.  Thanks!

    0
    #178407

    newbie
    Participant

    Ok, so this is going back to the “signal to noise” ratio, right?  Whereby we compare the “within group” variation to the “between group” variation with the resulting ratio being the Rsq?  Hence, a small error term relative to the overall variability (1-SSerror / SStotal) would result in a high Rsq, while a poorly predicting model (ie high SSerror) would give me a poor Rsq. Am I tracking?
     

    0
    #178393

    newbie
    Participant

    Ok, so I am a bit confused.  I understand the previous thread for regression, since we have a response and at least one predictor, but a one-way ANOVA uses a single factor with multiple levels, so what are we considering the response and what are we considering the predictor?
    Example:  There are 4 machines and I would like to run a test of means on cycle time to determine if a true difference exists.  I run the one-way ANOVA and I get a p-value of 0.000.  So I know with a high degree of confidence that at least one cycle time is statistically different.  Now I look at the Rsq value of 4.5%.  The model (what) explains only 4.5% of the variability in (what)?  If the response is cycle time, what is the predictor or factor?  The "levels"?
    Sorry for being so slow on this….. Thanks again!
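    For my own notes, here is how I am picturing it (Python/statsmodels with random stand-in data): the response is cycle time, the predictor is the machine factor itself, and Rsq is the share of cycle-time variation explained by machine-to-machine differences.  Is that right?

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    df = pd.DataFrame({'machine': ['M1', 'M2', 'M3', 'M4'] * 10,
                       'cycle_time': rng.normal(10, 1, 40)})   # stand-in cycle times
    res = smf.ols('cycle_time ~ C(machine)', data=df).fit()
    print(res.f_pvalue, res.rsquared)   # same p-value as the one-way ANOVA, plus Rsq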

    0
    #178387

    newbie
    Participant

    Super!  Thanks Doc.

    0
    #178363

    newbie
    Participant

    Sorry guys, I intended to ask how one interprets and practically explains the R and R sq terms that accompany the ANOVA.  Thanks!

    0
    #178020

    newbie
    Participant

    If using a survey instrument that uses discrete numerical scales (ie 1–5, 1–10, etc), it is considered ordinal data, and it doesn't lend itself to the statistical descriptions of skew and kurtosis you mentioned, which are best applied to continuous data, or data that is measured using a continuous scale or gage.
    You can still provide the insight you are looking for simply by using percentages or frequencies of the scale information.
    It's just a matter of using the right technique with the right data type, but the ideas are the same.

    0
    #177820

    newbie
    Participant

    Not sure why you are calculating sigma values if what you want is to determine statistical difference between two machines.  Would this research question not be answered with a 2 proportion test (since you are dealing with discrete/binary data)?
    Just my 2 cents….good luck!

    0
    #177246

    newbie
    Participant

    Thanks so much!

    0
    #177179

    newbie
    Participant

    Alastair,
    Thanks so much for all the help!

    0
    #177174

    newbie
    Participant

    Alastair,
    How do you prioritize the list of potentially causal factors using SMEs, etc, without some form of opinion gathering (ie voting)?
    The result should be a list of factors from largest to smallest.
    Were you referring to the factors being listed by magnitude and direction of effect after they were regressed?  Thanks!

    0
    #177149

    newbie
    Participant

    Alastair,
    One more for you – I will be working with happenstance data, so it is my understanding our findings can speak to correlation and prediction between the response and the predictors, but will not allow us to speak in terms of causation and influence. If this is a correct summary, then I can see that idea wreaking havoc with the bosses:  “You can’t say that X causes Y…what good is that to us then?” etc…
    How would you make this explanation at an executive level?  Thanks!!

    0
    #177148

    newbie
    Participant

    Thanks Alastair! That answers that…..

    0
    #177096

    newbie
    Participant

    DLW,
    Thanks!  That helps… the part sits in its own container and waits until either the container is full or the shift is over, at which point it moves on for further processing.
    But good to know that batching would be appropriate where it doesn't delay downstream operations.
    Thanks again!

    0
    #176893

    newbie
    Participant

    …that's embarrassing… thanks Stan….

    0
    #176891

    newbie
    Participant

    Stan, Chris….thanks so much!

    0
    #176889

    newbie
    Participant

    Good deal..thank you very much!

    0
    #176879

    newbie
    Participant

    Thanks!!

    0
    #176869

    newbie
    Participant

    Thanks doc.

    0
    #176813

    newbie
    Participant

    Dr R,
    Would not a basic 2-sample t test (regardless of skew) and a variables control chart (staged to show before and after) provide the data requested by the original poster?
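    Something like this is what I had in mind (Python/SciPy with made-up skewed before/after data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    before = rng.gamma(2.0, 3.0, 40)    # hypothetical, skewed 'before' measurements
    after = rng.gamma(2.0, 2.5, 40)     # hypothetical 'after' measurements
    t, p = stats.ttest_ind(before, after, equal_var=False)   # Welch's 2-sample t test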
     

    0
    #176698

    newbie
    Participant

    DLW,
    I will do that.  Thanks again!

    0
    #176696

    newbie
    Participant

    DLW,
    Thanks for taking the time.  And your second paragraph has described the situation accurately.  Hence, my dilemma.  My thought was that inventory and overtime are not available, yet over-staffing is not a desirable option either.
    So my idea was to look at right-sizing into multiple, simple, rapidly deployable cells (which is do-able) and establish an hourly management time frame, giving me 7 opportunities a day to assess demand and adjust to fluctuating volumes by activating one to three additional cells.  That way I can maintain a consistent work distribution pattern for each operator (ie processing time will stay the same for each cell), and yet as volume changes (ie takt changes) I can adjust the cycle time of the line accordingly… but I am talking out of my posterior here, as I have never done anything like this…..
    Any advice would be greatly appreciated.  And thanks again.  This has been really helpful for me.

    0
    #176693

    newbie
    Participant

    DLW,
    It is a service-based, all-manual process operating on a single 8-hr shift with a daily shipping deadline, so variable demand is not countered with additional inventory or overtime, but by toggling operators to add cell capacity.
    The incoming material is a document that directly represents demand.  It has an informational component added to it and is then trucked out to the next downstream entity.  The variability lies in the volume and mix delivered (it exhibits hourly and daily fluctuations as well as seasonality), not in the time of its arrival (ie it arrives to schedule).  And yes, the "outside supplier" is the external customer or consumer.
    So I am trying to determine how to treat the time element here… I just need to know how to account for this permanent variability in demand as it pertains to cell design so I can figure out how best to manage capacity.
    For example, my available work time calculation of Gross Available Time less Planned Downtime does not appear to be an accurate depiction of the situation… it will yield an unrealistic value of AWT and thus skew takt, which in turn will affect such things as planned cycle time, # of operators, toggling calculations, work distribution patterns, etc…
    THANKS!!!!!!!

    0
    #176686

    newbie
    Participant

    Hi everyone,
    Thanks for the feedback, but I am still not getting the answer I need.  I have a transactional process that receives material from an outside supplier in a highly variable manner. 
    How does this impact your takt calculation, particularly in the area of Available Work Time….the cell is available to do work for 420 min/shift, but the variability of material arrival results in the cell sitting idle for periods of time….how is this captured?   Thanks!

    0
    #176643

    newbie
    Participant

    J,
    The problem is that I have a transactional process in which the processing is dependent upon highly variable demand.  I want to know how this impacts such calculations as Takt….the average daily demand is stable over time, but highly variable in the short-term (ie a given day) and the cell is technically available to do work for a given shift, but with a highly sporadic flow of incoming product. 
    Thanks! 

    0
    #176420

    newbie
    Participant

    Thanks doc!

    0
    #176401

    newbie
    Participant

    Hi Robert,
    what would then be the point of using non-parametrics in a test of means if:
    – they are mean based and
    – parametric testing is largely robust to non-normality?
    Where would you want to use them?
    THANKS!

    0
    #176231

    newbie
    Participant

    Hi Stan,
    I know that transformation for the purpose of studying capability isn't necessary, but is it harmful to an analysis?
    And I can’t help feeling as if capability indices themselves are somewhat bogus.  When I read about all the assumptions and variability that go into the various models, would not using a standardized procedure on actual process performance to capture yield / confidence intervals be easier to apply, explain, and manage to? 
    Am I off base here?
    Thank you.

    0
    #176057

    newbie
    Participant

    Very good….ok, I am going with the “torso check”then…..thanks doc!

    0
    #176050

    newbie
    Participant

    Ok, Dr D, so tell me this then… Where do I actually need normality (or need to invoke the CLT through the use of subgrouping)?  I am not seeing the need for it in a lot of areas (despite what my training said), if the following is accurate:

    Capability Studies

    Use a yield-based metric like DPU, PPM, DPMO
    Use MTB to first determine the appropriate distribution type and conduct the capability study accordingly
    Transform the data using the appropriate power setting
    Variability Studies

    Control Charts are robust to it (I know, I know)
    Analytics

    Test of means are largely robust
    Non-parametrics are available
    Regression does not require it (residuals only)
    DOE (essentially a data collection method whereby the  analytical method used – regression, anova, etc – is also robust to a violation of normality)
    It appears that it is necessary in the Power and Sample Size calculations (unless it is a test of means I suppose – so t, z, and ANOVAs are out), when wanting to study Cp/Cpk/Pp/Ppk, or when calculating CI (although I suspect there is a way around the latter). 
    Where am I wrong?
    THANKS!
     

    0
Viewing 100 posts - 1 through 100 (of 174 total)