iSixSigma

Andejrad Ich

Forum Replies Created


Viewing 100 posts - 1 through 100 (of 182 total)
  • #141069

    Andejrad Ich
    Participant

    So to tie this up then…
    A plane loaded with oranges leaves San Diego at 10:00 travelling north at 600 mph.  A second plane loaded with apples departs Seattle at 10:10 on the same day travelling south at 550 mph.  If the air traffic controller misses step #14 in a forest, does the Pope hear it?
    Andejrad Ich

    0
    #141064

    Andejrad Ich
    Participant

    Right…but might such a situation be an indication that the operators are being burdened with unnecessary apples?  If the internal apples don’t contribute directly to delivery of oranges, then why demand/measure apples?  Why make operators hate their jobs trying to deliver apples when they have little or nothing to do with oranges?  And shouldn’t great care be taken to line these up?
    Andejrad Ich

    0
    #141061

    Andejrad Ich
    Participant

    And I think the point I might be trying to get at may be (…see, I don’t even know for sure):
    Can a processing line with 5 machines have each machine operating at 2 or 3 sigma, but what comes off the end of the 5 machine process measure as 7 sigma?
    And if so, isn’t it possible to mistakenly beat the hell out of the 5 machine operators for their machines’ 2 or 3 sigma performance when the process is actually the plant’s stellar 7 sigma performer?
    Andejrad Ich

    0
    #141060

    Andejrad Ich
    Participant

    On the subject of landing planes, and sigma measurement perspective:
    The airport (or system of all airports) sees — say — 1,000 planes enter its process every day, and each is a defect opportunity (i.e., an opportunity for a crash).  So the airport system has 1,000 defect opportunities every day, and that system/process measures 7 sigma or better.
    But the air traffic controller(s) have a — say — 25-step procedure to follow for each landing, and missing any of the 25 would be a defect (not that every one would necessarily lead to a crash, but missing a step is a defect).  So the air traffic controller(s) have 25,000 defect opportunities each day. 
    Similarly, the pilot(s) in the plane have a 50-step procedure to execute (throwing switches, radio procedure, checking gages, etc.).  So the pilot(s) have 50,000 defect opportunities each day. 
    So, let’s say, the air traffic controllers routinely skip step #12 on their procedure (they just can’t seem to remember that one) and the pilots tend to miss #7, #22, #27, and #34 on theirs (after all, it’s busy in there during landing).  So in this case, the air traffic controllers are operating at about 40,000 DPMO, which is 3.2 sigma.  The pilots are operating at about 80,000 DPMO, which is 2.9 sigma.  And yet the planes are landing successfully at better than 7 sigma. 
    Just for the purpose of thinking/discussing, anyone have any thoughts, insights, comments on this scenario?  It’s not a quiz or riddle.  There are no right answers (because there isn’t a question).  Just….anyone have any thoughts to input?
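    For anyone who wants to check that arithmetic, here is a minimal sketch of the DPMO-to-sigma conversion, assuming the usual short-term convention of adding the 1.5 sigma shift and assuming scipy is available (the helper name is just mine):

```python
# Minimal sketch: convert DPMO to a sigma level using the conventional
# 1.5-sigma shift.  Assumes scipy; sigma_level() is a made-up helper name.
from scipy.stats import norm

def sigma_level(dpmo, shift=1.5):
    # z-score of the yield, plus the customary 1.5-sigma shift
    return norm.ppf(1 - dpmo / 1_000_000) + shift

print(sigma_level(40_000))   # controllers: ~3.25
print(sigma_level(80_000))   # pilots:      ~2.91
```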
    Andejrad Ich

    0
    #140942

    Andejrad Ich
    Participant

    wrong

    0
    #140923

    Andejrad Ich
    Participant

    Entering all your data as though it were a single population and calculating +/- 3 sigma limits is totally okay for X (individual X) charts (although the original calculation method would estimate sigma from the average moving range).
    For an X-bar chart, the limits are based on +/- 3 standard errors of the means (i.e., 3 x sigma/sqrt n), with sigma estimated using the average range of all sample groups (i.e., R-bar/d2).
    Inappropriate to use the RMS function?  It’s just a question of calculation methods to arrive at the same, correct conclusions.
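    If it helps, here is a quick sketch of that X-bar limit arithmetic (made-up subgroups of n=5; d2 = 2.326 is the standard table constant for that subgroup size; numpy assumed):

```python
# Sketch: X-bar chart limits from subgroup ranges (the R-bar/d2 route).
# Subgroup data are made up; d2 = 2.326 is the textbook constant for n = 5.
import numpy as np

subgroups = np.array([
    [10.1,  9.8, 10.0, 10.3,  9.9],
    [10.2, 10.0,  9.7, 10.1, 10.0],
    [ 9.9, 10.4, 10.1,  9.8, 10.2],
    [10.0, 10.1,  9.9, 10.2, 10.3],
])
n = subgroups.shape[1]
d2 = 2.326
xbarbar = subgroups.mean()                     # grand average
rbar = np.ptp(subgroups, axis=1).mean()        # average subgroup range
sigma_hat = rbar / d2                          # estimated sigma
half_width = 3 * sigma_hat / np.sqrt(n)        # 3 standard errors of the mean
print("UCL =", xbarbar + half_width, " LCL =", xbarbar - half_width)
# Note: the textbook A2 constant (0.577 for n = 5) is just 3/(d2*sqrt(n)),
# so xbarbar +/- A2*rbar produces the same limits.
```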
    Andejrad Ich

    0
    #140911

    Andejrad Ich
    Participant

    Will you be doing true “new product development” or are you simply tooling up to produce an already designed product?  If the latter, then you are probably already too late for DFSS.  If you are simply setting up your line(s) to run something new to your plant, then simply FMEA your process(es). 
    If you are doing true new product development (actually coming up with ideas how the product should be/act/perform), then you would be better served to combine de Bono and DFSS rather than your existing system and DFSS. 
    Andejrad Ich

    0
    #140899

    Andejrad Ich
    Participant

    In fact, charts and even pictures have proven to be too abstract;  I try to present everything using hand puppets (the little bunny always makes them laugh). 
    Andejrad Ich

    0
    #140745

    Andejrad Ich
    Participant

    Okay, you’re getting really close with this one.  I can almost see this.  The process is extremely simple; it’s one box called “consume electricity” and it’s a 24 hour, 7 day per week operation (with lots of variation based on loading).  The input X is supplied, metered electricity.  Defects are KWH (as you said) consumed during peak times/rates.  Or maybe they are KWH beyond 3 sigma of the mean (…consumption is unlikely to be normal however).  But you mentioned KWH above a set target (I’m going to use “limit” here for target).  So what we come to is a process of consumption having a measurable, histogramable distribution of something like KW used per hour AND a specified limit.  Now we have a variation-minimizing …six sigma…project to ensure that the mean consumption is 6 standard deviations from that limit.  You could study/analyze causes of variation and reduce that.  Now that IS a charterable six sigma project. 
    That makes total sense (see, I’m not unreasonable).  You can calculate sigma levels and so demonstrate improvement as a result of the experiment.  But to just say, “I’m doing a six sigma project to reduce our electricity consumption/costs” (without a limit out there) doesn’t cut it. 
    Andejrad Ich

    0
    #140743

    Andejrad Ich
    Participant

    If a guy were sitting in an office with a bank of switches, and his job was to evaluate requests for power in his facility, and he followed a printed, laminated flowchart of if/then steps to evaluate each request and decide whether to turn a requester’s switch on or not, then this would be 6s for sure (because his decision-making process could result in some poor – yes, even considered “defective” – decisions). 
    Just because an electrician grabs a pipe wrench out of a plumber’s toolbox to bend conduit (and it’s a PERFECT tool for that) doesn’t make it a plumbing job. 
    Andejrad Ich
     

    0
    #140729

    Andejrad Ich
    Participant

    Your process outputs a finished product having some occurrence of defective units.  That some of those defective units are attributable to defective materials makes those input materials a contributing X.  Your output sigma level is what it is;  that you know most of the defects in your 2.4 sigma process are caused by incoming materials doesn’t justify calling it a 4.6 sigma process if you count only the defects we create ourselves.  That would be a little like saying, “Oh yeah, we won the game last night 5 to 2 …you know…if you count only the runs they scored as a result of our errors.”
    Andejrad Ich

    0
    #140712

    Andejrad Ich
    Participant

    Yes, it is quite likely you are having difficulty creating a charter for this project in the context of Six Sigma.  It is difficult to apply the core principles of Define, Measure, Analyze, Improve, & Control to the process of electricity consumption.  Come to think of it, maybe electricity consumption isn’t a process at all (?).  If we use our imagination a little, I suppose you could say you are trying to reduce variation in electricity consumption (e.g., to avoid those high-surcharge peak consumption times/rates).  But even if you were to execute this as a perfect 6s project, how would you then control electricity consumption in the future?  I mean, if your facility needs to consume electricity to run the combination of equipment that runs between 10 a.m. and 2 p.m. every day, then….aren’t you going to run that equipment then because that’s what your facility does to stay in business?  Or are you willing to/planning to move some of the operation off to another time of day – but even if you do that, aren’t you still consuming the electricity?  Isn’t the single most significant x affecting this cost the rate/kwh you are charged?  As such, shouldn’t you conclude that all you need to do is simply pay the power company less per kwh?  Wait….they probably won’t let you do that, will they?
    You’re wondering what my point is?  My point is this….fine…histogram your areas of greatest consumption, reschedule some of the consumption off to other times of the day in the name of variation reduction, chart the percentage of time lights are found on in an empty office, map your process of electricity consumption (wait…maybe you can’t do that with this, can you?), calculate your sigma level improvement as a result of the project (wait…you can’t really do that either because there aren’t any defects to apply it to)……..wait….on second thought…
    …maybe this isn’t a six sigma project after all (…gawd, if only people would get to this more often).  It’s a facility engineering project, yes (e.g., we don’t run the heating system AND air conditioning at the same time), but it’s not a six sigma project.  There’s no process to improve here (and 6s is/was/is supposed to be about improving processes).  If you construct a charter that frames this project in terms of six sigma, you are doing legitimate six sigma a severe disservice. 
    Andejrad Ich

    0
    #140594

    Andejrad Ich
    Participant

    The Republic of Texas — Where fireants hold the Lieutenant Governor’s position and 22 seats in the Statehouse.
    Andejrad Ich

    0
    #140558

    Andejrad Ich
    Participant

    I’m from the United States of America (…it borders Texas).
    Andejrad Ich

    0
    #140554

    Andejrad Ich
    Participant

    Hey Joe,
    You ended your question with a preposition.  I think, technically, you meant to ask, “From what country are you?”
    Andejrad Ich

    0
    #140190

    Andejrad Ich
    Participant

    The 66,667 DPMO is correct.
    Sigma level is 3.0.  The assumption of two tails is inappropriate (see linked calculator given previously or the conversion table linked here).
    http://www.moresteam.com/toolbox/t414.cfm
    Andejrad Ich
     

    0
    #140185

    Andejrad Ich
    Participant

    https://www.isixsigma.com/sixsigma/six_sigma_calculator.asp
    opportunities = 300 x 5 = 1500
    defects = 100
     

    0
    #140181

    Andejrad Ich
    Participant

    I will guess you have in normal production some clearly “good” parts, some clearly “bad” parts, and an in-between zone including some parts that even human inspectors have difficulty categorizing (let alone a vision system). 
    Your vision system doesn’t know “good” from “bad”;  it counts pixels.  That’s really all it can do.  It’s all it’s designed to do.  It then compares its pixel count to a limit (limits) the technician entered and then accepts or rejects.  That fact makes it kind of a questionable “inspector” but also a perfect candidate for a gage R&R.  BECAUSE all you need to do is select 10 identified parts (“1-10” or “A-J”) varying from what you consider to be clearly “good” to clearly “bad”, run them through the vision system (noting current technician settings for thresholds, etc.) something like 10 times each and calculate the repeatability of pixel counts for the area of interest to you (like the “area of black ink versus the white background”) for each identified part.  That’s the system’s pixel-counting repeatability. 
    The catch with vision is that, in capturing that variation, you will want to include fluctuation in source lighting, part posture, and placement across all possible locations within the camera’s field. 
    So, now you’re saying, “But how does that help me prove my vision system is adequate/capable?”  You will have demonstrated all you can ask of your vision system (repeatability of pixel count).  Knowing your variation in pixel counting, you can then set limits anywhere between rejecting 99.999% of “bad” parts (at the expense of many inappropriately rejected “good” parts) and accepting 99.999% of “good” parts (at the expense of many inappropriately accepted “bad” parts). 
    Testing a large number of test samples to demonstrate the system’s correct designation of “good” versus “bad” wouldn’t be a test of the vision system repeatability;  it would be a test of the technician’s ability to select appropriate limits. 
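    If it’s useful, here is a rough sketch of what I mean by the pixel-counting repeatability study, with entirely hypothetical numbers (10 parts, 10 passes each, simulated counts); the repeatability is just the pooled within-part standard deviation:

```python
# Sketch: repeatability of pixel counts from repeated runs of the same parts.
# All numbers are hypothetical/simulated; rows = parts "A"-"J", cols = 10 passes.
import numpy as np

rng = np.random.default_rng(1)
true_counts = np.linspace(5_000, 9_000, 10)                   # clearly good -> clearly bad
pixel_counts = true_counts[:, None] + rng.normal(0, 40, size=(10, 10))

within_sd = pixel_counts.std(axis=1, ddof=1)                  # per-part scatter
repeatability = np.sqrt(np.mean(within_sd ** 2))              # pooled within-part sd
print("pixel-count repeatability (1 sd):", round(repeatability, 1))
print("repeatability spread (6 sd):     ", round(6 * repeatability, 1))
```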
    Andejrad Ich

    0
    #140134

    Andejrad Ich
    Participant

    “3.4 defects per million opportunities” — as I recall, that language is pretty prevalent in really…let me think…yes…ALL six sigma literature.  In fact, that sigma level concept really is the basis of Six Sigma.
    Now…given that…if your furnace has a specified window of operation in place to assure acceptable quality (i.e., no defects) of what comes out of it….and if overloading it causes temperature(s) outside that specified window, then — yes — you would want to take steps to reduce the temperature variation because it is incapable with respect to the existing specified limits and excursions outside those specified limits are possible at an unacceptable level. 
    You can try to complicate it, but that’s really all there is to it (…i.e., it’s about NOT producing defects). 
    Andejrad Ich

    0
    #140131

    Andejrad Ich
    Participant

    I like the article.  I thought I was the only one who just directly says what I mean. 
    Six Sigma’s DMAIC (and the PI story before it) are great methods for folks who have never solved a problem in their lives — i.e., if you follow these steps, even you can step your way to a conclusion about a solution.
    We boiled 6s down to – okay, just do the process map with the x’s and y’s, FMEA the process, and implement appropriate controls — AND, if you need to optimize settings along the way, then we’ll help you design an experiment. 
    On the other hand, some of the article reminded me of a Product Manager once telling me, “A good manager just knows the right things to do;  that’s what makes him a good manager.”
    That thinking also seems to support the notion that the operator knows the equipment and the best way to run it.  But I’ve seen Taguchi work too many times — to the drop-jawed amazement of operators — to think that’s true.   
    There has to be a path through the vast middle ground where selecting/supporting decisions is guided by a combination of common sense and math.  Anyway, that’s the path I’ve been trying to trek for all these years … while managers who “just know the right things to do” have promoted each other around me on both the left and right. 
    Andejrad Ich

    0
    #140074

    Andejrad Ich
    Participant

    (sorry…couldn’t help myself)
    seriously….the only legitimate replacement for inspection of process output is demonstrated control of all process parameters affecting acceptability of output.  You would need to see demonstrated control of temp, pressure, dwell, viscosity, moisture content, etc. to KNOW product is acceptable without actually inspecting it  (i.e., SPC charts for all contributing factors). 
    Andejrad Ich

    0
    #140073

    Andejrad Ich
    Participant

    Take about a 6 x 6 inch piece of cardboard and divide it into a pie of about 6 to 8 sections.  Mark one section “0 defects” and the rest of the sections with other numbers of defects in any order up to the maximum amount of defects you would like to ever find in a batch of your product (if you don’t want to EVER reject a batch, make sure the maximum number is less than your allowable limit.  If you think it would be okay to reject some batches, then make one section HIGHER than your allowable limit).  Fix a free-spinning needle/pointer to the center of the board (test that it can spin around freely when you flick it with your finger).  When any batch of finished product is completed, just flick the needle/pointer and write down the number indicated by where the needle/pointer comes to rest.  Use that as your defect rate. 
    Andejrad Ich

    0
    #140072

    Andejrad Ich
    Participant

    …okay…
    well….then…….unless those were really X, mR charts (aka: IX, mR or I, mR), then I would like to retract my earlier answer and resubmit a revised answer…”Yes, you’ve been calculating limits wrong (technically) all these years.”
    But, hey, so your charts weren’t textbook preparations.  So what?  If they accomplished what you wanted them to do, then don’t sweat it.  Contrary to the beliefs and desires of many OCD members of our geeky little community, there really are no SPC troopers out there patrolling the industry, battering down doors and dragging off well-meaning process and quality engineers, well-preserved in slabs of carbonite. 
    Andejrad Ich

    0
    #140062

    Andejrad Ich
    Participant

    I think the point of all this really is that if you’ve been using the A2*R-bar constants found in all SPC textbooks, those calculated limits really are simply an alternate route/estimation of calculating the very same principle of 3*sigma/sqrt(n).  So, no, you haven’t been calculating limits wrong all these years.
    Andejrad Ich

    0
    #140059

    Andejrad Ich
    Participant

     

        probability of:

    n     exactly n    n or less    > n
    0     0.36040      0.36040      0.63960
    1     0.37541      0.73581      0.26419
    2     0.18771      0.92352      0.07648
    3     0.05996      0.98348      0.01652
    4     0.01374      0.99722      0.00278
    5     0.00240      0.99962      0.00038
    6     0.00033      0.99996      0.00004
    7     0.00004      1.00000      0.00000
    8     0.00000      1.00000      0.00000
    9     0.00000      1.00000      0.00000
    10    0.00000      1.00000      0.00000
    Given a very large population of 4% purple marbles and 96% yellow marbles, the probability of drawing a purple marble “n” times in 25 independent draws is given above.   So the probability that as many as 7 of the 25 completed units include a defective B part is at least calculable. 
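    For anyone who wants to reproduce that table, here is a minimal sketch, assuming scipy (binomial with n = 25 draws and p = 0.04):

```python
# Sketch: reproduce the table above from the binomial distribution (n=25, p=0.04).
from scipy.stats import binom

n_draws, p = 25, 0.04
print(" n   exactly n   n or less        > n")
for k in range(11):
    exactly = binom.pmf(k, n_draws, p)
    at_most = binom.cdf(k, n_draws, p)
    print(f"{k:2d}    {exactly:8.5f}    {at_most:8.5f}   {1 - at_most:8.5f}")
```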
    Andejrad Ich

    0
    #139980

    Andejrad Ich
    Participant

    I printed a copy for my notebook of worthwhile references (…although the solid content didn’t really require being propped up by the academic credential reference…but bravo to you on the education). 
    Andejrad Ich

    0
    #139973

    Andejrad Ich
    Participant

    Dawn,
    Guess what?  “…then a report came through from another group about people who aren’t using it, so now they want to know why”
    You know what that means, don’t you?  You really just want to survey “people who aren’t using it.”  That’s your population.
    If that’s the case, and you in fact restrict your survey to that sub-population, then I don’t think it will matter whether you survey online or by phone as long as you do so randomly. 
    If you want to know what would-be users don’t like about the software, target users who used to use it but have since stopped (presumably because they don’t like it).
    Also, as Bob has reminded me, there is little about this that makes it conceivable as a Six Sigma project (in fact, I haven’t been able to make it fit yet – but I like survey science because it’s really easy to mess them up). 
    Andejrad Ich

    0
    #139971

    Andejrad Ich
    Participant

    …sort of my point, Bob…
    …that’s why I suggested just calling top users to find out what they think (there’s really nothing six sigma about any of it – and the desire for statistical results is likely rooted in some manager’s intense desire to justify his/her own prior decision to implement a software change (after all, it’s time for mid-year reviews)). 
    Andejrad Ich

    0
    #139959

    Andejrad Ich
    Participant

    You are trying to assess the general opinion of a population of users.  So, do that by selecting and using one sampling method (…and not using a second method to confirm the results of a first method).  And you are right, given optional response, the complainers will tend to answer/vent and so your results will be skewed (so getting subjects of a random selection on the phone would be a better indication of the population).
    Really, instead of making this a statistics exercise, have you considered the practicality of just calling your highest volume users and asking them?…(maybe that’s only something like the top 100 or 80 or even 20 of your 5000). 
    Andejrad Ich

    0
    #139759

    Andejrad Ich
    Participant

    binomial probabilities:
    finding zero defectives in a sample of 5 units provides 90% confidence that no more than 46% of the sampled population is defective.
    finding zero defectives of 10 sample units provides 90% confidence that no more than 23% is defective
    go to this link and play with sample sizes and confidences until you arrive at a negotiated solution that provides the assurance you are looking for.
    http://www.maxim-ic.com/tools/calculators/index.cfm/path/qa/calc_id/ltpd
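    If you’d rather sanity-check the calculator than trust it, here is a small sketch, assuming scipy.  The exact binomial bound solves (1-p)^n = 1 - confidence; the chi-square (Poisson) approximation commonly used in LTPD calculators is what lines up with the 46% and 23% figures above.  The function names here are just mine:

```python
# Sketch: upper bound (at a given confidence) on the fraction defective when a
# sample of n shows zero failures.  Two routes: exact binomial, and the
# chi-square/Poisson approximation typical of LTPD calculators.
from scipy.stats import chi2

def exact_bound(n, confidence=0.90):
    # Solve (1 - p)**n = 1 - confidence for p
    return 1 - (1 - confidence) ** (1 / n)

def poisson_bound(n, failures=0, confidence=0.90):
    return chi2.ppf(confidence, 2 * (failures + 1)) / (2 * n)

for n in (5, 10):
    print(f"n={n:2d}  exact: {exact_bound(n):.3f}   approx: {poisson_bound(n):.3f}")
# n= 5  exact: 0.369   approx: 0.461
# n=10  exact: 0.206   approx: 0.230
```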

    0
    #139666

    Andejrad Ich
    Participant

    This is way easy.  Watch (okay….measure) how long people are staying at work.  If they aren’t going home to their families at the end of the day, then you are asking too much of them. 
    Andejrad Ich

    0
    #139529

    Andejrad Ich
    Participant

    If you visualize the normal process as something like…1. potential customer visits the site…2. reads about the product…..3. visits the customer service area….4. places an order …..and then use your “i  m  a  g  i  n  a  t  i  o  n”  to say that failure to place an order is a “defective output”, then this could be 6s.  BUT…even at that, there will be far more visitors who do NOT place orders than who do, and so this would never ever be anything even closely resembling a six sigma process (80% defective).  So why try to call it six sigma?
    If the website occasionally drops customers trying to place their orders – that would be a 6s project.  If it occasionally scrambles customer names and shipping addresses or gets the item number wrong, then that would be a 6s project.  If the potential customer chooses to leave the site because of something he/she read there (maybe because of phrasing or maybe because it was just plain wrong), then that would be a 6s project.  Increasing sales can be 6s if increasing facilitation of the order processing system resulted in increased sales.
    But to say, “Gather round everyone.  Now I know we’ve all been concerned about slipping sales lately, but….don’t worry….’cause we have a Six Sigma consultant here and he’s going to help us boost our sales numbers” is just……..a wrong wrong application of a great tool.  And, really, Darth, seriously, aren’t you just a little ashamed of taking checks for doing that?
    Andejrad Ich

    0
    #139519

    Andejrad Ich
    Participant

    …okay….
    ….deep breaths…..(I kinda feel like I’ve landed a role in Invasion of the Body Snatchers)
    Let’s try this as an alternative litmus test:  how can a “Six Sigma” project to redesign a company website provide $50k – $100k in hard savings? (……see, I’m not even going to mention defect reduction or sigma level here)
    Andejrad Ich (…who apparently needs to take the rest of the day off and sit and read a novel or something)

    0
    #139518

    Andejrad Ich
    Participant

    Actually, “Six Sigma” established the use of existing tools on a backbone of DMAIC (an invention of Six Sigma) to identify process parameters contributing to successful/failing output (i.e., defects) and measuring the situation in sigma level (which requires a spec to do).  It’s designed to minimize defects.  Variation reduction and process centering are intended as part of the package…..to reduce defects. 
    Show me a “Six Sigma” project based on a process that can’t be flow charted as a true process and having no measurable occurrence of defective output that can be calculated as a sigma level and I will argue every time that it is NOT a “Six Sigma” project.  I’m NOT saying that whatever it is you are talking about isn’t necessarily worth working on;  I’m just saying that clear definition divides truly value-adding Six Sigma work from all the crap that is passed off as Six Sigma work.  I’m just saying DON’T call those other things “Six Sigma.” 
    “Hey, Tom increased sales by 10% last month!”  “Great!  Is there any way we can write that up as a Six Sigma project?”  “You bet we can!”
    To say that an effort to reduce cycle time of a widget line is a viable candidate for a Six Sigma project is complete nonsense.  If you’re trying to reduce the generation of defective widgets, then YES…..THAT is Six Sigma.  Go ahead and do all the industrial engineering time studies you want…but good luck reporting your improvements in sigma level. 

    0
    #139492

    Andejrad Ich
    Participant

    By sticking to a clear definition of what is a real Six Sigma project (which I think I’ve been pretty clear about), we can all avoid the many crappy B.S. project presentations in PowerPoint we have all had to sit through where the candidate has something on the Define slide and something on the Analyze slide and something on the Improve slide and in the end we’re all expected to clap when…in reality….it was all crap.  Ill-conceived, poorly executed, inconclusive…but, because it was in PowerPoint, it must have been well done.  I’ve seen absolute nonsense presented as completed Six Sigma projects.  Starting with a clear definition of what is Six Sigma brings the whole effort up to a new level of execution – and it makes it easier for the project owner to actually KNOW what he/she is supposed to be doing.  My point is – with clear definitions and the clear expectations that accompany them – we can avoid the total crap that too often hangs its hat on the Six Sigma sign (and so brings 6s down every time it happens).
    Andejrad Ich
    P.S. I think I hear mom’s car in the drive.

    0
    #139489

    Andejrad Ich
    Participant

    But if you haven’t experienced a defect……(as in units out-of-spec)
    ….then why on Earth would you be working on improving the process?
    If you aren’t running occasionally out of spec (because maybe you don’t even have a spec), who cares how wide the distribution is?
    And again, if you don’t have a spec and so can’t generate defects, then you can’t calculate a sigma level……and….therefore…… the effort can’t be a “Six Sigma” project.  Just call it something else…call it what it is. 
    Andejrad Ich
    P.S.  I’m really only posting today because my mom isn’t home and I’m bored

    0
    #139487

    Andejrad Ich
    Participant

    “Lean Sigma” evolved solely as an attempt to legitimize the “you can use this stuff to improve anything” sales pitch. 
    If you are doing time studies to remove non-value-added steps, that’s right-out-of-the-book “Lean.”
    Basic litmus test question:  what is the process sigma level before improvement effort and what is the process sigma level after the project is completed?  There’s only one way to calculate that — using occurrence of generated defects.  If you can’t assemble the project in terms of measurable generated defects (per million opportunities), then there can’t be a sigma level, and if there isn’t a sigma level, then it can’t have been a six sigma project. 
    Andejrad Ich

    0
    #139484

    Andejrad Ich
    Participant

    It’s not pigeon-holing; it’s specializing. 
    This idea that “Six Sigma can be used to fix anything” is killing it. 
    I have a great machine I want to sell you.  It’s a lawn mower.  It’s a great lawn mower.  It’s built to cut grass in lawns.  But you can also use it to trim your hedges.  But that’s not all — you can use it to vacuum your carpets.  And with these attachments, it can rototill your garden, be used as a concrete leveler, remove wallpaper, strip paint, spray paint, chip sticks, dig postholes, powerwash your car and serve as a floor jack and a bug zapper. 
    See, the more you read there, the more you have to question how good it really is as a lawn mower.  Six Sigma is a great lawn mower.  If you want to do those other things, there are tools built for those too. 
    Andejrad Ich 

    0
    #139483

    Andejrad Ich
    Participant

    Variation reduction is part of six sigma for the very purpose of narrowing distributions such that smaller tails lap over the specified limits – resulting in………fewer units out of spec (i.e., fewer defects).
    Andejrad Ich

    0
    #139481

    Andejrad Ich
    Participant

    6s has lost momentum in industry because of consultants bent upon selling it as, “Oh yeah, you can use this stuff for everything” (instead of reserving it for the purpose for which it was designed – defect reduction) and novices who are unclear where its intent and power truly lie.  Yes, you can use many of the shared tools to analyze lean applications – THAT DOES NOT MAKE A TIME STUDY EFFORT INTENDED TO ELIMINATE NON-VALUE ADDED TASKS A SIX SIGMA PROJECT!! 

    0
    #139450

    Andejrad Ich
    Participant

    This would be a really great thread on any LEAN website;  it has no business being posted here.  Six Sigma is about defect reduction. 
    Andejrad Ich

    0
    #139440

    Andejrad Ich
    Participant

    Okay…..again……time-study and throughput-improvement work is NOT a Six Sigma project; it’s a Lean project.  Six Sigma is designed to identify a process with output having some level of defects that can be reduced/improved.  The confusion about what does and does not constitute a 6s project is what dilutes its effectiveness.  Are burrs your defect?  Is there a way to reduce the occurrence of burrs?  What aspects of the process tend to contribute to the creation of burrs?  Are there machine settings that can be optimized to reduce the creation of burrs?  How would I design an experiment to optimize those settings?  What controls do I put in place to make sure the process creates minimal burrs from now on?  — See, that is Six Sigma. 
    Andejrad Ich

    0
    #139351

    Andejrad Ich
    Participant

    If you are controlling a process and are truly interested in being tipped off to the presence of assignable cause variation, then you use all the tests, because each of them is equally indicative, mathematically, of that presence.  To not use all the tests is like asking a detective to solve a murder, “but use only these 3 clues and ignore those other 5 over there.”
    Andejrad Ich

    0
    #139349

    Andejrad Ich
    Participant

    See….this is what I mean;  people think the single point beyond 3 sigma control limits has some special status among tests for special cause variation (even “classic” status).  ALL the western electric tests are equally, mathematically indicative of presence of special cause variation.  The single point beyond 3 sigma test gets the most attention because it’s the easiest for novices to recognize.  It’s not a question of “adding trending examples.”  Trending is JUST as indicative of the presence of special cause variation as is a single point outside 3 sigma limits.  You either watch the process for special cause variation by using all the tests or you don’t — that’s it — that’s how it works.
    Andejrad Ich

    0
    #139206

    Andejrad Ich
    Participant

    One more step, btw —
    You mention min temp to seal, max temp before burning — those are really spec limits in your system/world.  The min temperature might be easy to specify (e.g., it has to get to 156 C or it won’t melt no matter how much time or pressure you use – and it probably even states such a temp on your material spec).  Max temp to prevent burning would be complicated by dwell certainly and perhaps pressure.  But once having established these values as your “never transgress” limits, then make sure your controller operates capably within that specified window. 
    Also, (I’m just curious about this one) you’re using a roller-sealer aren’t you and not a bar-sealer (i.e., you’re not clamping down on something like 3 ft of seam until the timer goes off, opening the clamp, indexing the material forward and clamping down again…….indexing…clamping……indexing…clamping…..right?). 
    Andejrad Ich

    0
    #139204

    Andejrad Ich
    Participant

    temp x pressure x dwell is like the classic application/example for all of this stuff.
    Truly control those (…and maybe variation in materials – which are likely already specified but perhaps not in reliable control) and finished seam testing may well prove to be unnecessary.
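    Since temp x pressure x dwell is the textbook case, here is a bare-bones sketch of the designed-experiment side: a 2^3 full factorial in coded units, with completely made-up seam-strength responses, just to show how the main effects and an interaction fall out:

```python
# Sketch: 2^3 full factorial in temp, pressure, dwell (all values hypothetical)
# and the resulting effect estimates.
import itertools
import numpy as np

factors = ["temp", "pressure", "dwell"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 coded runs
y = np.array([12.0, 14.5, 12.8, 15.9, 13.1, 15.0, 13.6, 17.2])  # made-up seam strengths

# Main effect = mean(y at +1) - mean(y at -1) for each factor column.
for name, col in zip(factors, design.T):
    print(name, "effect:", round(y[col == 1].mean() - y[col == -1].mean(), 2))

# One two-factor interaction, temp*pressure, computed the same way.
tp = design[:, 0] * design[:, 1]
print("temp*pressure effect:", round(y[tp == 1].mean() - y[tp == -1].mean(), 2))
```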
    Andejrad Ich

    0
    #139196

    Andejrad Ich
    Participant

    It sort of depends on whose evaluation method you use (and how old it is)….but the R&R by the original definition has nothing to do with “accuracy”;  the idea there was that, if precise (i.e., a small pattern of indicated values) you can always calibrate the thing to read the “correct” average value (even if by just adding 0.007 to any value indicated by the gage – i.e., “it’s always 0.007 low”).  Now, software tends to give you the option to enter the gage block value/true value(s) of the test items and then reports any average detected difference as “bias” in the final report.  Then, as before, you have the option to get your wrench out and crank the thing up 0.007 if you want to. 
    Andejrad Ich

    0
    #139195

    Andejrad Ich
    Participant

    okay – it’s the seams ……….
    are these seams…..sewn?     melted/welded together? 
    what are the variables about sewing or melting/welding (e.g., control of temperature and pressure) that produce a good seam?  have you DOE’d to know you have optimized those variable settings?  are you controlling those known significant X’s well enough to know the seams being produced are good?
    Andejrad Ich

    0
    #139012

    Andejrad Ich
    Participant

    I have to admit…by no stretch of the imagination is this a six sigma project.  If it were, there would be an identifiable process having a defective output that could be improved.  If your project involved the occurrence of defects generated during the transfer of knowledge from brain to paper, then this would be a six sigma candidate.
    Andejrad Ich

    0
    #138979

    Andejrad Ich
    Participant

    1.  What do you mean by “my yield is attribute”? 
    2.  Do you know what difference in results you will consider to be a significant difference? (i.e., is 87% yield versus 85% a significant improvement?  do you want that to be considered to be significant?  do you want 0.5% to be “significant”?)
     

    0
    #138963

    Andejrad Ich
    Participant

    If you are truly looking to detect variation likely due to the presence of some special cause in your process, then all the tests already apply;  they are all derived to detect sample anomalies that are mathematically unlikely to occur in a truly “in-control”/”statistically constant” process.  If you pick and choose to apply some and not others, then you are actually deciding that you aren’t all that interested in sensitivity to the presence of special causes.  There is a tendency for people to think it’s all about the single point outside UCL or LCL (i.e., that it’s the most important, real rule and the other rules are…….well……”the other lesser rules”).  In reality the single point outside UCL or LCL is just one of the many equally important tools to detect the presence of special cause variation.  So it isn’t a question of whether you have to use all the rules;  it’s the question, “Are you interested in detecting the presence of special cause variation or aren’t you?”
    Andejrad Ich

    0
    #138753

    Andejrad Ich
    Participant

    For output product, your board either works or does not work;  if you build (and ship?) a thousand boards, how many working boards will the customer receive?  That’s what they care about.
    Internally, your automated insertion system can create (creates) some number of defects that ultimately will contribute to non-working boards being output by your process.  Similarly, your wave soldering operation can create unsoldered joints that contribute to the creation of non-working boards.  These are Y’s from those subprocesses only, and you are interested in knowing what causes these internal processing failures……….but you don’t count all the solder joints as opportunities when measuring the sigma level being delivered to the customer.  He just wants all the boards he received to work and couldn’t care less that your soldering success rate is 598 joints correctly soldered out of the 600 on every board (sigma level = 4.2).  He only knows that (in this example) only 35 out of 1000 delivered boards actually work (sigma level = plant closure). 
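    A minimal sketch of the two perspectives in that example, assuming scipy and the usual 1.5-sigma shift (the 35-working-boards figure is just the illustrative number above, not something derived from the joint yield):

```python
# Sketch: sigma level per solder joint vs. per delivered board, using the
# illustrative numbers above (1.5-sigma shift convention; scipy assumed).
from scipy.stats import norm

def sigma_level(defect_rate, shift=1.5):
    return norm.ppf(1 - defect_rate) + shift

print("per-joint sigma:", round(sigma_level(2 / 600), 1))     # 2 bad joints in 600 -> ~4.2
print("per-board sigma:", round(sigma_level(965 / 1000), 1))  # 965 of 1000 boards dead -> negative
```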
    Andejrad Ich

    0
    #138504

    Andejrad Ich
    Participant

    Do your study including all the variability normally associated with EACH length (i.e., separate studies).  You are trying to determine the repeatability of the gage while also demonstrating its ability to discern differences of interest within a measurement application (i.e., inspection of a single lot).  That is to say, if you include pieces of all your lengths, your study will show you (falsely) that the variation contributed by the gage (…the whole point of the study) is infinitesimal in comparison to the variation of your parts (…because you included different sizes and the 22 inch pieces are WAAYYY different from the 6 inch pieces).
    Andejrad

    0
    #138066

    Andejrad Ich
    Participant

    Let’s hope they’re all in Toyotas on this drive to bankruptcy or they’ll never make it.
    Andejrad Ich

    0
    #138009

    Andejrad Ich
    Participant

    Play with numbers in the tool below until you end up with the failures/sample size/confidence result you are looking for.  LTPD is the greatest level of defectives (at confidence) within the population when the number of failures entered is detected in sample size N.
    http://www.maxim-ic.com/tools/calculators/index.cfm/path/qa/calc_id/ltpd
    Andejrad Ich

    0
    #137915

    Andejrad Ich
    Participant

    Your answer lies within the calculator found below.  But you have to understand that the conclusion can be reached only within a level of confidence and an allowable defectives rate that someone has to decide (whether that is you or your customer).  Enter potential sample sizes and confidences to find the highest level of defectives within your 5000 pieces and/or enter the highest level of allowable defectives and confidence to find the necessary sample size (in which zero defectives are found).
    http://www.maxim-ic.com/tools/calculators/index.cfm/path/qa/calc_id/ltpd
    Andejrad Ich

    0
    #137908

    Andejrad Ich
    Participant
    #137389

    Andejrad Ich
    Participant

    Unsure what your industry is, but here goes:  It sounds like you are (correctly) NOT trying to FMEA the followup to corrective promises made by Customer Service Reps — i.e., they are forced to respond to complaints resulting from defective product your company shipped out — as such, the only role CS should play is to provide summary data/pareto regarding the defects being reported by the customers (then CS is really done – and you are already thru M of DMAIC).  Then you need to get your engineering/production team to A, I, and C your way to no more produced defects for customers to complain about.
    Andejrad Ich

    0
    #136920

    Andejrad Ich
    Participant

    Limits are based upon process variation.  Sampling frequency is based upon…..and NOBODY gets this…..dollars.  How much product (i.e., how many dollars….how much rework and sorting) are you willing to risk before detecting that the process shifted?
    Apply more frequent sampling to the more expensive products (cost-risk based sampling frequency).  That’s all there is to it.
    (This is where others will cry out, “but shouldn’t it be based on some sort of known process stability/instability” to which I have to reply in advance, “if you have known process stability/instability, then you have a known out-of-control process — making it NOT a candidate for control charting anyway.”)
    Andejrad Ich

    0
    #136846

    Andejrad Ich
    Participant

    1.  “How much of the haystack do I have to inspect to say there are no needles in it?”
    2.  ANSI Z1.4 or ANSI Z1.9
    3.  OR you could identify the process variables that affect process/product success/failure and control those to prevent defects (There’s a whole field of “Six Sigma” intended to accomplish that – you should google “Six Sigma” and see if you can find anything on “Six Sigma”).
    Andejrad Ich

    0
    #136666

    Andejrad Ich
    Participant

    Limits are based upon process variation.  Sampling frequency is based upon…..and nobody gets this…..dollars.  How much product (i.e., how many dollars….how much rework and sorting) are you willing to risk before detecting that the process shifted?
    Apply more frequent sampling to the more expensive products (cost-risk based sampling frequency).  That’s all there is to it.
    (This is where others will cry out, “but shouldn’t it be based on some sort of known process stability/instability” to which I have to reply in advance, “if you have known process stability/instability, then you have a known out-of-control process — making it NOT a candidate for control charting anyway.”)
    Andejrad Ich

    0
    #136663

    Andejrad Ich
    Participant

    Oh, and by the way, it was Copernicus who proposed a heliocentric solar system; Galileo was merely agreeing with him.
    Andejrad Ich

    0
    #136656

    Andejrad Ich
    Participant

    I think I can get you started:
    Question 1.  Why did you leave our organization? A. Strongly Agree; B. Agree; C. Neutral;  D. Disagree;  E. Strongly Disagree
    I’ve seen this a lot.  Best of luck and I think you will have a lot of success with that.
    Andejrad Ich

    0
    #136237

    Andejrad Ich
    Participant

    Unless you can find terms by which inventory management can be portrayed as a process with a measurable level of defective output, there is nothing about this that makes it a six sigma project. 
    If you can determine and flowchart a process for inventory use, and materials held too long can be identified as a “defect”, then you might have a project.  If you can put a number on “too long”, then you have a USL to measure against. 
    Andejrad Ich

    0
    #136101

    Andejrad Ich
    Participant

    You are looking for a Lean site. 
    This is a Six Sigma site;  Six Sigma is about identifiable processes outputting some level of defects that can be measured as Sigma Level and then reducing that occurrence by applying an established set of analytical and improvement tools in a sequence known as DMAIC.
    Good luck with your Lean effort. 
    Andejrad Ich 

    0
    #135566

    Andejrad Ich
    Participant

    Wait a sec — you aren’t trying to do an attribute R&R to demonstrate that each inspector will determine the same % defective of a large inspected lot. 
    You can prepare an inspection set of 10 identified sample units (pieces) of varying levels of non-conformity, subject the set to the inspectors, and demonstrate that they each recognize the non-conformities repeatably and reproducibly.  It’s actually very similar to an R&R eval of a test instrument. 
    Andejrad Ich

    0
    #135429

    Andejrad Ich
    Participant

    You can’t fault the guy for having an early grasp of “economy of effort.”  This kind of thinking has even gotten some through Yale and into the White House.

    0
    #135376

    Andejrad Ich
    Participant

    Sorry, but the real answer you seek is implied in your question.  Do the work to unearth for real what causes your glueline failure, then control and chart those factors. 
    Andejrad Ich

    0
    #135354

    Andejrad Ich
    Participant

    Just how many acres are there in the field of Six Sigma?
    Andejrad Ich

    0
    #135182

    Andejrad Ich
    Participant

    Anise,
    Okay – in Minitab – run “Store Descriptive Statistics” (under Stat/Basic Statistics) looking for sum only (under the “Statistics” button) and using the date column for “by variables.”
    It will produce new columns to the right of existing for date/total/and N for each date.
    Andejrad Ich

    0
    #135109

    Andejrad Ich
    Participant

    Confusion over the question whether “this” should be a factor or a block.
    Groupthink agreement to not include a factor because “we never adjust that anyway.”
    Compromise in design because “that’s stupid; we would never run low temperature and low pressure at the same time.”
    Compromise in design because “we don’t want to do that many runs.”
    Compromise in design because “do we really need to do that many samples?”
    Compromise in design because “do we really need to do a repetition?”
    The most stunning failure of all is to complete the experiment and find that….nothing is a significant contributor.  Use reasonably broad ranges and include tests for interactions. 
    Andejrad Ich

    0
    #135105

    Andejrad Ich
    Participant

    This is absolutely all true (and likely correctable) waste.
    Just don’t call it a Six Sigma project.
    Andejrad Ich

    0
    #135089

    Andejrad Ich
    Participant

    You are welcome.
    Andejrad Ich

    0
    #135070

    Andejrad Ich
    Participant

    “Is there an identifiable, flowchartable process here having a defective output that can be improved?”  If there is, then Six Sigma is applicable. 
    If there are no specifications, there cannot be any defects (and certainly cannot be a calculated sigma level).  Anyone trying to apply Six Sigma to a project for which the question above cannot be answered with a “yes” is misunderstanding and misapplying an otherwise respectable approach.  I’ve seen a lot of crap improvement projects (usually someone’s pet project – like “improve sales force efficiency”) get jammed into Six Sigma clothes.  And the truth is, they never end up looking good when you’re trying to explain how one part of the pile of B.S. fits in “Define” and another part of the collected B.S. fits in “Analyze”.  Except no one GETS that the problem is that the project should never have been pursued as Six Sigma.  No one gets that it’s a misfitting project, and Six Sigma ends up taking the rap.  The more you diverge from the pristine intent to minimize/eliminate defects output from an identifiable process, the more you risk making Six Sigma look like a foolish waste of time (or look like…..”rubbish”).  The more you try to claim that you can use Six Sigma to fix anything, the more you tear down the foundation of a great discipline WHEN APPLIED TO THE TYPE OF IMPROVEMENTS FOR WHICH IT WAS INTENDED.
    Andejrad Ich

    0
    #135056

    Andejrad Ich
    Participant

    This sounds just almost meaningful………until……
    ……..until you stop and think ‘oh yeah……sigma level’ — there can’t be any calculation of a sigma level. 
    If all legitimate applications of Six Sigma can be measured using a calculated sigma level,
    and the pricing process example does not allow calculation of a sigma level,
    then the pricing process example is not a legitimate application of Six Sigma.
    Andejrad Ich

    0
    #135055

    Andejrad Ich
    Participant

    And so, conducted as a Six Sigma project, it is determined that control of centering of the contributing sub-processes is critical to producing a centered, on-target final assembly. 
    …and the reason the final assembly needs to be centered?  That’s right — to minimize the occurrence of  DEFECTS produced out of specification.
    If it’s not about defects, then it’s (still) not about Six Sigma. 
    Andejrad Ich

    0
    #135033

    Andejrad Ich
    Participant

    …this should be good…….I’m anxious to see such an explanation that doesn’t include “to increase sigma level” – particularly when posted on a Six Sigma site.
    (such a post would be considered “defective”)
    Andejrad Ich
     

    0
    #135027

    Andejrad Ich
    Participant

    I hope that when you are adjusting the mean to “optimize performance”, you are doing so to minimize the probability of generating output out of specification (you know….to increase the sigma level….you know…the sigma level that indicates defect rate…….the sigma level that is the basis for Six Sigma…..that sigma level).
    Andejrad Ich

    0
    #135024

    Andejrad Ich
    Participant

    “performance”?
    See….. that’s what lean and industrial engineering are for.
    Six Sigma is about minimizing/eliminating defects.  If it’s not about eliminating defects, then it’s not an application for Six Sigma.
    Period.
    Andejrad Ich

    0
    #135020

    Andejrad Ich
    Participant

    “nothing to do with defects”?
    …then it has nothing to do with Six Sigma.
    Period.
    Andejrad Ich

    0
    #135013

    Andejrad Ich
    Participant

    Okay – this is classic, classic, classic muddying of Six Sigma.  6s may apply to some of these, but in order for that to be true each must fit the very simple test of….”Is there an identifiable, flowchartable process here having a defective output that can be improved?”
    So…for example….”lead generation” — is there a process here?  Let’s say we have identified a single system source of leads like “magazine insert card responses.”  Fine.  What is a defective output of that process?  Is a response card received in error because the reader simply didn’t understand what the product was a defective output of the process?  Are magazine insert card responses that don’t result in actual sales considered defective leads?  Is a response card sent in a magazine and not returned (because the reader isn’t at all interested in the product) considered a defect in the process of generating leads?
    “Pricing process” might work as a 6s project if you have standard, published pricing that is occasionally misquoted by sales clerks to callers placing orders by phone. 
    “Better forecasting” — ?  It sounds like this might be a process that can have a component of defective output (e.g., “what made us EVER think we would sell 22 million Condoleezza Rice bobble head dolls?!  – how did we come up with THAT?”)
    BUT, while the industry is FULL of consultants who want to sell you on the idea that Six Sigma can be used to fix anything and everything, you would be hard pressed to legitimately apply Six Sigma to “marketing program effectiveness” or “sales force productivity” or “improving margin” (i.e., “we got the sale, but maybe they would have paid more” – is this a defect in a pricing process?  If you are trying to improve margin by reducing cost, then that is a lean and straight-up industrial engineering project) or “sales team effectiveness.”  Again ….”Is there an identifiable, flowchartable process here having a defective output that can be improved?”
    Andejrad Ich

    0
    #134981

    Andejrad Ich
    Participant

    1.  n=10
    2.  you DO know “s” (because you can calculate it for the 10 data in the sample) but you DON’T know actual population sigma
    3.  use t values to calculate confidence interval for population mu
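    A minimal sketch of that confidence-interval calculation, assuming scipy and ten made-up data points:

```python
# Sketch: t-based confidence interval for the population mean, n = 10.
# The data are made up; scipy/numpy are assumed.
import numpy as np
from scipy import stats

data = np.array([9.8, 10.2, 10.0, 9.7, 10.4, 10.1, 9.9, 10.3, 10.0, 9.6])
n = len(data)
xbar, s = data.mean(), data.std(ddof=1)        # sample mean and sample s
t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95%
half_width = t_crit * s / np.sqrt(n)
print(f"95% CI for mu: {xbar - half_width:.3f} to {xbar + half_width:.3f}")
```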
    Andejrad Ich

    0
    #134964

    Andejrad Ich
    Participant

    I’m going to guess, for example, you are looking at emergency room cycle time (time patient enters the door to time seen by physician).  If so, you already have a mess on your hands (because emergency room patients SHOULD be prioritized by severity of injury). 
    Once categorized (e.g., urgent need in one group, minor cuts/scrapes/bruises/twisted ankles in one group, runny noses in one group, etc.), then simply X, mR control chart EACH GROUP on a separate chart, including everything that walks in the door starting on day X until day Y.  The point at which the limits on each chart don’t seem to be changing anymore means you have captured the common cause variation associated with that group.  That will establish your baseline of expected, current state variation in cycle time.  It won’t be a number like n=32.  It’s the point at which you seem to have captured what walks through the door in that category.  When the limits aren’t changing any more, stop updating the chart;  you have the info you were after. 
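    A rough sketch of the X, mR limit arithmetic for any one of those groups (the cycle times are made up; 2.66 and 3.267 are the standard individuals-chart constants):

```python
# Sketch: X (individuals) and mR chart limits for one patient category.
# Cycle times (minutes) are hypothetical; 2.66 = 3/d2 and 3.267 = D4 for n = 2.
import numpy as np

cycle_times = np.array([42, 55, 38, 61, 47, 52, 44, 58, 49, 40, 53, 46])
mr = np.abs(np.diff(cycle_times))              # moving ranges, point to point
xbar, mrbar = cycle_times.mean(), mr.mean()

print("X chart:  CL =", round(xbar, 1),
      " UCL =", round(xbar + 2.66 * mrbar, 1),
      " LCL =", round(xbar - 2.66 * mrbar, 1))
print("mR chart: CL =", round(mrbar, 1), " UCL =", round(3.267 * mrbar, 1))
```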
    Andejrad Ich

    0
    #134768

    Andejrad Ich
    Participant

    I am NOT in construction (so feel free to disregard any/all).
    My first thought was this:  small company? Phase I — are your cost estimating/bid prep/bid delivery, materials procurement, materials delivery, inventory, job scheduling, manpower allocation, billing, and payroll processes error-free?  (not what you were thinking, I’ll bet)   Map those, determine how errors can occur and put preventions in place. 
    Once those are nailed down, THEN you might consider looking at the processes of hanging and taping and mudding, etc.   
    Andejrad Ich

    0
    #134765

    Andejrad Ich
    Participant

    Chris,
    I’ll imagine we are talking about a multi-cavity mold tool.  If you have USL and LSL for individuals coming out of that tool and your Ppk is 2.0 or greater, then do you really care if there are differences (even statistically significant differences) between cavities?  I mean….so what?  Let’s say your capability-by-cavity analysis reveals differences — will you retool/adjust the troublesome cavities until they fall in line?  Will you block off those cavities?  Again, the answer is….as long as the overall population of output product is highly capable, then you will run the tool as is and the effort to run/track/chart individual analyses by cavity is really of no value whatsoever. 
    If, instead of Ppk of 2.0 or greater, you are in a marginal capability situation and you NEED to identify and eliminate sources of variation, then you may logically conduct a capability-by-cavity study (then you still have the problem of what to do with the information once you have it).  The only other time(s) you might want to bother with such an analysis would be in initial acceptance/qualification of a new tool or to check what you believe may be a worn tool (but you will STILL have the problem of what of any use do/can you do with the information).
    Generally, a capability study should be of the overall output of the 24 cavities (e.g.) all producing output with each machine cycle.  That is really the population of parts being piled up at the output of the process.  Is that output capable or not?  Variation showing up in samples of 5 parts collected (for charting) from the pile generated by 24 cavities will be captured/reflected as a source of normal variation already reflected by the chart limits. 
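    For reference, a quick sketch of the overall-output capability number I keep referring to (spec limits and data are made up; numpy assumed):

```python
# Sketch: overall (all-cavity) Ppk of the pile of parts, against made-up specs.
import numpy as np

LSL, USL = 9.40, 10.60                                         # hypothetical spec limits
parts = np.random.default_rng(7).normal(10.0, 0.10, size=500)  # simulated pooled output

mean, s = parts.mean(), parts.std(ddof=1)                      # overall (long-term) sigma
ppk = min(USL - mean, mean - LSL) / (3 * s)
print("Ppk =", round(ppk, 2))   # around 2 here, i.e. don't chase cavity differences
```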
    Andejrad Ich

    0
    #134400

    Andejrad Ich
    Participant

    Amber,
    It’s a calculation of binomial probability.  Again, the link below saves you from churning the actual calculation using factorials.
    http://www.maxim-ic.com/tools/calculators/index.cfm/path/qa/calc_id/ltpd
    Andejrad Ich

    0
    #134320

    Andejrad Ich
    Participant

    Amber,
    Okay – I had a few minutes — I would make this an iterative sampling plan:
    Collect and review a sample of 60.  If finding 0 defects in 60, then you are 95% confident no more than 5% defects exists in your population of transactions. 
    If finding 1 defect in those 60, then collect an additional 35 (N=95) and review those.  If finding no additional defects, then you can still conclude with 95% confidence no greater than 5% exist in your population.
    If finding 1 defect in these additional 35 (you now have found a total of 2), then collect another 32 (N now = 127).  If finding no others (still totalling 2 defects in 127), then you can STILL conclude with 95% confidence no greater than 5% exist in your population. 
    If finding 1 defect in these additional 32, then collect an additional 29.  If finding no more than 3 defects in the sample now N=156, you can conclude with 95% confidence that no greater than 5% of your population is defective. 
    Summarized:
    0 in 60 = pass
    1 in 95 = pass
    2 in 127 = pass
    3 in 156 = pass
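    A short sketch to sanity-check those accept numbers, assuming scipy: at each stage, the probability of passing when the population is truly 5% defective stays just under 5%, which is where the 95% confidence comes from:

```python
# Sketch: check the iterative plan above.  If the population were truly 5%
# defective, the chance of passing each stage should be no more than ~5%.
from scipy.stats import binom

plan = [(0, 60), (1, 95), (2, 127), (3, 156)]   # (max defects allowed, total sampled)
for max_defects, n in plan:
    p_pass = binom.cdf(max_defects, n, 0.05)
    print(f"{max_defects} in {n}: P(pass | 5% defective) = {p_pass:.3f}")
# All four come out just under 0.05, i.e. ~95% confidence in "no more than 5%".
```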
    Andejrad Ich

    0
    #134318

    Andejrad Ich
    Participant

    Amber,
    Below is a reasonable tool to play around with numbers to arrive at the sort of answer/sampling plan/accept-reject plan you are looking for.
    http://www.maxim-ic.com/tools/calculators/index.cfm/path/qa/calc_id/ltpd
    Andejrad Ich

    0
    #134317

    Andejrad Ich
    Participant

    Amber,
    But….additionally for clarification, let’s say the reviewer looks at 73 randomly drawn samples.  There is still the missing piece of meeting or not meeting the 95% accuracy requirement based upon the review of those 73 samples.  It would be incorrect to conclude (as might at first seem reasonable) that the population complies with the requirement if no more than 5% of the 73 are inaccurate (i.e., no more than 3 defective in the sample). 
    This is really a binomial application (each transaction is accurate or not accurate).  As such (I'm going to just jump to the answer here), you are able to conclude (with 95% confidence) that the population of transactions has no more than 5% defects if a sample of 60 contains ZERO defects.  You can bump that to 99% confidence in a conclusion of no more than 5% defects in your population if you find zero defects in a sample of 100 transactions.  If you find even one defect in either of these samples, then all bets are off; you then have to calculate confidence intervals, and for either sample size that calculation will leave a very real possibility that the population has greater than 5% defective transactions. 
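    (The arithmetic behind those two zero-defect sample sizes is just the probability of drawing a clean sample when the population sits right at 5% defective:)
        # If 5% of the population were defective, the chance of drawing a clean sample:
        print(0.95 ** 60)    # ~0.046 -> zero defects in 60 supports 95% confidence
        print(0.95 ** 100)   # ~0.006 -> zero defects in 100 supports 99% confidence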
    Andejrad Ich

    0
    #134230

    Andejrad Ich
    Participant

    …pick up any published work on measurement system analysis/gage R&R and you will find a precise grouping of accurate answers.  Not one (let me check that……yeah, not even one) will describe/define “precision” as being conceptually the same as “accuracy.”  If your stated references do in fact equate these two completely different concepts (i.e., you haven’t simply misinterpreted them), then they are wrong. 
    Happy Friday,
    Andejrad Ich
     

    0
    #134228

    Andejrad Ich
    Participant

    alpha error is the probability of concluding significant difference (i.e., rejecting the null hypothesis) when a significant difference does not actually exist.
    beta error is the probability of concluding no significant difference (i.e., accepting the null hypothesis) when a significant difference DOES actually exist.
    When you conduct a test and the test fails (i.e., you reject Ho), count yourself lucky, because you know your sample size was adequate and you know what you can report and with what confidence (1 - alpha).  When you conduct a test and the test PASSES (you conclude no significant difference), you have to scratch your head for a minute and determine with what confidence you can conclude "no difference" (1 - beta), because it could be that your sample simply wasn't large enough to demonstrate "significance."  Confidence in "no difference" depends on the relationship between sigma, the size of difference you need to detect, and the number of samples in the test.  So, to answer your original question, beta is NOT 1 - alpha (except by complete coincidence).
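    (If you want to put numbers on it, here is a rough normal-approximation sketch in Python; sigma, n, and the difference of interest are assumed values, just to show that beta moves with all three and is not simply 1 - alpha:)
        from scipy.stats import norm

        alpha = 0.05     # chosen risk of a false alarm
        sigma = 2.0      # assumed process standard deviation
        n = 30           # sample size actually tested
        delta = 1.0      # smallest difference you would care about detecting

        z_crit = norm.ppf(1 - alpha / 2)
        power = 1 - norm.cdf(z_crit - delta / (sigma / n ** 0.5))
        print(f"power = {power:.2f}, beta = {1 - power:.2f}")   # beta != 1 - alpha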
    Andejrad Ich

    0
    #134188

    Andejrad Ich
    Participant

    Belgus,
    Take, for example, Peppe's response; it is inaccurate (in that it expresses the wrong answer).  If he were to respond several times with similar responses, all worded slightly differently, he would demonstrate precision (in that they all hover tightly around the same answer) while still being inaccurate (in that all of those similar answers would still be wrong).
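    (In gage terms, the same idea with made-up readings against a known reference value:)
        import numpy as np

        reference = 10.00                                          # the "right answer"
        readings = np.array([12.01, 12.02, 11.99, 12.00, 12.01])   # hypothetical gage readings

        print("spread:", readings.std(ddof=1))          # tiny spread -> very precise
        print("bias:  ", readings.mean() - reference)   # big bias    -> not accurate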
    Andejrad Ich

    0
    #133953

    Andejrad Ich
    Participant

    “I want to be able to prove mathematically that it will not be possible to get to 100% to pass” — Phil, 2006
    A clear statement of intent to misuse math to prove why something canNOT be done (you may as well have added, "That will show them!" or "See, I did this math and see, we're ALWAYS going to produce defects…so there!"). 
    Six Sigma is about using analysis to eliminate defects.  As such, your posted question is COMPLETELY absurd here.  Okay, so the website I referenced is fictitious, but I’m sure there are plenty of real sites out there just gushing with mathematical justifications and excuses for inaction or with compelling arguments for people like you to present to your customer explaining why he/she is wasting your time expecting the product they buy from you to contain zero defects.  I suggest you try http://www.customersaresuchdolts.com. (Six Sigma simply isn’t at all the right place to come if you’re just looking for excuses).
    Andejrad Ich
     

    0
    #133908

    Andejrad Ich
    Participant

    Hey, sorry, but you accidentally posted your question on a six sigma site.  You clearly are looking for http://www.iamwillingtodoanythingbutfixmyprocess.com.  Try there and good luck.

    0
    #133836

    Andejrad Ich
    Participant

    Read “Economic Control Limits” at the site below:
    http://www.infinityqs.com/nav-4-2.asp

    0
    #133834

    Andejrad Ich
    Participant

    I have never ever ever EVER seen a process in a real factory in which the true mu did not float somewhat (so don’t be concerned by that even though Shewhart defined a process “in control” to have only constant mu & sigma). 
    SO…given that mu float DOES exist in real factories (versus textbooks), the question becomes "how much float is too much float?"  The answer is, "When the mu gets so close to the spec limit that the tail of the distribution allows too many individuals to be out of specification."  The approach to establishing control charts on this basis is called "Acceptance Control Charting" by ASQ.  You may also find other references using the term "Economic Control Charting" or "Economic Control Limits." 
    Also remember that whether the sample average is inside or outside the control limits (regardless of what method may have been used to establish those limits) is only ONE of the many tests, all of roughly equal value, that you should be applying in monitoring your process (i.e., watching for runs, trends, etc.). 
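    (A minimal sketch of that idea in Python, with the spec limit, sigma, subgroup size, and allowed tail fraction all assumed for illustration; roughly the construction described in the ASQ acceptance control charting references:)
        from scipy.stats import norm

        USL = 10.5       # assumed upper spec limit
        sigma = 0.10     # assumed standard deviation of individuals
        p_max = 0.001    # largest tolerable fraction of individuals beyond the spec

        # The mean may float upward only this far before the tail exceeds p_max:
        max_mu = USL - norm.ppf(1 - p_max) * sigma

        # Acceptance-style limit for subgroup means (n = 5, alpha = 0.05): a process sitting
        # right at max_mu should put a point beyond the limit only about 5% of the time.
        n, alpha = 5, 0.05
        uacl = max_mu + norm.ppf(1 - alpha) * sigma / n ** 0.5
        print(f"mean may float to ~{max_mu:.3f}; upper acceptance limit for x-bar ~{uacl:.3f}")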
    Andejrad Ich

    0
    #133215

    Andejrad Ich
    Participant

    Solving as a binom prob was my first approach too.  But this really is a finite population of 42 pieces, making it a perfect conditional probability problem. 
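    (For anyone following along, the finite-population version is the hypergeometric calculation.  The defective count and sample size below are made up, since I don't have the original numbers in front of me, but the sketch shows how it differs from the binomial shortcut:)
        from scipy.stats import binom, hypergeom

        N = 42    # the finite population of 42 pieces
        D = 5     # assumed number of defective pieces in the lot (illustrative)
        n = 10    # assumed sample size (illustrative)

        # P(zero defectives in the sample): without replacement vs. the binomial shortcut
        print("hypergeometric:", hypergeom.pmf(0, N, D, n))
        print("binomial approx:", binom.pmf(0, n, D / N))
        # the two differ noticeably when the sample is a sizeable chunk of a small lot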
    Andejrad

    0
    #133214

    Andejrad Ich
    Participant

    1.  Honestly, I can't believe such a thing does (or can) exist; everything about Minitab is clumsy and counter-intuitive.  I have the Rath & Strong Minitab guide, but it ventures into only the simplest of applications (it would be useful for only a short time).
    2.  Having said that, if you spend the time (i.e., months) to slog through the very thorough Minitab help file as each question/application comes up, you will have at your disposal perhaps the best analytical tool available (…but you'll never be quite convinced that you have it mastered). 

    0
Viewing 100 posts - 1 through 100 (of 182 total)