iSixSigma

Lorax

Forum Replies Created

Viewing 66 posts - 1 through 66 (of 66 total)
  • Author
    Posts
  • #186452

    Lorax
    Participant

Thanks for replying, Robert.
    Yeah, everything is different. Excel is saying:
    y = -0.0099x + 0.6148
    with an r-squared of 0.0538
    Minitab is talking about:
y = 13.58 – 5.404x
with an r-squared of 0.053
    When I apply a polynomial the equations are still different although the r-squared stays relatively consistent.

     

    0
    #62447

    Lorax
    Participant

I can’t send it to you unless I have your e-mail address.

    0
    #184530

    Lorax
    Participant

    Thanks folks,
    I’m leaning more and more toward a GLM to give me the answer as to which Potential Xs are most influential.
    Lorax

    0
    #184393

    Lorax
    Participant

    Thanks Remi, some good thoughts.
    You are right about the possible existence of interactions. I don’t know if a matrix plot is going to help get round this though.
    What needs to be done is the reduction of a big list of potential Xs down to a smaller list which includes only the “juiciest” ones (those which have the biggest impact and are conceivably possible to control or manage).
    Perhaps I should hold off analyzing anything until I get all the data and then do some sort of analysis of them all against the Y at that point (your GLM idea or a multiple regression).
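As a rough sketch of what that screening might look like (hypothetical column names and file, done in Python with statsmodels rather than Minitab):

# Sketch: regress Y on all candidate Xs and rank them by p-value.
# The file and column names here are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("potential_xs.csv")
y = df["Y"]
X = sm.add_constant(df.drop(columns=["Y"]))   # every candidate X plus an intercept

model = sm.OLS(y, X).fit()
print(model.summary())                        # coefficients, p-values, R-sq(adj)
print(model.pvalues.sort_values())            # smallest p-values hint at the "juiciest" Xs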
    Thanks again
    Lorax
     

    0
    #184380

    Lorax
    Participant

    DPMO = Defects Per Million Opportunities
If a single unit only offers the chance to foul up once, then Opportunities = Units.
If each unit offers a number of (proven) chances to be fouled up on, then Opportunities > Units.
It looks like your option 2 is more correct. You could divide an invoice into bits – customer address, arithmetic of the financial section, timeliness and so on – and then call each one an opportunity.
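As an illustration of the arithmetic, a minimal DPMO sketch (made-up invoice numbers):

# DPMO with made-up numbers: 500 invoices, 4 opportunities per invoice
# (customer address, arithmetic, timeliness, coding), 37 defects found in total.
units = 500
opportunities_per_unit = 4
defects = 37

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO = {dpmo:.0f}")   # 18500 defects per million opportunities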
Take the route that best suits your company’s needs.
    Lorax

     

    0
    #184379

    Lorax
    Participant

    Lean allows you to fix problems for which the knowledge of the solution exists somewhere in the organization.
Six Sigma gets the numbers, rather than people, to tell you what is going on, so it is not as dependent on “tribal knowledge”.
    For sure there is a lot of discussion on this topic. Perhaps have a look back at some earlier posts on the same subject.
    Lorax

    0
    #184240

    Lorax
    Participant

    Thanks. Ideal.
You don’t know what you don’t know, huh?

    0
    #62404

    Lorax
    Participant

    Yep,
    We’ve done lots.
I’ll pull something together and fire it through. You still at the same address?
    Lorax

    0
    #62395

    Lorax
    Participant

Sure. What’s your e-mail address?

    0
    #62368

    Lorax
    Participant

    Mornin Robert,
    Thanks for this.
Yes, I plotted the data first.
A quadratic seems to be the best fit. I tried a few other lines and they all either had a poor R-sq (adj) or gave a very small increase in it as a result of making the equation more complex, so I stuck with the quadratic.
The residual pattern looked kinda odd. The variation in residuals got bigger as the LOS got bigger. I’m surmising that this is because the longer a patient is in hospital, the more opportunity there is to extend their stay.
It’s good to hear that 30% is a lot to get for a single factor. It’s a rough and ready measure of the patient’s clinical complexity and does not take into account interactions between things that are wrong (e.g. anemia interacting with diabetes…).
    I really appreciate input on this.
    Lorax
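PS For what it’s worth, a rough sketch of the same sort of quadratic fit outside Minitab (hypothetical column names and file, Python with numpy):

# Sketch of a degree-2 (quadratic) fit, analogous to Minitab's fitted line plot.
# The file and column names are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("los_data.csv")
x = df["complexity"].to_numpy()
y = df["acute_los"].to_numpy()

coeffs = np.polyfit(x, y, deg=2)    # y = c2*x**2 + c1*x + c0
fitted = np.polyval(coeffs, x)
residuals = y - fitted              # plot these against x to see the funnel shape
print("coefficients:", coeffs)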
     

    0
    #62365

    Lorax
    Participant

    Your thoughts are very much appreciated.
    I’m trying to assess how much impact Patient Clinical Complexity has on Length of Stay (LOS).
    I messed around with the data and found that a quadratic regression equation fitted best. (Stat>Regression>Fitted Line plot)
My R-squared (adjusted) number is 30.8%, which seems to be saying that Complexity is a factor in LOS but is most likely not the only one (which stands to reason).
    The next thing I was doing was looking at the SS column of numbers on the ANOVA table and concluding from comparing the values for the “Complexity” and for “Error” that Complexity is responsible for determining about 30% of the LOS for a patient.
    The following is the Session Window:
     Polynomial Regression Analysis: Acutelos versus Unweighted Complexity Score
     The regression equation is
    Acutelos = 4.138 + 0.2462 Unweighted Complexity Score  + 0.2968 Unweighted Complexity Score**2
     
    S = 7.10199     R-Sqr = 30.8   R-Sqr(adj)= 30.8 
     
    Analysis of Variance
    Source        DF      SS       MS       F      P
    Regression     2   50444  25221.8  500.05  0.000
    Error       2245  113234     50.4
    Total       2247  163677
     
     
So what was calculated is that this Complexity measure is contributing 50444 to the variation (the Regression SS) and other stuff is contributing 113234 (the Error SS).
This works out to be:
Complexity causes about 30% of the variation in ALOS.
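As a quick check, the same percentage falls straight out of the SS column:

# R-squared from the ANOVA table: SS(Regression) / SS(Total)
ss_regression = 50444
ss_error = 113234
ss_total = ss_regression + ss_error   # 163678, vs. 163677 in the table (rounding)

r_squared = ss_regression / ss_total
print(f"R-sq = {r_squared:.1%}")      # about 30.8%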
     
    I’m at adamjlennoxhotmail.com if I can give you more details

    0
    #183230

    Lorax
    Participant

    It sounds like a hypothesis test of some sort is needed.
    The specific one you use will depend on the type of data you have for each thing that you are trying to compare:

Is the data normally distributed?
Does the data have outliers?

There are flow charts available all over the internet which guide you to the most appropriate test for comparing the means, or the variances, of the two groups.
    The size of difference detectable between the two groups will be very much impacted by your sample size. Read up on it before starting anything.
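If the data turn out to be roughly normal, a two-sample t-test is a common starting point. A minimal sketch (made-up samples, Python with scipy):

# Minimal two-sample t-test with made-up data.
from scipy import stats

group_a = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3]
group_b = [12.9, 13.1, 12.7, 13.4, 12.8, 13.0]

t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p suggests the means really differ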
     
    Lorax
     

    0
    #62363

    Lorax
    Participant

    The annotated session window output isn’t showing up properly. I’m going to see if I can attach it.
    Lorax
     

    0
    #183228

    Lorax
    Participant

The picture was an image of the stats printout but doesn’t seem to be displaying. I’m going to try to attach it.
    Lorax

    0
    #62178

    Lorax
    Participant

    Fair point. Just look at the deltas away from the institution’s ALOS.
    The trouble comes when you try to do a reality check and let’s say (cue made up numbers) the ALOS is 3 days.
Most people will look at this and say something along the lines of “the last patient I saw was at 30 days – 3 could not possibly be correct”. Then they will ponder it for a while and end up with the thought that 3 could be right if you took into account all patients.
    The trouble is that there is an immense amount of variation around that 3. Some patients have a far higher LOS (ALC?) and some far lower.
    I’m thinking that it might be so noisy that it is of no use for process improvement and that something like Conservable Days, because it takes into consideration CMGs and patient complexity and all, makes a better indicator – its variance indicating the effect of good things done to the process and the effect of the not-so-good too.
     

    0
    #176132

    Lorax
    Participant

    Thanks

    0
    #61775

    Lorax
    Participant

    Mike,
     
    Any sampling will lead to an amount of error.
    The $60,000,000 question is how much error can be tolerated in this specific situation?
     
Is it absolutely necessary to know, to the penny, how much over- or underpayment happened in the period? (If so, you had better go through every single claim and get a number.)
     
    If on the other hand you are trying to roughly assess the size of the underpayment problem to the nearest million dollars, then some approximations are tolerable.
     
    The calculation that you posted doesn’t seem awful if this is the case. If you pick a time period other than 15 months, the number of claims made in that period in the calculation will change (so the 24400 will vary) and compensate.
     
    Lorax

    0
    #157869

    Lorax
    Participant

    Good thoughts annon. I’ve had a bunch of advice on this one and this is what it seems to boil down to:
     
    The purpose of a person whose job is process improvement is to improve processes. They should spend their time either “Following-Up”  on past projects or starting new ones, whatever will bring the greatest good to the organization.
     
An MBB or the initiative champion is the person who can make this call – someone with a “Big Picture” view of what is going on. The individual BB doesn’t have too much say in the matter.
     
    Lorax

    0
    #147089

    Lorax
    Participant

    Thanks folks.
    Lorax

    0
    #146930

    Lorax
    Participant

    Hey Darth,
    I know this is an old post but we’re starting to raise similar questions here. I remembered this thread and…
    The indicator I’m thinking of using is the “ease of getting resources for new projects”.
If getting commitment of people’s time to work on the stuff that is important is becoming like drawing wisdom teeth, then the organization is at the limit of its current capacity.
    Lorax

    0
    #142587

    Lorax
    Participant

    Indeed.
    We won’t see the full benefits which 6S helps to deliver by just sending a couple of people but surely there are some – esp. for the individuals who go.
     
    Lorax
     

    0
    #61064

    Lorax
    Participant

    Nice point re Bed Turns. Thanks.
    The thing that is giving me heartburn right now is the thought that nothing seems to measure the amount of effort which people are putting in to the existing flow rate.
    If our flow measurement for current state is 95 (made-up number), then we could say that at least 15 of this is due to particular staff members putting in efforts that are well above and beyond that which  can realistically be expected from a person.
I wonder if some sort of metric around staff turnover would help, or in fact whether these people are going to continue putting in these huge efforts regardless of what improvements are carried out on the system.
    Lorax

    0
    #142006

    Lorax
    Participant

    Have a look through the items in the blue bar to the left of your screen sir.
    Lorax

    0
    #61058

    Lorax
    Participant

    Also,
     
LOS may not be directly (or clearly) affected by cohorting, but the Dr would have more time to spend with patients as opposed to walking great distances when they are dispersed (i.e. more Value Added (VA) time and less wasted effort).
     
You may not see Length Of Stay (LOS) decreasing (it’s one of those numbers that is difficult to see move), but the quality of care would likely improve and you wouldn’t have to listen to people complaining that they were sure they could discharge patient X but couldn’t get the Dr to write the order because they were at the other end of the hospital.
     
    I think this is at least 3 cents worth now…

    0
    #141083

    Lorax
    Participant

    Thanks folks.

    0
    #141025

    Lorax
    Participant

    Sorry, clunky borrowed computer…

    0
    #139769

    Lorax
    Participant

    tottow,
    Another way to do it is to say that such a small sample doesn’t give a representative picture of the population.
You could illustrate by drawing a few different distributions: a fat normal one (no smart comments please), a thin normal, and a left-skewed one. Then randomly pick a too-small sample from each and show that you get a distribution shape way different from the parent population.
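If you want numbers rather than sketches on a whiteboard, a quick simulation makes the same point (made-up skewed population, sample of 5, Python):

# Sketch: a too-small sample rarely looks like its parent population.
import numpy as np

rng = np.random.default_rng(1)
population = rng.gamma(shape=2.0, scale=3.0, size=100_000)   # a skewed "parent"
sample = rng.choice(population, size=5, replace=False)       # a too-small sample

print("population mean / sd:", round(population.mean(), 2), round(population.std(), 2))
print("sample     mean / sd:", round(sample.mean(), 2), round(sample.std(ddof=1), 2))
# Rerun with a different seed and watch the small sample bounce all over the place.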
    Lorax

    0
    #60972

    Lorax
    Participant

    Not exactly what you are looking for, but Brit gathered some information using this forum. Take a look at the following thread:
    http://healthcare.isixsigma.com/forum/showthread.asp?messageID=1797
    Lorax

    0
    #60968

    Lorax
    Participant

    A quick question:
Is a Healthcare background essential to functioning as a Black Belt in a hospital, or could someone from another sector do it (assuming they had decent 6S experience)?
    What extra challenges would they face? Are there any advantages?

    0
    #60967

    Lorax
    Participant

    Trish,
    I don’t know what your new opportunity involves but in making the move back to the wild I was really surprised at how little of the 6S stuff I used. The most useful thing was to deal with problems using the DMAIC structure. Define clearly what the issue was first…
    That and assess the measurement system before you start believing its results (top tip for this: measure one thing ten times – the variation you get gives a real “quick and dirty” feel for how the thing is working).
    Good luck and keep us posted on progress.
    Lorax
     
    PS
    Ihi.org may be a good resource

    0
    #60965

    Lorax
    Participant

    BB,
    Canada eh?
    Me too.
    What’s your e-mail address?
    Lorax

    0
    #139359

    Lorax
    Participant

    MB,
    Simplest way to do it:
    Cut the numbers out of Minitab, stick them in Excel, format as you need and then stick them back into Minitab
    Lorax

    0
    #139289

    Lorax
    Participant

    I’ll agree with Lass on this one, with one proviso:
    If you are comparing a sample to a single value to determine if the means are the same or different,
    and
    n>30
    Then use the One-sample-Z (which is pronounced “Zed” by the way).
If you are comparing a sample to a single value to determine if the means are the same or different,
    and
    n<30
    and
    The population is normally distributed
    Then
    Use one-sample-t
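A minimal one-sample t sketch for the small-n case (made-up readings, target mean of 10, Python with scipy):

# One-sample t-test: n < 30, data assumed roughly normal, hypothesised mean = 10.
from scipy import stats

sample = [9.8, 10.2, 10.1, 9.7, 10.4, 9.9, 10.3, 10.0, 9.6, 10.2]
t_stat, p_value = stats.ttest_1samp(sample, popmean=10)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a large p: no evidence the mean differs from 10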
     
    Lorax
     

    0
    #139275

    Lorax
    Participant

    You too Oh Dark Lord
    Lorax

    0
    #139257

    Lorax
    Participant

    Start at the beginning.
Gather a team of people who know the area – local area experts. They will provide the local area expertise to solve the problem. The BB is the person with the tools, not all the answers.
    Just make sure in the first team meeting to identify the issue and get everyone wanting to solve it. Life gets easier after that (sort of).
    In saying all that, look into inventory turns.
    Lorax

    0
    #139251

    Lorax
    Participant

    Andy/IE,
    Be careful not to alienate the materials manager. Unless he is going to get removed in the very near future (unlikely), the probability is that they will be a major player in any solution. The (second) last thing you want to do is to make them oppose any change that is proposed.
He is a yahoo – yes, but one who is in a position to make this considerably easier or considerably more difficult. It’s better to make an ally than an enemy (thanks RW).
    In saying that, if it is unavoidable, squish him if he’s stonewalling progress.
    Lorax
     

    0
    #139133

    Lorax
    Participant

You could do a hyp test comparing the before and after situations to show whether the difference is real or not. Then, to highlight the step change, recalculate the average, UCL and LCL at the point when the alteration to the process would have taken effect.
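As a rough sketch of the recalculation for an individuals chart (made-up data and change point):

# Recalculate I-chart centre line and limits before and after a known change point.
import numpy as np

data = np.array([5.1, 4.9, 5.3, 5.0, 5.2, 5.1, 4.8,   # before the change
                 4.2, 4.0, 4.3, 4.1, 3.9, 4.2, 4.0])  # after the change
change_at = 7                                          # made-up change index

def i_chart_limits(x):
    mr_bar = np.mean(np.abs(np.diff(x)))               # average moving range
    centre = x.mean()
    return centre - 2.66 * mr_bar, centre, centre + 2.66 * mr_bar

print("before (LCL, CL, UCL):", i_chart_limits(data[:change_at]))
print("after  (LCL, CL, UCL):", i_chart_limits(data[change_at:]))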
    Lorax
     

    0
    #139128

    Lorax
    Participant

    3rd attempt.
    Short answer – yes
    There is a splendid graph that you can plot (it’s an option available from the Attribute Agreement command). Numbers are also available in the session window. Have a play to find it.
     
    Lorax 

     

    0
    #139125

    Lorax
    Participant

    Second attempt.
    I’ve asked the administrator to attach the graph file.
    Lorax

    0
    #139124

    Lorax
    Participant

I’ve included a graph in this post that you can generate from the “Attribute Agreement” menu option. Each red line represents an op.
     
    Lorax

    0
    #139121

    Lorax
    Participant

Amazon’s second-hand bookstore can turn up some hard-to-find things too. Just watch that you are getting a recent edition.
    Lorax
     

    0
    #139119

    Lorax
    Participant

    Time to annoy HornJM:
The 10% on the burr – is there a +/- tolerance (actual or implied)? If so, stick it into the Minitab R&R calc and see how the comparator does against that. It may be that the choice of parts on the initial R&R was misrepresenting the part-to-part variation, and that the comparator is actually capable of spotting the difference between conformant and non-conformant parts but not of differentiating between one part and the next as they come off the press.
    What does the comparator compare? Weights, dimensions?
    Lorax
     

    0
    #139114

    Lorax
    Participant

    Weight?

    0
    #139113

    Lorax
    Participant

    Julie,
     With that sort of setup you will be able to run an “Attribute Agreement” analysis for sure (the old Minitab rev 13 attribute R&R).
    What are the other pieces of information that you need to get from the study?
    Lorax
    (Am off to harangue Quentin now)

    0
    #139060

    Lorax
    Participant

    Lots are available on the web for free. It takes a bit of searching to find one that fits your needs though.
    Lorax

    0
    #139041

    Lorax
    Participant

    Applets?
    You would just need Java which is free.
    Lorax

    0
    #139040

    Lorax
    Participant

    Julie,
     
    The nicest way I’ve seen this done was along the lines of:
Take 60 parts (more is better, if you can afford the time) with the full range of defects present.
Get a recognized authority (as close to the customer as possible) to identify what is wrong with each.
Do a single R&R inspection exercise where the victim (?!) ops identify the type of defect seen (3 reps per op), noting what each op sees for each part.
Then do an R&R calculation for each type of defect.
    I found most success communicating the results in terms of “false alarms” (the identification of a defect which didn’t exist) and “misses” (the non-identification of defects which did exist). Kappa is a little remote from reality to be understood quickly by some audiences.
    You can then express the results by defect (we can correctly identify X% of defect type YY)
    Or
    By inspector (Jim misses 50% of all defects)
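For the “false alarms” and “misses” numbers, the arithmetic is just counting against the authority’s answer – a minimal sketch with made-up results:

# False alarm and miss rates from an attribute study (made-up calls).
# "truth" is the recognized authority's answer; "calls" are one operator's judgements.
truth = ["good", "defect", "defect", "good", "good", "defect", "good", "good", "defect", "good"]
calls = ["good", "defect", "good",   "defect", "good", "defect", "good", "good", "good",   "good"]

false_alarms = sum(t == "good" and c == "defect" for t, c in zip(truth, calls))
misses = sum(t == "defect" and c == "good" for t, c in zip(truth, calls))

print(f"false alarm rate: {false_alarms / truth.count('good'):.0%}")   # called a defect that wasn't there
print(f"miss rate:        {misses / truth.count('defect'):.0%}")       # failed to spot a real defect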
    Go slowly on the planning of this. It’s great when it works but it’s all too easy to miss something and have your results scrapped.
     
    Sorry, this has been a super-fast response. Post and tell me if I should be clearer and/or more detailed.
     
    Lorax

    0
    #138990

    Lorax
    Participant

    Ahh.
    Now I understand Mike’s comment re Quentin’s post…
    Lorax

    0
    #138966

    Lorax
    Participant

    That sounds fair.
I guess looking at the “% Study Variation” also has the advantage of being right beside “% Tolerance” (as long as you have put in the tolerance gap), so that the most appropriate assessment of the variation coming from the measurement system can be picked.
    Thanks
    Lorax

    0
    #138928

    Lorax
    Participant

    KhalilS,
     
    Cool your jets!
    Don’t get offended at any pointed or slightly sharp replies you get on this forum.
    Both Mike and Darth are very experienced in terms of Six Sigma. It looked like you were intending to sell your 6S services as a newly minted Black Belt immediately after receiving your certification. That peeves people who have sweated blood in industry to build up their experience to a level where they can effectively help others.
     
To quote (roughly) from this forum, it appeared that you were asking how to do brain surgery 15 mins before going into the op. room, from people who had spent half their life studying and practicing it.
     
    The best way to get constructive information from this web site is to start by doing some research yourself (the blue column on the left of the screen and past postings are great sources) and ask non-run-of-the-mill questions.
     
    A Six Sigma project is not an academic exercise. It is a fight with a tough opponent during which you will get hurt. Book learning with all its merits can’t prepare you fully to coach others.
     
    Good Luck
     
    Lorax

    0
    #138569

    Lorax
    Participant

    http://www.goalqpv.com
For a nice one on tools: The Six Sigma Memory Jogger (ISBN 1-57681-044-5).
I’d also recommend Rath & Strong’s Six Sigma Team Pocket Guide (ISBN 0-07-141756-7).
    http://www.books.mcgraw-hill.com 

    0
    #138434

    Lorax
    Participant

    Andy,
Absolutely! Pedantry exists in 6S for sure.
It’s both a good and a bad thing.
The good side results in people requiring knowledge of the measurement system before allowing a project to progress, or demanding data in order to back up/validate someone’s opinion, or going slowly and step-by-step through a lengthy calculation or process…
The bad side produces those nit-picking arguments which can only result in a +/- 0.01% difference in the result of an R&R or regression etc. when the decision as to how best to proceed is as clear as a bell.
I’d agree that ego or some form of lack of security often drives the majority of the bad stuff, esp. where the aim of the discussion is to demonstrate greater knowledge or accuracy than an “opponent”.
The 6S methodology kinda encourages it. You give a person a ton of training and then ask them to do a high priority task. Their training gives them an amount of authority; one way to challenge this is to attack the detail.
Maybe being pedantic and anal is a trait to be encouraged in detail-critical professions, and that’s what’s behind the approach being taken by the question writers?
    Lorax

    0
    #137738

    Lorax
    Participant

    Thanks folks.
    Lorax

    0
    #137394

    Lorax
    Participant

    John,
     
    In answer to the original question…
     
Agreed. One way to see how the installation of Six Sigma in a particular organization is going is to do some surveys, constructing and analyzing the results in such a way that you can derive information telling you whether things are generally thought to be going OK or otherwise.
     
    And/Or.
     
    A group of measures, if well put together, will yield good information too. They may also give an indication as to which particular area is causing a problem. The best grouping of these I’ve seen tracked the following over time:

    Number of candidate projects ready to be started
    The total money saved due to 6S projects
    Number of trained BBs
    Number of projects closed
    There were a total of six.
     
This may be another way to do it.
    Lorax

    0
    #137392

    Lorax
    Participant

    Wilson,
     
    As irritating and cramping as rules are, they were initially laid down by a person with (generally) good intentions.
The mandatory use of Define-type tools – like Process Mapping, the C&E Matrix and FMEA – is not a terrible thing. The more the ground is prepared and known about, the easier it is to drive over in the MAIC steps: you get to learn about the area of focus, plus you get the team to start pulling together.
 
In saying all that, an edict that says you must use Hyp tests or DOEs or any of the other tools normally used later in the DMAIC structure is a bit daft, but I’m sure you can get round the rule-maker by giving reasons why it isn’t an appropriate tool to use and offering up something else which produces comparable results.
     
    Lorax

    0
    #136356

    Lorax
    Participant

kPa = kilopascal, a measure of pressure
    KPI = Key Performance Indicator

    0
    #136211

    Lorax
    Participant

    I’ll go with those. They are a bit more eloquent than the “fitness for use” taught many moons ago.
    Lorax

    0
    #136143

    Lorax
    Participant

If it’s essential that Six Sigma deals with defects, then would it not be valid to adjust the definition of a defect so that you are able to focus on many possible facets?
     
    So as an example, in the case where you wanted to increase the manufacturing capacity of a particular production cell, if you define a defect as:
    “Any shift which produces less than 3,000 units”
    Then focus on eliminating those defects…
     
    The only problem being that this is not cleanly customer focused.

    0
    #136038

    Lorax
    Participant

    Ahh, this old chestnut. When to recalculate the control limits.
    Have a look at forum discussion “SPC Question”
    Posting 86768 by Vinod was the last time this topic was kicked around.
     
    Lorax

    0
    #136006

    Lorax
    Participant

    Bezuzal,
    Pick the suggestion that best suits your specific situation.
    Lorax
     

    0
    #136002

    Lorax
    Participant

    Thars sommat wrong ‘ere (in best Yorkshire accent).
The control limits used should be the same regardless of whether you are looking at a chart which covers a month or a chart which covers a week, IF the charts are made in the same way.
If your month chart uses different subgrouping from your week chart, then the control limits will be different.
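To illustrate the subgrouping point, a rough sketch (made-up data, Python) comparing Xbar-chart limits for two subgroup sizes:

# The same data subgrouped two different ways gives different Xbar-chart limits.
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(loc=50, scale=2, size=120)   # made-up daily values

def xbar_limits(x, n):
    a2 = {2: 1.880, 4: 0.729, 5: 0.577}[n]      # standard A2 constants
    subgroups = x.reshape(-1, n)
    xbar_bar = subgroups.mean(axis=1).mean()
    r_bar = (subgroups.max(axis=1) - subgroups.min(axis=1)).mean()
    return xbar_bar - a2 * r_bar, xbar_bar + a2 * r_bar

print("subgroups of 4:", xbar_limits(data, 4))
print("subgroups of 5:", xbar_limits(data, 5))  # different limit widths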
    Lorax

    0
    #135994

    Lorax
    Participant

    How about an HR Dept?
    Their processing of incoming resumes will (if the organization is large) involve scanning of the documents, identification of key words and then the appropriate routing for the candidate.
    Lorax
     

    0
    #135989

    Lorax
    Participant

    Brad,
    Cool thing: page 60(ish) of the AIAG MSA book. I’m a bit fuzzy on the details and don’t have the book handy so look at the material before proceeding.
    Take one item, measure one particular dimension 10 times. Calculate the standard deviation of the resulting 10 numbers, multiply this value by 2 (let’s call the resulting value X) and use it to illustrate the variation in measurement you can expect to get (expressed by measurement-value +/-X).
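A minimal sketch of that quick check (made-up repeat readings):

# Measure the same item 10 times, then report measurement +/- 2 * standard deviation.
import statistics

readings = [25.02, 25.05, 24.98, 25.01, 25.04, 24.99, 25.03, 25.00, 25.02, 24.97]

spread = 2 * statistics.stdev(readings)   # the "X" described above
print(f"expect roughly {statistics.mean(readings):.2f} +/- {spread:.2f} from this gauge")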
You could do this for a number of different surface finishes, distances or whatever else you know to affect the measurement system, and then use the number(s) which you think are most appropriate.
    You could also include or exclude the effect of removing and then replacing the subject item, rebooting the measurement system… between each of the 10 measurements.
    This might be a good way to express how “good” your measurement systems are
    Lorax
     

    0
    #135269

    Lorax
    Participant

    That isn’t good if my posting was difficult to understand. Sorry. I’m constantly trying to speak and write as clearly as possible. And I clearly failed in that instance. My fault.
    You have hit the nail on the head.
    1.)        If an R&R is to be executed which is intended to compare the amount-of-variation-coming-from-the-measurement-system to the amount-of-variation-coming-from-the-process, then the whole thing hangs on the sampling from the process. If huge variation between the samples exists (eg if different part numbers are included in the samples), then the R&R will probably look fantastic because the measurement system variation will be tiny compared to the big difference between the samples. For one of these R&Rs to be properly understood, the sampling needs to be detailed – perhaps in a variation map. Lots of fairly pointless work lies down this route unless you absolutely need to know how good the gauge is at seeing the variation in the process.
    2.)        The cleaner way to do an R&R (in this case) is to compare the amount-of-variation-coming-from-the-measurement-system to the gap between the tolerances. In Minitab set up for a crossed R&R as per normal and then hit Options and go into the Process tolerance field and enter the distance between your upper spec and your lower. The R&R analysis will execute as usual but you will have one additional column which details how the gauge does in comparison to the tolerance.
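As a rough sketch of what that tolerance comparison boils down to (made-up gauge standard deviation and specs):

# Gauge spread as a percentage of the tolerance gap (made-up numbers).
sigma_gauge = 0.05            # estimated measurement-system standard deviation
usl, lsl = 10.5, 9.5          # upper and lower specs

pct_tolerance = 6 * sigma_gauge / (usl - lsl) * 100
print(f"%Tolerance = {pct_tolerance:.0f}%")   # 30% here; smaller is better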
My suggestion is to do (2.) rather than get into all the tricksy sampling work that comes with (1.). You will get an answer which most closely fits the question that you were asked by your customer.
    Lorax

    0
    #135268

    Lorax
    Participant

    Ahha.
    The one with the contradictory R-squared and delta-from-reality has got a measuring system which, while adequate for discriminating between good and bad product, has a limited resolution.
    Problem solved?
     

    0
    #133795

    Lorax
    Participant

    $

    0