iSixSigma

McD

Forum Replies Created

  • #64917

    McD
    Participant

    Manjunath
    Implementing a full-blown Six Sigma program takes organizational commitment from the top.  It also takes a lot of guidance, especially in the software business.  If you are especially looking for quick wins, and it sounds like you are, then the approach is highly dependent on your particular shop, thus, it is even more important that you get good guidance.  Unfortunately, that guidance doesn’t come cheap. Good Six Sigma consultants are expensive.
    That being said, if you think you cannot afford to invest the sort of effort needed to do it right, you can still get some gains by looking at where your pain points are.
    Take a look at your shop and its practices, and see how they compare to the best shops.  Are you plagued by shipped defects?  Perhaps implementing Fagan inspections, if you are not doing so already, can give you a big boost.  That is one area where you can often make significant gains quickly.
    Requirements tend to be a very big hitter, but since the effort comes long before the result, it takes a little more perseverance.  David Hallowell has a number of excellent articles on this site about getting the requirements right, and deciding which requirements to implement. (Caution: Dave is a principal in SSAI, an outfit I occasionally consult for, so you might consider how objective my view would be.)
    Many of the things you might do aren’t all that different from what you might do in a non-SS environment. The magic in Six Sigma is that you pay attention to the data, going after places where you can get the wins, and actually measuring the results.
    To get the most benefit, you need commitment from the top.  While short term projects can produce great benefits, simply getting that top level commitment is likely to take considerable time.
    Chances are, there are things your shop does well, and things your shop does poorly.  The trick is to recognize which is which, and go after those improvement opportunities without impacting the things you already do well.  Very likely, you would have already improved those things you do poorly if you could recognize them.  Again, if you want large gains, you need to enlist help, and recognize that great rewards don’t always come easily or cheaply.
     

    0
    #64802

    McD
    Participant

    A really good project would be to figure out a way to balance those without enough projects with those who have too many!
    –McD
     

    0
    #64792

    McD
    Participant

    You don’t have to do VOC by showing customers a prototype or mockup.  Ask the questions, especially the penetrating questions.  Search this site for KJ.
    QFD is a wonderful tool for larger projects.  Be careful with it, though; it is easy to spend a terrific amount of time.
    But in any case, the time to do these things was long ago.
     

    0
    #64788

    McD
    Participant

    In addition, the Six Sigma Advantage web site contains a fairly lengthy list of papers on application of Six Sigma to I/T, including a number of case studies.
    –McD

    0
    #64781

    McD
    Participant

    As for measuring reviewer effectiveness, it sounds like you have to do a Gage R&R.
    I’d be a little careful on this.  When you have something like a document that is quite unique, and more importantly, recognizable as an individual item, it is pretty impractical to do a Gage R&R.  This does not mean, however, that you shouldn’t ask the questions.  It is important to understand just how well your measurement system can be expected to perform.
    Here’s the problem … if you hand a tester a dozen bolts one at a time, but within that, you hand him the same bolt twice, he won’t recognize it as the same bolt and he certainly won’t remember his previous measurement, so you can consider the replicate to be a new measurement.
    That isn’t practical with a document.  When you hand the reviewer the same document a second time, he will immediately recognize it as a document he reviewed before, and will probably also remember what he found, so you can’t count on it being a new measurement.  You are simply writing down the same result a second time.
    This is a similar problem to software inspection.  You can’t really do a complete Gage R&R, so you need to do what you can, and ask penetrating questions to gain some confidence in your measurement system.
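    One partial check that can stand in for a full Gage R&R on judgment-based measurements like reviews is an inter-rater agreement study: have two reviewers judge the same set of (different) documents and see how far their agreement exceeds chance.  This is only a sketch of one way to “ask the questions”; the reviewers, documents and results below are invented.

        # Illustrative only: two reviewers judge the same ten documents pass/fail.
        # Cohen's kappa measures agreement beyond what chance alone would produce.
        from collections import Counter

        reviewer_a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
        reviewer_b = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "pass", "pass", "fail"]

        n = len(reviewer_a)
        observed = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / n

        # Chance agreement: probability both raters independently pick the same category.
        count_a, count_b = Counter(reviewer_a), Counter(reviewer_b)
        chance = sum((count_a[c] / n) * (count_b[c] / n) for c in set(reviewer_a) | set(reviewer_b))

        kappa = (observed - chance) / (1 - chance)
        print(f"observed agreement {observed:.2f}, chance {chance:.2f}, kappa {kappa:.2f}")

    A kappa well above zero says the reviewers are at least seeing roughly the same things; it does not replace the careful questioning described above.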
     
     

    0
    #64776

    McD
    Participant

    In addition, a lot of companies consider their Six Sigma program part of their “competitive advantage”, and only release very limited information.
    –McD
     

    0
    #155772

    McD
    Participant

    TC is absolutely correct with his “it depends” answer, but depending on context, there is one thing you should be aware of.  To many managers, schedule is a more reliable estimate of cost than cost estimates are, so managers will sometimes behave as if schedule is more important.
    But from a SS perspective, TC’s “ask your customer” is the right answer!
    –McD
     

    0
    #64702

    McD
    Participant

    Ovidu has it right, you really need to understand what is behind the lean equations.
    The manufacturing stuff is actually pretty applicable; you just need to open your mind up to apply the same thinking to the I/T space.  Understand the 7 wastes, map your process, and then go after eliminating those wastes.
    In Six Sigma, we tend to go after the defects.  Often, we think of a process taking too much time as a defect, but our thinking is about reducing defects.  Lean, in contrast, tends to go after time (at least as applied to I/T).  Defects are one of the wastes, but not the only one.
    But at the core, it starts with your process map.  Once you understand what you really are doing, you can start to design what you ought to be doing.
    –McD
     

    0
    #151881

    McD
    Participant

    A year is a pretty good realization period.  A quarterly or monthly session between the BB and the process owner should generally keep things on track, then a serious review of the financials by the controller’s department at the end of the realization period.
    I think most places aren’t likely to keep quite as close a watch on Green Belt projects.  From what I’ve seen, there is an anticipated savings before something becomes a Black Belt project, but even if everyone later agrees to reduced savings, it is still scrutinized the same as any other BB project.
    The one exception is that projects that are meaningful (maybe a million) are sometimes audited by the corporate (external) auditor.  This adds credibility to the effort.
    –McD
     

    0
    #64632

    McD
    Participant

    This is a common problem in I/T, but rarely is it as serious as perceived.  If you want to improve your score card for validation testing, clearly you have an existing process that you want to improve.  Until you have determined that the current process is so badly broken it has to be thrown out, then you have a DMAIC.
    In Define you will put a stake in the ground for the score you are looking for.  In Measure, you will look at where you are now, and whether that score is reasonable to achieve.  Now, if you need, say, a three sigma improvement, then there are good odds that you need to totally trash your current process and start with a clean sheet of paper.  That is when you pull out the DFSS club.  But until then, assume you don’t have to throw the baby out with the bath water.
    You will need to collect data, perhaps a lot. You may find that you need to do a project to put the data collection in place before you can actually start your project.  But don’t overlook data you already have.  Odds are you have some history on your score card data.  Chances are, there is a lot of data available from which you can deduce those things you would like to know.  And don’t forget your accounting system.  That is often a gold mine for the Black Belt.  If you can get a good cost accountant, the guy who knows where all the bodies are buried, then by all means, grab him.  These are the guys who know what you can learn from that accounting system.  Be tenacious in going after the data.
    –McD
     

    0
    #64631

    McD
    Participant

    If you step back a bit and think about the “Six Sigma approach”, it really is not much more than a somewhat systematized view of engineering common sense.  Smaller projects are improvement projects, and the Six Sigma improvement process, DMAIC, when viewed at a high enough level, is nothing more than what a classical engineer would do:

    Define – Let’s first agree on what we want to accomplish
    Measure – Be sure we can recognize success
    Analyze – Figure out what “knobs” we have to turn
    Improve – Figure out what knobs we are going to turn
    Control – Make sure the knob stays turned
    In I/T, of course, our “knobs” are rarely physical knobs, but other things we can influence.  But really, isn’t this how an engineer would approach any problem?  Just because it is an I/T problem, that is no reason to get sloppy.
    The other dimensions that make Six Sigma successful are a slavish dedication to the facts, something which again, any engineer should do, and a passion for making the data visual.  This last part is something that engineers tend not to do very well, but certainly, in a class project, is going to make your message much more memorable.
    –McD
     

    0
    #151497

    McD
    Participant

    SEH — funny you should ask.  I actually wrote the paper I spoke of, and it hung around while I waited for a few people to review it.  After a frantic January, I am now coming up for air and trying to decide whether it goes on my web site, or I try to figure out how to put it on this site.
    I never did come up with a template, and frankly, a template may even be a bad thing for an MGPP.
    –McD
     

    0
    #64629

    McD
    Participant

    I think far too many people focus on Six Sigma tools, and overlook the underlying methodology and philosophies.
    Clearly, many tools regularly used in development have their place in a Six Sigma effort, and indeed, thinking about the objectives may well lead you to rely more heavily on tools that you may have used less frequently in the past.
    In software, capturing the customer’s needs is the thing we do most poorly, so while something like a “Feature Prioritization List” may be interesting, it is probably lighter weight than what is really needed.  If you don’t use something like a use case model or functional decomp, how are you going to ensure that your requirements flow down to the software artifacts?  But notice how they might be strengthened by following more traditional SS tools like QFD and the Pugh matrix.
    As mentioned in another thread, when we are doing improvement efforts, things like defect densities and escape rates are probably more useful, and certainly more relevant, than some more traditional SS metrics like DPMO and sigma level.
    Keep in mind that the Black Belt certificate doesn’t license you to check your common sense at the door.  Understand your objectives, certainly follow the process, but use whatever tools are at your disposal.
    –McD
     

    0
    #64628

    McD
    Participant

    You might also consider grazing the papers on Six Sigma Advantage’s web site.  They focus on teaching SS as applied to software.  Many of their papers are also on this website.
    –McD
     

    0
    #64627

    McD
    Participant

    > Reducing defects is appropriate even for a service industry.
    I would say especially for a service industry
    –McD
     

    0
    #64609

    McD
    Participant

    DPMO for software is kind of a tough nut.  Generally, you would like the “opportunity” to be defined in some meaningful way, but also in a way that leads to credible values.  You also want a metric that is reasonably reproducible and realistic to produce.
    Defining a line of code as an opportunity will lead to very high sigma values.  Since software is perceived as being unreliable, these very high sigma values will be taken by pretty much everyone as being meaningless.  Lines of code are also something that are not visible to the customer, so are not really in keeping with the philosophy of satisfying the customer.  Further, in the early stages of the process where the most critical work occurs, there are no lines of code to count, so almost by definition DPMOs decrease as the project proceeds.
    Another view is to take a requirement as an opportunity.  If you fail to meet the requirement, you have failed to satisfy the customer, and that is your opportunity.  This can be hard because satisfying a requirement is often difficult to measure objectively.
    Still another view is to take the application as an opportunity.  After all, the application is the chance you have to interact with your customer, and whether his experience is good or bad is really what matters.  Obviously, this doesn’t take into account large versus small applications, and puts a lot of pressure on to categorize defects as somehow not counting.
    There has been a fair bit of work around defining opportunities in terms of size.  Some believe that the most useful approach is to count something like five opportunities per function point.  Although this isn’t terribly satisfying in some ways, it does permit meaningful, objective DPMO measurements.
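    To make the arithmetic concrete, here is a minimal sketch using that convention; the five-opportunities-per-function-point figure is just the rule of thumb mentioned above, and the project numbers are invented.  The 1.5-sigma shift is the usual Six Sigma convention for converting DPMO to a sigma level.

        # Hypothetical project: DPMO using roughly five defect opportunities per
        # function point, as discussed above.  All numbers are invented.
        from statistics import NormalDist

        function_points = 400
        opportunities_per_fp = 5
        defects_found = 37              # defects counted over the measurement window

        opportunities = function_points * opportunities_per_fp
        dpmo = defects_found / opportunities * 1_000_000

        # Conventional conversion to a sigma level, including the 1.5-sigma shift.
        sigma_level = NormalDist().inv_cdf(1 - dpmo / 1_000_000) + 1.5

        print(f"{opportunities} opportunities, DPMO = {dpmo:,.0f}, sigma level = {sigma_level:.2f}")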
    –McD
     

    0
    #64608

    McD
    Participant

    The most effective Black Belt certification programs generally require some sort of validated savings from the Belt before certification.  I would suggest something similar for Lean (although I personally prefer the approach of including Lean training in the BB program).
    The operative word is validated. Get your controller’s department involved to ensure that claimed savings are real.  This will not only keep the projects honest, but it will improve the credibility of the program.  In some programs, projects claiming very large benefits, say, over a million, are audited by an external firm.
    –McD
     
     

    0
    #146698

    McD
    Participant

    One process I have seen work quite well:
    1. An online intro to Six Sigma that everyone takes – pretty short, an hour or two
    2. An online Green Belt overview that each Green Belt takes just before starting the first project – a little more in depth, perhaps 10-20 hours
    3. A slide deck for each tool that the Black Belt delivers to the Green Belt, or sometimes the project team, just before the tool is used
    As others have pointed out, it is critical that the GB have a BB as a mentor.  Having an already-prepared training deck saves the BB a lot of time and ensures that everyone gets the same training.
    It is also critical that time is allocated for the online training.  If you have a detailed time tracking system, make sure that there is a charge number allocated for the training, and that the GBs know they are expected to do the training and charge the time.  If you do any sort of time planning, this needs to be in their capacity budget.  If you expect them to spend 10 hours training this week, then they not only need the 10 hours of budget, but they need 10 hours less of other work.
    One of the differences between a BB and a GB is that a GB knows some of the tools, and a BB knows them all.  The reality is that some BBs tend to be stronger in some tools than others.  The availability of the training deck helps the BB hone his skills with tools that he hasn’t had as much of an opportunity to exploit.
    But the mentoring is key, and with 2000 GBs you better have a pile of BBs
    –McD
     

    0
    #64463

    McD
    Participant

    Ryan
    A large part of the question is how low is low.
    When a Black Belt undertakes a project in a low maturity organization, getting measurements tends to be the big problem.  Software organizations in particular tend to resist measurement, so the culture of the organization can be a large barrier.
    SS is also very process oriented. In many low maturity software shops the process is ill-defined or poorly understood.  In these shops getting a handle on the process can be a challenge.
    If the belt can get the necessary measurement systems in place, then SS can be a potent tool in improving the organization’s maturity.
    –McD

    0
    #64462

    McD
    Participant

    Will,
    You realize you are replying to a two year old posting?
    Dave Hallowell (who happens to work with Bruce) did post a pretty decent article on KJ in the library on this site:
    http://software.isixsigma.com/library/content/c050622b.asp
    Hope this helps
    –McD
     

    0
    #142792

    McD
    Participant

    I’m not so sure I’d be quite that hard on it … well, the “drivel” part I agree with.
    *BUT*, if the Six Sigma program is sponsored by the “quality” department, then almost everything in his left hand column will probably be a characteristic of the SS program.  Sadly, that is true too often.
    There have been plenty of programs where there tend to be localized solutions, unsustained gains, cowardly approaches that won’t challenge how things are done, one-time improvements; all these are symptoms of failing to get real commitment from the top.
    I don’t know from experience, but I suspect that a Lean initiative driven by operations (somehow it doesn’t fit well with quality), and without top management commitment, would have similar limitations.
    –McD
     

    0
    #64429

    McD
    Participant

    Sara
    RF makes some good points, but let me expand a bit.
    There are a few flavors of DFSS, although at the end of the day they are pretty much the same.  IDOV, DMEDI and DMADV differ really only in what the proponents want to advertise.  Let’s talk about DMADV since, for getting started, it is perhaps the easiest.
    The first two phases, Define and Measure, are really no different in goal from the same phases in DMAIC.  I like to think of Define as “Let’s all agree on what we are trying to accomplish”, and Measure as “Let’s make sure we know what success smells like”.  Sure, since DFSS is typically aimed at a much larger return than DMAIC, the intensity with which we go after these two phases is significantly greater, but the overall objective is no different.
    You mention a number of the various steps in your process, but nowhere do you talk about what you are trying to accomplish from a high level.  Think about the overall objective of contract management in your organization.  It is going to be different than it is in some other organization.
    Once you have a clear picture, then understand what is good and bad about the current process.  This takes input from all levels of management, and maybe a lot more.  While in DMAIC, this may involve nothing more than discussions with the process owner, in DFSS this can be a pretty big deal, depending on the size of the benefit we hope to achieve.  Often it will take some research, perhaps benchmarking other companies, perhaps focus groups with our customers.  We want to really understand what we might accomplish.
    Once we think we have a goal, we now need to get all levels of management behind us. DFSS projects generally take a lot of effort, and expect to return huge gains.  If the function is expected to be outsourced in a few years, then perhaps understanding the process is more important than streamlining it.
    At the end of Define, you want to have a clear picture of your objective, and agreement on what you are trying to accomplish.
    Measure is similar; you want validation that you can tell whether you have accomplished what you set out to do, and like Define, it tends to involve a lot more intensity and interaction with customers than the corresponding DMAIC step.  However, coming out of Measure you will have a very clear picture of where you want to go (or IF you want to go there).
    So yes, DFSS is very applicable to designing transactional processes, maybe more applicable than it is to designing manufacturing processes.
    –McD
     

    0
    #64426

    McD
    Participant

    “I would suggest that unless you are in Quality/Testing, it may not be as useful a tool for you”
    Boy do I disagree with that!  Within the larger I/T context, testing is probably the place with the fewest opportunities (not that there isn’t plenty of hay to be made there)
    As best I can tell, the greatest opportunities are in development.  What happens at the beginning of a project has the greatest impact on life cycle costs, so fixing the up front mistakes tends to have the greatest benefit.
    Support is also an area with plenty of opportunity.  Rarely is support well tuned to the need.  Often things are done for historical reasons that waste resources, and badly needed things aren’t done, also usually for historical reasons.
    We can often find ways to test with less effort, or more effectiveness, but this typically doesn’t have the impact of fixing the problem up front.  Not that there isn’t money to be made here, but usually the opportunity isn’t as large, and usually testing isn’t as badly broken as development, either.
    –McD

    0
    #64417

    McD
    Participant

    Mukesh
    Be a little careful in looking at tools.  Six Sigma is more about the process than the tools, and although most Six Sigma tools are applicable in software, and generally superior to the traditional software tools, there are areas where tools less commonly used in manufacturing are especially valuable in software.
    Further, don’t get DMAIC and DFSS confused. In general (very general), you would use DMAIC to improve your development process and DFSS to design the software.  This doesn’t mean that you might not have a process so bad that you need a clean sheet of paper, and thus would choose DFSS for your process, nor does it exclude the possibility of using DMAIC to improve a piece of software, but these are the exceptions.
    If you think about a new piece of software, problem one is understanding the customer needs.  We have some traditional tools that work reasonably well, and QFD can help us translate the user needs into software features.  But often our understanding of the user needs is weak.  A recent posting spoke of “empathy” as a high priority requirement.  Clearly, that sort of requirement tells us nothing.  Tools like KJ can help us get a deep understanding of what the user is trying to tell us.  More traditional market research tools such as focus groups and surveys shouldn’t be ignored either, but those are often best done by market research organizations that have expertise in the area.
    Once we have taken the user needs and translated them into features perhaps using QFD, the Pugh matrix is a common way to make a feature selection. However, for a more complex situation the Pugh matrix can be a little bit of a problem.
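    As a minimal sketch of the Pugh-style comparison just mentioned, with invented features, criteria and weights: each candidate is scored better (+1), same (0) or worse (-1) than a chosen baseline on each weighted criterion, and the weighted totals guide, but do not dictate, the selection.

        # Hypothetical Pugh matrix: concepts scored against a baseline on weighted criteria.
        criteria = {"customer value": 5, "development cost": 3, "schedule risk": 2}

        # +1 better than baseline, 0 same, -1 worse (invented scores).
        concepts = {
            "concept A": {"customer value": +1, "development cost": -1, "schedule risk": 0},
            "concept B": {"customer value": +1, "development cost": 0,  "schedule risk": -1},
            "concept C": {"customer value": 0,  "development cost": +1, "schedule risk": +1},
        }

        for name, scores in concepts.items():
            total = sum(criteria[c] * scores[c] for c in criteria)
            print(f"{name}: weighted score {total:+d} versus baseline")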
    If we use conjoint analysis to understand the Kano sensitivities, and understand software sizing, we can model the feature-customer satisfaction space, and make data-based decisions on features instead of just putting a finger in the air.
    At this point, we can create our DFSS scorecard and then we can vet future design decisions against their effect on the scorecard, again allowing us to move our decision making in a more data based direction.
    By maintaining our traceability of requirements down to code, perhaps with the help of a little more QFD, we can track whether the feature costs we expected can be maintained, and if course corrections are needed, we can again make defensible decisions.
    As we get down into testing, this traceability helps us understand our coverage of what is important to the user, and of course, DOE, as well as a number of other tools, can often be helpful in developing our test strategy.
    Much of this relies on sizing, which is a place where we tend to be weak in software, but in manufacturing, it is so obvious it doesn’t even bear mention. A successful Six Sigma practitioner in software needs a deep understanding of software sizing, and a commitment to tracking the product size.
    At the end of the day, building software really isn’t all that different from building a car. Once we start thinking about the problem in a systematic way, and with a DFSS prejudice, we find that most of the tools are applicable, and where they aren’t, there is some software analogue that accomplishes the same goal.
    –McD
     

    0
    #64416

    McD
    Participant

    Ram
    There are many organizations that will certify you, and they are quite uneven I’m afraid. What you do with the certification is really what matters.  An insightful employer will ask you about your projects to see to what extent you really embraced the methodology.  When you are looking for employment in Six Sigma, generally the certification will get you the interview, but your past results will get you the job.  So it is best if the certifying organization requires a project or two, and you can find good, meaningful, projects.
    If your intent is to use Six Sigma tools at your present organization, and you have the ability to influence the organization, then getting relevant training is perhaps even more important.
    Not very many organizations provide training in Six Sigma specifically oriented toward software. The basics of Six Sigma, of course, apply to any discipline, but for transactional disciplines in general, and software in particular, it can be difficult to take manufacturing examples and see how they relate.  So if the trainers have good, relevant, examples, you are going to have an easier time understanding how they apply.
    The other problem IT belts have is that the technology behind the software process is really not all that well understood by most practitioners. A person can be a Java or C++ expert, and have very little understanding of the dynamic of the software process.  Some Six Sigma for software trainers expose you to some of this technology, and show how you apply it in your Six Sigma processes. Again, this can be very valuable in helping you get off to a quick start in improving your organization.
    Some of these outfits occasionally have open enrollment classes, but it is relatively unusual.  A more common approach is for a company to hire an organization to train their engineers.  But open enrollment classes do happen; you just may need to be patient to find one in your area (who wants to travel these days!).
    From what I can tell, Six Sigma Advantage has a leg up in this area, but let me quickly admit that I occasionally train for them, so it is perhaps a little selfish to recommend them.  But there are few choices in the IT area.
    –McD
     

    0
    #64387

    McD
    Participant

    At the end of the day, you would like to constrain warranty costs, so a comparison of warranty costs would be the ideal approach.  However, this is generally the longest feedback loop, so more commonly, teams compare phase containment against adjustments in the process.
    As far as I have seen, “reviews” generally don’t produce very satisfying results.  Typically it can be difficult to measure the improvement.  However, inspections can often produce dramatic results.  Some of the more or less standard rules around inspections have been developed by adjusting inspection parameters and analyzing the resultant containment rates.
    You won’t find a lot of case studies out there because companies who have made dramatic inroads on their warranty costs are often embarrassed to admit how bad they were.
    –McD
     

    0
    #64386

    McD
    Participant

    In the sort of “stereotypical” case, QFD can help us understand the priority of specific product characteristics as driven by prioritized customer needs.  See http://www.is-sixsigma.com/index.shtml?ID=36 for an example, click on “ATV Measure” on the left.
    Later on, we can use QFD to ensure coverage of those product characteristics by our design elements, and if we so desire, we can continue that process downward to whatever level of detail we want.  I don’t have any good software examples of this, but automotive examples abound.
    QFD can also be used in gap analysis, to help us understand where the product, or our development process, may be falling short.
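    To make the mechanics concrete, here is a minimal sketch of the core QFD roll-down arithmetic: each product characteristic inherits a priority equal to the sum of customer need importance times relationship strength, using the conventional 9/3/1 relationship scale.  The needs, characteristics and numbers below are invented.

        # Hypothetical QFD roll-down: need importances times relationship strengths
        # (9 = strong, 3 = moderate, 1 = weak, 0 = none) give characteristic priorities.
        needs = {"easy to learn": 5, "fast response": 4, "accurate results": 3}

        relationships = {      # rows: customer needs, columns: product characteristics
            "easy to learn":    {"UI redesign": 9, "caching layer": 0, "validation rules": 1},
            "fast response":    {"UI redesign": 1, "caching layer": 9, "validation rules": 0},
            "accurate results": {"UI redesign": 0, "caching layer": 1, "validation rules": 9},
        }

        characteristics = {c for row in relationships.values() for c in row}
        for c in sorted(characteristics):
            priority = sum(needs[n] * relationships[n][c] for n in needs)
            print(f"{c}: priority {priority}")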
    One issue with QFD is that it can take an enormous amount of time, and can sometimes reap very few benefits.  It is easy for a Black Belt to get enamoured of the pretty charts and lose sight of the objective.  Unfortunately, the only cure I know of for this is experience. QFD can also yield great rewards, so it isn’t a tool you really want to ignore.
    I would caution you to pay attention to the time you spend and have a clear vision of the outcome you expect.
    –McD
     

    0
    #140957

    McD
    Participant

    They’re only the same size if your experience is limited to vending machines.  I often pick up the little cans for when I just want to wet my whistle before bed and the regular cans are too much.  The other day I was going for a long drive and my wife picked up a tall can for me.  Too much – gets warm before you can drink it.
    –McD
     

    0
    #140955

    McD
    Participant

    “you will also need to give some serious thought to the physical meaning of the equations”
    I would argue that this is the thing to do first.  At its best, regression is no more than an educated guess, supported by some observation.  If you have a first principles reason to suspect some relationship, accommodate that first.
    For some reason, people operating any process always believe that their process is somehow special.  But the reality is that all processes must follow the laws of physics.  Take into account the things you know first.  There really isn’t much to be gained by rediscovering ancient laws.  Once you have removed the known effects, now surprises in the data aren’t masked by potentially large influences of the basic science of the problem.
    Now that you have a model that describes how you are different from what you should expect, once again follow Robert’s advice and understand what the model is telling you. If something is very counterintuitive, it probably indicates that there is something going on you don’t understand.  Approach it with an open mind, but don’t be willing to easily be led off into the weeds.  Understand what it means, and dig into the mechanism.
    Remember, regression can give us insight into the problem, and can point in the direction of causes.  But it can never prove a cause by itself.  It can only show relationships.  If a relationship is suspect, suspect it!
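    A minimal sketch of the “take into account the things you know first” idea, on entirely synthetic data: the temperature effect is treated as known from first principles and subtracted out, and only the unexplained remainder is regressed on the suspected driver.  The variables, coefficients and data are all invented.

        # Synthetic illustration: remove a known first-principles effect before regressing.
        import numpy as np

        rng = np.random.default_rng(0)
        temperature = rng.uniform(20, 80, size=50)     # driver with a known effect
        pressure = rng.uniform(1, 5, size=50)          # suspected additional driver
        y = 0.8 * temperature + 2.0 * pressure + rng.normal(0, 1, size=50)

        KNOWN_TEMP_COEFF = 0.8                         # taken as known from first principles
        residual = y - KNOWN_TEMP_COEFF * temperature  # strip out what we already understand

        # Regress only the unexplained remainder on the suspected driver.
        slope, intercept = np.polyfit(pressure, residual, 1)
        print(f"estimated pressure effect: {slope:.2f} (value used to simulate: 2.0)")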
    –McD
     
     

    0
    #139205

    McD
    Participant

    I think you want to be a little careful in throwing around terms like ‘bug’ and ‘defect’ as they relate to software.  Software types have a very specific meaning for the term ‘defect’, and Six Sigma software shops count defects, most of which (hopefully) never reach the customer.
    In software development, a failure to meet requirements (which for some operations might be some internal requirement) is called an error.  An error which escapes the phase is called a defect, and a defect which escapes to the customer is called a released defect.  Managing these escape rates is an important aspect of software development, and a place where Six Sigma can be, and has been, applied with considerable success.  Avoiding the errors in the first place has tended to be somewhat less successful.
    Bug is an imprecise term which generally isn’t used by shops which are paying attention to quality.  It tends to be more in vogue among garage shops and kids playing in the basement.
    And I am absolutely at odds with your statement “I also disagree that the use of Six Sigma can offer much towards the ‘creation’of robust software development”
    Software development is a process, like any other process, and like any other process, it can be improved.  The way we improve processes is with Six Sigma.
    But it goes beyond that.  Software developers use DFSS as their process for developing robust software.  DFSS is quite similar to the traditional SDLC, but offers more robust tools at every step along the way.
    Certainly different tools are used in software development with a different frequency than in manufacturing, just as different tools tend to be used in any transactional process.
    For example a gage R&R is often impossible to perform for many transactional measurements. Get over it, move on.  On the other hand, tools like ABC/BOC or logistic regression tend to be used more frequently.  But the process is the same.
    I might even go so far as to argue that the opportunities are not nearly as rampant in ‘testing’ as they are in development.  The money lying around in development waiting to be picked up is absolutely amazing.
    –McD
     
     

    0
    #138819

    McD
    Participant

    I was going to send you off to the blue bar on the left, but a quick look turned up very little.
    A control plan is a plan you put in place to ensure that the savings generated by the project actually materialize.  In the course of a DMAIC project, you have made changes to the process.  The control plan attempts to keep those changes in place.
    The actual nature of the control plan can vary quite widely depending on the project.  In a manufacturing environment, you might implement control charts to track critical variables.  In a transactional project, there may well be systems in place that can identify variation.
    The key issue is that the control plan is used by the process owner to make sure that the process doesn’t fall back to its pre-project state.
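    For the control-chart option mentioned above, here is a minimal sketch of individuals (I-MR) chart limits on an invented tracked variable; 2.66 is the standard constant (3 divided by d2 for a moving range of two) for turning the average moving range into 3-sigma limits.

        # Hypothetical control-plan check: individuals chart limits for a tracked metric.
        cycle_times = [4.2, 3.9, 4.5, 4.1, 4.4, 3.8, 4.0, 4.6, 4.3, 4.1]   # invented data

        mean = sum(cycle_times) / len(cycle_times)
        moving_ranges = [abs(a - b) for a, b in zip(cycle_times[1:], cycle_times)]
        mr_bar = sum(moving_ranges) / len(moving_ranges)

        ucl = mean + 2.66 * mr_bar      # 2.66 = 3 / d2 with d2 = 1.128 for n = 2
        lcl = mean - 2.66 * mr_bar

        out_of_control = [x for x in cycle_times if x > ucl or x < lcl]
        print(f"center {mean:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
        print("points outside limits:", out_of_control or "none")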
    –McD
     

    0
    #138815

    McD
    Participant

    I wouldn’t be quite so hard on them.  Many organizations have Green Belts select projects.  Remember, GB projects tend to be somewhat smaller than Black Belt projects, and it might not make sense for the leadership to spend their time on smaller projects.
    It also may be viewed as a development opportunity for the Green Belt.  The GB might just ask around for how to identify a project, and someone might suggest that he look to his upstream and downstream customers.  A motivated GB might just then interview his customers, and in the process, develop a much better picture of the service his organization provides.
    I see this as especially useful in the case of a software tester. Testers are often a little isolated from the development organization as a whole, and it can be difficult for them to see the value they bring to the organization.
    And of course, having been the one who recognized the need, the Green Belt is going to be highly committed to the project.
    However, all this doesn’t absolve the Black Belt of the responsibility to properly mentor the GB in project selection, as well as execution.
    –McD
     

    0
    #138812

    McD
    Participant

    This would depend on your company’s particular accounting rules.  In general, I would presume the commission to be part of the sales, and so would need to be deducted from the savings.  However, your accounting system may already have dealt with that some other way.
    Different companies have different ways of dealing with things.  In companies where labor represents a tiny portion of the cost, labor is often ignored or counted as some fixed percentage, because actually tracking the time can be a significant cost.  In some products, though, labor is practically the entire cost.  Counting raw materials might not be worthwhile in those cases.
    So talk to your accountants.  It is going to depend on your company’s particular situation.
    –McD
     

    0
    #138811

    McD
    Participant

    My weasel words were carefully chosen.
    All other things being equal, generally one would prefer to deploy Black Belts on hard savings projects.  Thousands of hours have been wasted on projects that sounded good at the time.  SS is all about measuring, and if you can’t measure the results, then why do it.
    That being said, SS also needs to be aligned with the business strategy.  There may be initiatives that need to be undertaken to execute the business strategy whose savings are hard to pin down.  If they need to be done, and SS is the right tool, then it is silly to stand by some rule.  Do what makes sense.
    In some cases there may be few hard savings projects available, or soft savings projects need to be executed in order to facilitate later hard savings projects.  Of course do them.
    At the end of the day, you should do what makes sense.  But without some other compelling driver, you want to spend your valuable resources where they will do the most good, and bottom-line impact is usually better than “feel good”.
    –McD

    0
    #138651

    McD
    Participant

    Khalil:
    “customer surveys didn’t reveal any major ‘points of pain’”
    “losing market share to another competitor”
    OK, this is a horse of a different color.  And if, as Mike suggests, much of this is window dressing, you are in for a tough road.  As a Green Belt in a company without a strong SS culture, you are in for an uphill battle already.  Hopefully your management is open minded and interested in improving the institution.
    A simple survey to your customers isn’t going to do it for you.  Someone out there is doing a better job than you.  At best, you would like to gain back some of that market share.  As a minimum, you want to improve inefficient processes without accelerating the loss of market share.
    What you need to do is to get at the shape of your Kano curves.  The common way to do this is with a conjoint analysis.  That sounds pretty simple, but in practice, especially with customer-facing processes, it is a big, expensive deal.  If you want meaningful results you will probably need to use some sort of market research firm, and that stuff can get expensive.
    Once you know what the Kano curves look like, it will be pretty evident what to go after.
    But unless you are content with continuing to lose customers, I can’t see a lot of alternatives.  If you don’t know what they care about, then even seemingly harmless improvements could hurt badly.  And obviously they care about something, because they are going somewhere else to get it.
    –McD
     

    0
    #138612

    McD
    Participant

    Mike
    I wasn’t trying to sound offended by your input.  More data is always a good thing.
    I think your comment about “What you learned is still in your head” is right on target.  R just needs a little help remembering.  Alternative sources can’t help but be better.
    –McD
     

    0
    #138606

    McD
    Participant

    I think the difference was the makeup of the students.
    I suspect that might well be a common problem.  I do a Black Belt week 4 for selected engineers from a large company.  The last session I did was wonderful — I had a group of smart, motivated, engaged belts.
    The group before them was not well targeted for the course, and not especially motivated.  It was all I could do to make it bearable.
    If a single company, where you would expect some consistency in culture (especially since this was a company with a well-entrenched Six Sigma culture), can be this variable, it would seem to be even more difficult if you are looking at an outside provider where your MBB candidates will make up only a small part of the group.
    –McD

    0
    #138605

    McD
    Participant

    Mike
    I imagine there are a hundred sources to tickle the old gray cells.  The Memory Jogger was the first one that came to mind.  And it seemed to fit my mental picture of R, which is likely not terribly accurate since it is based on an extremely limited sample.
    –McD

    0
    #138567

    McD
    Participant

    “For the customer facing processes, identify first what customer needs”
    Sheesh.  I can’t believe romel is the first one in this thread to mention the customer.
    The customer stuff is a lot harder to get a handle on than the costs, but it is key to the bank’s success.  Certainly look at costs, and look at processes that have variation, but don’t forget what your customer needs.
    –McD
     

    0
    #138566

    McD
    Participant

    “We all realize that soft savings really does provide a benefit, but the Executive Team does not want us to include it as ‘real’ savings.”
    This isn’t uncommon, and it might not be such a bad stance, depending on the company, its history, and the availability of hard savings to go after.
    A lot of companies have a long history of “improvement” projects that sounded good at the time, but may have produced questionable results.  While soft savings may, in fact, be real, they are soft basically because you can’t measure them. And if you can’t measure them, then perhaps it isn’t such a bad plan to avoid spending Black Belts on them.
    If a company has opportunities for hard savings that have a measurable effect on the bottom line, then surely that is a better place to spend the company’s valuable resources.
    –McD

    0
    #138565

    McD
    Participant

    “you have a $60000 hard savings”
    Actually, it isn’t hard until it comes off the books, so it stays soft until the payroll goes down that amount.  Simply redeploying the resource doesn’t result in hard savings.
    –McD
     

    0
    #138541

    McD
    Participant

    R
    Presumably your training material included a roadmap or similar overall view.  Review that with a particular eye toward remembering what the objectives were of each phase.
    Don’t fixate on the tools, there are hundreds of them and any project will only use a few.  There is a series of little “Memory Jogger” books out there, I can’t recall who publishes them, but they are handy to help remember what tools you learned about.  Actually using the tools will probably require going back to the books, but the memory jogger is a compact way to recall what they were.
    Unfortunately, many programs really focus on the trees and spend precious little time on the forest.  Remind yourself about the overall process and focus on that. Once you have the right focus, application of a little common sense will lead you to the correct details.
    –McD
     

    0
    #138481

    McD
    Participant

    I’d go easy on the kid.  He studied at UofM, worked in the auto industry, probably still in Michigan.
    In recent years we in Michigan have had a governor who has been doing a stellar job of protecting us from the recent recovery afflicting much of the nation.
    He has spent his entire working life in a place where the only people who haven’t left are either retired, or are tired old auto workers who think the UAW will save them from the evils of the economy.
    Since his sample apparently only included the badly-broken U.S. auto industry, I wouldn’t be surprised if his observations weren’t far from the mark.
    Admittedly, I have only had a couple of brushes with the industry, one ancient, the other recent.  In both cases I came away astonished at just how dysfunctional the industry seems to be.
    –McD
     

    0
    #64325

    McD
    Participant

    Boy, I’m sort of on-board with Bob here.
    If you already suspect that the software isn’t working well for the business, then the use cases might validate that, but they won’t give you any insights.
    However, if you do the analysis of the current business process, that will give you the background you need to do a conceptual data model for the business.  Then you can compare that to the data model for the software, and see whether it makes sense.
    If the model underlying the software can meet the basic data needs of the business, then either software modification/configuration or business process improvements might be feasible.
    However, if the data needs of the business are dramatically different from what the software provides, then getting the software to play nice with the business is going to be pretty much a start-from-scratch exercise.
    But as Bob61 says, it has to start with the business process model.  In some organizations, the BB’s are the only people who know how to do this.  In other organizations, IT has the expertise.
    –McD
     

    0
    #64298

    McD
    Participant

    TRIZ out of the box, of course, is pretty hard to apply to software.  The basic “inventive principles” have some applicability, if you open your mind wide enough.  Kevin Rea has done some work on recasting Altshuller’s work in software terms.
    I do teach a segment on TRIZ in a software SS course, and frankly, I find the work done so far a little less than satisfying.  I think there is some potential, but it has yet to be articulated in a way that makes it as powerful as it might be.
    Personally, I’m inclined to take the Gang of Four work as a very much more applicable example of the general thought process.
    –McD
     

    0
    #135528

    McD
    Participant

    In contrast, I am aware of a company that uses DFSS as the “chain saw” to make radical modifications to their business processes to eliminate just this sort of thing.
    If the VP’s aren’t brain dead, they recognize these situations, and if they are bought in to SS, they tend to be just the folks to instigate a project to fix it.
    These tend to be pretty weighty teams.  Generally the project involves several MBB’s, and very senior folks are available as SME’s.  I’ve seen some with very few BB’s, and some with a small army.
    But the thing that makes it work is a mandate from the top management to make it work, and the top brass need to have a lot of confidence in their MBB’s.
    The veeps also need to have enough confidence to know that if the chain saw should whack their department, they will still land on their feet.
    –McD

    0
    #135310

    McD
    Participant

    Allen
    You might consider looking more toward consulting in a business/transactional context.  My experience might be a little slanted, since I deal primarily with technical people, usually engineers.  But I find that Black Belt candidates respond best to anecdotes based on experience, so without a lot of projects under your belt, it can be hard to be a credible trainer.
    That being said, many programs (IMO) are a little weak on the change management aspects of SS.  I think there is an opportunity there to ratchet up the capability of Black Belts.
    Your PhD in statistics could be a burden.  Black Belts use statistics, a lot.  But the operative word here is use.  They really don’t get very deeply into the theory, no more than you get into the theory of those detonations happening hundreds of times a second on your morning drive to work. On the other hand, Black Belts do often need an expert to consult when the going gets a little tough for them. But for the routine training, BBs need to learn how to use a handful of tools, and who to ask when they need more.
    –McD

    0
    #64275

    McD
    Participant

    Six Sigma is a process improvement methodology, and doesn’t concern itself with domain specifics, such as how one might define various metrics for demand in a call center context.  What you discuss is pretty specific to a very narrow context.  Moreover, with different business objectives, you may find some specific academic definition lacking.  The Black Belt will use a metric that makes sense for the particular problem.
    In some corners, Seddon is considered somewhat controversial, so I wouldn’t consider a case study that might choose to use some other definition as “flawed”.
    One also needs to be a little careful when reading any “case study” about Six Sigma. Most companies consider their Six Sigma activities a competitive advantage, and are generally unwilling to share them.  As a result, the case studies you find are often written by consultants combining the results of several projects into a generic “case” so as to protect client trade secrets.  Generally these are written to highlight the Six Sigma aspects of the project, and are likely to water down the domain specifics somewhat.
    –McD
     

    0
    #135250

    McD
    Participant

    1 – Probably this isn’t the place to get your homework done for you
    2 – This probably belongs on the software forum
    3 – If you are a practitioner, rather than a student, and you don’t know the answer to that, you have much bigger problems to deal with.
    –McD
     

    0
    #135249

    McD
    Participant

    TQM is an old, failed quality program.
    Six Sigma is a process improvement methodology.
    They have nothing to do with each other.
    Well, unless you are in one of those rare companies where TQM was successful.  Then Six Sigma is an obvious next step.
    –McD

    0
    #64274

    McD
    Participant

    A software process is just like any other process.  First you build a process map.  Then you identify performance metrics, then the possible X’s.  Then you build your model.
    Pick a problem process.  Agree on an improvement target.  Then go run DMAIC and the model will be part of the project.
    At a much higher level, there are models in the literature for various aspects of software development.  Most of these are not terribly useful until they are calibrated to your organization, but in many cases they can serve as a sanity check on what you find, or as insight into what your X’s might be.
    The problem in software development is more often getting the data.  Software companies are notorious for running by the seat of their pants, and software developers absolutely hate to be measured.  But if you want to develop useful models, then you will need the data.
    If you are a level 4 outfit, you likely will have much of the data and your folks will understand the need for the data.  But if you are level 1 hoping to get to level 4, you have a long road ahead, and getting data will be a difficult part of the journey.
    –McD
     

    0
    #135166

    McD
    Participant

    Ozgul
    I don’t have anything, and I doubt you will find anything.  Carnegie has all sorts of comparisons between ESCM and other assessment frameworks for other disciplines. But SS isn’t in the same category, so it’s kind of hard to compare.  At least you can sort of compare apples to oranges, but it gets tough to compare apples to jeeps.
    –McD

    0
    #135165

    McD
    Participant

    Gus
    Never did see your email.  I wonder if you sent it from some domain that is likely to have been blocked.
    –McD
     

    0
    #135120

    McD
    Participant

    Pretty different animals.
    eSCM-SP is a capability model, much like CMMi.  It describes characteristics you would like in a service provider.  It is all about the “what”.
    Six Sigma is a process/product improvement methodology.  It speaks to the “how”, rather than the what.
    An outsourcing provider might use SS to improve its eSCM-SP ranking.  Similarly, a Black Belt in a service provider might use eSCM-SP to gather insight into process improvement opportunities.
    They are quite different, and not at all competing.  Perhaps at right angles might be a better description.
    –McD
     

    0
    #135104

    McD
    Participant

    Motorola uses DSS (I think it stands for Digital Six Sigma) to describe their Black Belt program for software types.
    –McD
     

    0
    #135096

    McD
    Participant

    Multi-Generation Product/Project/Process Plan
    –McD
     

    0
    #135094

    McD
    Participant

    Gus
    Interesting timing.  I recently recognized that there is precious little information online about the MGPP and that has prompted me to consider writing a paper on the topic.  Haven’t done it yet — seems like there is plenty to do, but it is needed.
    I don’t have some killer template, but I have a couple of simple, completed MGPP’s that I would be happy to share.  Nothing really earth-shattering, but maybe it could get you off dead center.
    I thought I usually did these in Excel, but the shareable ones I find are Powerpoint.  If this would help, email me at mcd at is-sixsigma dot com and I’ll send them along.
    –McD
     

    0
    #64272

    McD
    Participant

    I think this comes up once a week.
    The software development process is a process, and like any other process, it can stand to be improved.  And like any other process, the way to improve it is with DMAIC.
    But software development in particular is much closer to Six Sigma than many other activities.  The software development lifecycle mimics DFSS.  However, DFSS has many tools that are much more powerful than we have been accustomed to in software development.
    By integrating DFSS into our development process we have much better tools for eliciting and characterizing requirements.  Those well-characterized requirements can lead to business value models which allow us to make objective feature selections, and can be further applied down in design where business value can drive design decisions.
    Better yet, as we develop an understanding of our process capability, we can model our defect insertion and removal, and make decisions about just how many defects we are going to release.
    As we discover what we can, and can’t, do, and understand our own process capability intimately, we can turn to DMAIC to ratchet up that process capability.
    So the points of application in development are limitless, but the precondition is that we are willing to turn our back on business as usual.
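    As a hedged illustration of the defect insertion and removal modeling mentioned above, here is a toy phase model: each phase injects some defects and removes a fraction of whatever is present, and what survives the final phase is released.  Every number below is invented; a real model would be calibrated to your own process data.

        # Toy defect insertion/removal model; what survives the last phase is "released".
        phases = [
            # (phase, defects injected, fraction of present defects removed in phase)
            ("requirements",  40, 0.50),
            ("design",        60, 0.55),
            ("code",         150, 0.65),
            ("test",          10, 0.85),
        ]

        present = 0.0
        for name, injected, removal_effectiveness in phases:
            present += injected
            removed = present * removal_effectiveness
            present -= removed
            print(f"{name:>12}: removed {removed:5.1f}, escaping the phase {present:5.1f}")

        print(f"estimated released defects: {present:.1f}")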
    –McD
     

    0
    #135045

    McD
    Participant

    It’s worse than destructive testing.  In most transactional processes, we can’t test identical samples.  Often our measurements involve activities performed by people, so if we tried to give them the same sample, they would recognize it and this would impact their response.  And typically, every individual case is significantly different.
    With transactional processes we often need to be satisfied with studying our measurement system and making subjective judgements about whether we can trust it, much as Ron described.  This is far from ideal, and when we can do an honest MSA we should.  But if we can’t, there is no sense beating ourselves up over it.
    Purists will insist you have to do a full blown gage R&R, but in many cases not only is this impossible, but it will do more harm than good, because it will distract us from carefully analyzing our measurement system.  That analysis might actually turn out to be useful, while in many transactional cases, the gage R&R will be nothing more than smoke.
    –McD
     

    0
    #64269

    McD
    Participant

    An off the shelf score card is a bad idea.  Your product is most likely not identical to somebody else’s.  People have reasons for buying your product, or for buying a competitor’s.
    When a product is designed with DFSS, the needs of the customer and the business are well documented and understood.  The effect of these needs is measured, and a model built from which the scorecard can be derived.  The model, and hence the scorecard, will be unique to each product.
    The business side tends to be easy because you are looking for bottom line costs or top line business growth.  There may be specifics, such as positioning your company in a particular market, but basically they all boil down to the same thing.
    The customer side is trickier.  Although it is easy to say “customer satisfaction”, measuring customer sat is tough, and tying it to measurable product attributes is even tougher.  This takes effort, and will obviously be very different for different products.
    So the short answer is — “there is no free lunch”
    –McD
     

    0
    #64268

    McD
    Participant

    A lot of work has been done in this area.  The main drivers are size, the maturity of the organization, schedule compression, and when defect removal is initiated.
    Look at Putnam to understand schedule compression and organizational maturity.  Capers Jones has some good data on the effectiveness of defect removal techniques.
    You also might want to understand the difference between reviews and inspections.  Reviews are not especially effective, and typically don’t have a measurable effect.
    Identifying some objective measure of “severity” of a defect is hard.  Probably the best metric is removal cost, and it is well documented that removal cost depends, more than anything else, on how long it took to discover the defect.
    Obviously, most of these parameters are controllable.  The size tends to be dictated by the project, but the schedule compression is basically a management decision. This is a very large driver, and unfortunately, most managers don’t realize that a decision to compress the schedule is a decision to increase the cost and increase the released defects, which has the added effect of increasing warranty costs.
    Process changes are obviously needed to move defect discovery earlier in the process, and to improve the organization’s maturity, but these are also controllable, although not as immediate as schedule compression. Effective inspections require thought and training, so although they are a very good way to improve containment, it is by no means an overnight thing.
    –McD
     

    0
    #134494

    McD
    Participant

    Although I might not express it so colorfully, I tend to agree with Darth on this one.  But it almost sounds as if the problem goes deeper than that. MBBs need a deep understanding of the methodology and tools so they can guide the BBs.  There seems to be a trend for some organizations to anoint managers as MBBs with no preparation.  This leaves the BBs without any real support, and is a symptom of a pretty severely broken deployment.
    If you can get away with firing the bums, as Darth suggests, then by all means.  But I suspect you may have a more difficult problem.
    –McD
     

    0
    #64249

    McD
    Participant

    Nancy
    What makes a good project is going to depend on your organization and its pain points.  Even the approach will depend on the sorts of applications you do.
    If, for example, you test products intended to run with a variety of configurations, you might consider using DOE ideas to reduce the number of test cases (see the sketch at the end of this post).
    If warranty costs are an issue, you might paw through the help desk logs looking for areas where testing could be strengthened.
    Perhaps your development process could benefit from moving the design of the test cases earlier in the process.
    What makes sense depends entirely on your organization.  Look at your organization, and pay attention to the problems, then pick one to fix.
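    As promised above, here is a rough sketch of the DOE-for-testing idea in Python.  It uses a greedy pairwise (all-pairs) selection; the factors and levels are hypothetical, and a real project would likely use a proper covering-array tool:

```python
from itertools import combinations, product

# Hypothetical configuration factors for a product under test.
factors = {
    "os": ["Windows", "Linux", "macOS"],
    "browser": ["Chrome", "Firefox"],
    "db": ["Postgres", "MySQL", "Oracle"],
}
names = list(factors)

# Exhaustive testing: every combination of every level.
full = list(product(*factors.values()))
print(len(full))   # 3 * 2 * 3 = 18 candidate test cases

# Every pair of factor levels a single test case covers.
def pairs(case):
    return {((names[i], case[i]), (names[j], case[j]))
            for i, j in combinations(range(len(names)), 2)}

# Greedy all-pairs selection: repeatedly pick the candidate case that
# covers the most not-yet-covered pairs.
uncovered = set().union(*(pairs(c) for c in full))
suite = []
while uncovered:
    best = max(full, key=lambda c: len(pairs(c) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(len(suite))  # typically far fewer than 18, yet every pair is exercised
```
    The point is simply that covering every pair of factor levels usually takes far fewer cases than covering every combination.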
    –McD
     

    0
    #64234

    McD
    Participant

    In Six Sigma usage, a defect is what the project defines it to be.  It doesn’t have to be at all related to a bug.
    In software test usage, a mistake is a problem that could compromise the product but is found and corrected by the person who made it.  If someone else in the same phase finds it, it is an error.  If it escapes the phase, it is a defect.
    –McD
     

    0
    #64220

    McD
    Participant

    It’s not entirely unreasonable to use Excel to do AHP, although I can see where it could be messy in a facilitated session.
    Expert Choice is an AHP tool that looks as if it might work for your situation, but I’ve never actually used it, so it could be terrible.
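    For what it’s worth, the core AHP arithmetic is small enough to sketch in a few lines of Python.  The comparison matrix below is hypothetical, and the column-normalization step is the common shortcut approximation to the eigenvector method:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for three candidate features
# (Saaty 1-9 scale): entry [i, j] = how much more important i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Approximate the priority weights by normalizing each column and then
# averaging across rows, a common shortcut for the principal eigenvector.
weights = (A / A.sum(axis=0)).mean(axis=1)
print(weights.round(3))          # relative priorities, summing to 1

# Consistency check: lambda_max from A @ w, then consistency index and ratio.
lambda_max = (A @ weights / weights).mean()
n = A.shape[0]
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58                   # 0.58 = Saaty random index for n = 3
print(round(cr, 3))              # CR < 0.10 is usually considered acceptable
```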
    –McD
     

    0
    #64208

    McD
    Participant

    Jeff
    Niku had a nice package like that. They have since been bought up by CA.  I think the package is now called Clarity or something like that.  I seem to recall they had some nice templates for Six Sigma program tracking.
    Their project management, though, is a little different than MS Project.  It is better suited to large, relatively mature (in the CMM sense) shops.  It isn’t especially appealing to the lone ranger wanting to scope out a project and then ignore the plan (which seems to be the practice in our industry).
    I think Microsoft has something similar on their server-based version of Project (sorry, I don’t know what the name of the week is).
    Be advised, though, that implementing any sort of package like that is a pretty large effort if it is going to do you any good.  You will likely need to tailor your processes or the package or both.
    Any of these products is going to be expensive, but the implementation cost is going to make the software look free.  Be prepared to swallow hard.  On the other hand, the bennies can be huge.
    –McD
     

    0
    #133434

    McD
    Participant

    In many ways, the MGPP is a wonderful thing.  To begin with, though, the MGPP is often around some grand plan.  Gen 1 is simply the first step to some greater goal.
    The generations should be meaningful, and your first project should be around gen 1.  During the course of the project, plans for gen 2, gen 3, etc. will change.  That is one of the beauties of the MGPP.  Besides helping to set a framework for the gen 1 project, it also provides a sort of escape valve.  As gen 1 progresses, you will find some things you would like to do but really can’t without seriously compromising the current project.  The other gens give you a roadmap to decide when is the right time to do that particular feature.
    Conversely, project teams come up with all sorts of great ideas that might not be in scope.  Looking to future generations helps decide whether a feature makes sense at all, and perhaps whether a technology originally slated for a future gen should be pulled forward to the current gen.
    You can get the most mileage out of DFSS when it is used to change the world.  Changing the world isn’t generally all that easy, and the MGPP provides a way to break it down into bite size pieces.
    I assume that your MGPP training included the obligatory reference to the moon shot.  Back in 1960, it was almost inconceivable that a man could be placed on the moon and safely returned.  So the project was broken into 3 generations: 1) Put a man in space (Mercury), 2) Put two men in orbit (Gemini) and 3) Put a man on the moon (Apollo).  Now, the technologies needed for each generation could reasonably be identified, and the projects launched to develop those technologies.
    Your MGPP should be similar.  Break the grand goal into a small number of reasonable but meaningful steps, understand the capabilities needed for each step, and then simply turn the crank.
    Bear in mind that the MGPP might dictate a significant number of daughter projects.  If the needed capability is entirely new, you spawn a DFSS project to develop the capability.  If you need to be a lot better at something you know how to do, you use a DMAIC, and if you simply need to deploy something you already can do, you just do it.
    –McD
     

    0
    #133432

    McD
    Participant

    Can you think of some alternate product which can be produced in  the existing facility which could be developed by some of the existing resources in your organization?
    While this may sound good, it is unlikely to be a solution.  When management is on a cost-cutting binge, it is generally a stop-the-bleeding sort of exercise.  A new product is a long-term, extremely expensive exercise, and not likely to help the immediate problem.  Perhaps if there were another product the company already made, and that product was sold out, then this might make sense.
    Sometimes maintenance is an opportunity area, but one needs to be careful that a short term fix doesn’t turn into a long term problem.
    In a chemical plant, the opportunities will be in raw materials, energy, or capital.  If OP really can save money with manpower reduction then he is running a plant from another century.  Modern chemical plants pretty much run themselves and typically have only the tiniest crew.  The problem is more often finding something for the operators to do, because you don’t want to leave the plant unattended.
    Capital is a big deal in the chemical industry, but not likely to be a short term fix.  That pretty much leaves raw materials and energy, which means OP should look to yields.
    In my experience, almost every plant believes “my plant is different” and for some reason they believe that there is some sort of alchemy.  It seems universal that people believe their plant doesn’t follow the rules of chemistry.  And in my experience (and I’ve spent a lot of years in the industry, and worked in a lot of different processes), the plant always adheres to the same chemical engineering principles you learned in college.
    So dust off that old unit ops book, and see where you can make a yield improvement.
    –McD
     

    0
    #133310

    McD
    Participant

    I’m a little with ramblinwreck on this one: at least start out by having a long heart-to-heart with the accountants.  In a continuous chemical process, labor should be almost invisible, unless this is a 1940s-style plant.  Find out where the money is, then go after that.
    If, in fact, labor is a big contribution, then unless you are making something that costs a penny a pound, it should be a piece of cake to take out a lot of cost.
    –McD
     

    0
    #64172

    McD
    Participant

    Willis
    Testing is often a pretty expensive way to improve TCE.  If you have a complex product, getting good coverage can be a problem (and expensive).
    You might consider working on your PCEs with tools like Fagan Inspections, which often prove to be more cost-effective.
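    For anyone unfamiliar with the acronyms: TCE is total containment effectiveness and PCE is phase containment effectiveness.  A minimal sketch of the arithmetic as it is commonly defined, with made-up counts:

```python
# Hypothetical counts for one development phase.
found_in_phase = 45   # problems caught by the phase's own reviews/inspections
escaped_phase  = 15   # problems from this phase found in later phases or by customers

# Phase containment effectiveness: fraction of this phase's problems
# caught before they escaped the phase.
pce = found_in_phase / (found_in_phase + escaped_phase)
print(f"PCE = {pce:.0%}")

# Total containment effectiveness: fraction of all problems caught
# before the customer ever saw them.
found_internally = 580
found_by_customer = 20
tce = found_internally / (found_internally + found_by_customer)
print(f"TCE = {tce:.0%}")
```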
    –McD
     

    0
    #131462

    McD
    Participant

    I think I would spin it a little differently.
    If I sample a population, and calculate a mean, the result is not necessarily the mean of the population, but rather, an estimate of the mean.  The 90% confidence interval is the range within which the mean lies with 90% confidence. You narrow the confidence interval by taking more samples.
    By probability, they are most likely looking for process capability.  That is, you will run your process such that 95% of the product is within spec.  This really constrains the setpoint (mean) you choose, depending on the dispersion of the process.  If you can produce product smack on setpoint, then you can move the process mean closer to the specification, assuming that is a more economical thing to do.  But if you cannot control the process that tightly, then you must adjust the process mean farther from the specification to ensure that 95% of the product is within specification.
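    As an illustration of the confidence interval point, here is a minimal sketch in Python with simulated data:

```python
import numpy as np
from scipy import stats

# Simulated sample of 25 measurements from some process
rng = np.random.default_rng(3)
sample = rng.normal(loc=50.0, scale=2.0, size=25)

mean = sample.mean()
sem = stats.sem(sample)                      # standard error of the mean

# 90% confidence interval for the population mean (t distribution)
lo, hi = stats.t.interval(0.90, df=len(sample) - 1, loc=mean, scale=sem)
print(round(mean, 2), (round(lo, 2), round(hi, 2)))

# Taking more samples shrinks the standard error, and the interval narrows.
```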
    Hope that helps a little
    –McD

    0
    #131122

    McD
    Participant

    The MGPP is your friend.  Use the daylights out of it in a situation like this.
    I would try to do as thorough a VOC as possible.  Be sure you completely understand the lay of the land.  I would probably do an appropriate number of KJs to be sure you understand what it is like where the stakeholders live.  There will likely be plenty of unspoken requirements.  You need to understand these.
    Then analyze the VOC to death.  Be sure you understand where the greatest value lies, and set your scope to something manageable, but still with plenty of bang for the buck.  If possible, engage the stakeholders in the scope-setting exercise.  Use tools like AHP that are hard for them to game, but be sure that they are the ones figuring out where the business value lies.
    Now, revisit your MGPP and be sure capabilities that are out of scope but are still important to some stakeholder group show up in some future generation.  Depending on what you have to do and what resources you have, it may be helpful to begin planning, or at least budgeting, for later generations even before gen 1 is done.
    You will have more luck with buy-in to the extent that stakeholders see motion on their pet features, whether that motion is seeing something happen in the gen they want, or seeing necessary precursor capabilities in earlier generations.
    Keep the stakeholders part of the decision process, but be sure to manage it so it doesn’t become a free for all.  People will understand that some higher value feature comes first, especially if they were part of figuring out that value.
    –McD
     

    0
    #131114

    McD
    Participant

    Tina
    I might expand on Super-Intelligent’s reply.
    Six Sigma is a process improvement methodology that uses a wide range of tools.  Many of the tools used are statistical, many are not.  Six Sigma practitioners use whatever tools are suitable for the job.
    One of the phases of the DMAIC process is called ‘Control’.  In the Control phase, especially in manufacturing, SPC is frequently used as one of the strategies to ensure that whatever improvement the project deployed stays in place.
    There is no rule that says that SPC cannot be used elsewhere in a project, and there is no rule that says SPC has to be used in Control.  But it frequently turns out to be a good tool in that phase.
    –McD
     

    0
    #130922

    McD
    Participant

    Look at the software channel on this website, and on the Six Sigma Advantage website for I/T examples.
    Although I can’t cite chapter and verse, I think “procurement” projects have tended to be more successful when they consider themselves in the broader, “supply chain” context.
    I can tell you from experience that R&D is a bear.  There are plenty of opportunities, but most of the work that needs to be done (at least from what I’ve seen) tends to be cultural, and that is a hard row to hoe.
    Manufacturing is obvious, I think you put that there by mistake.
    I have no experience with SS in HR and training, but I see no reason they shouldn’t be like any other transactional process.
    –McD
     

    0
    #130921

    McD
    Participant

    There is a data quality consultant, Larry English, who has some very good data on the cost of poor data quality.  I don’t recall the name of his firm, but he used to have some good info on his web site.  Once you understand where data quality is costing you, then project selection should be a little easier.
    –McD
     

    0
    #64136

    McD
    Participant

    QFD can be used at a number of places in the process.  The use you describe is typical of the first house in a DFSS project.
    QFD at that stage can be a little risky in a software project.  For some projects, it can be amazingly valuable.  For others, you can consume huge quantities of time to no benefit.  It takes experience to know the difference.
    I have seen the ‘measures’ in the QFD actually be the requirements.  But in a DFSS project, those measures more often are input to the concept selection phase.  It is only after the concept selection, and sometimes another house or two, that you are in a position to write a formal SRS.
    –McD
     

    0
    #64130

    McD
    Participant

    Niki
    There are a number of articles right here on this site, as well as quite a few forum posts.  Try searching at the top of the screen.
    Also, I noticed the other day there are a few good articles about CMMi and SS on the Six Sigma Advantage web site.
    –McD
     

    0
    #130303

    McD
    Participant

    I think the Six Sigma view of the world is becoming a little more open.  Six Sigma is a very powerful approach to both process improvement and product/process development.  But just because Six Sigma is good doesn’t mean that practitioners of other approaches have no good ideas.  In recent years, Six Sigma practitioners have become more willing to steal good ideas from other disciplines.
    One very common place to go looking is Lean.  Often, Lean sorts of ideas emerge from DMAIC projects anyway.  But by looking at the Lean BoK a Black Belt can often get ideas as to places to look for improvement opportunities, and bypass a certain amount of discovery that he might well have done anyway, but perhaps not as quickly.
    In the software business, there is a framework called CMMI which outlines a number of practices that have been proven to be successful.  When a Black Belt goes looking to improve a software process, the CMMI is a handy place to go looking for ideas.
    Many Six Sigma programs were follow-ons to TQM programs.  In general, TQM wasn’t very successful, although at some companies it was.  And many TQM ideas have been incorporated into SS.  But sometimes some TQM idea becomes part of a solution in a SS project.
    This really isn’t somehow “different” or an “extension” to SS.  Normally a Black Belt would look to some domain specific discipline for improvement ideas.  In, say, an automotive manufacturing environment a Black Belt would likely look to mechanical engineering for sources of improvement. As often as not, the improvement lies in the basic physics of the problem.  Just because the BB is applying common engineering sense doesn’t somehow make it different from SS.
    So why not look to ideas like Lean, TQM, etc. for opportunities.  These are the areas you hear about most frequently, but there is no law that says a Black Belt has to limit his  view of the world to those things he learned in class.  A well rounded BB will study his application domain, and any other areas that might be helpful.
    We’ve seen threads recently where people were getting hung up in the accounting for “Lean” versus “Six Sigma” savings.  Well, that is a crock. SS is all about “show me the money”, and while DMAIC and DFSS are very powerful methodologies, they only describe the process that one uses to get to a result.  The SS BoK incorporates a large number of tools, but they are by no means all the tools you are “allowed” to use as a BB.  It goes without saying that there are a lot more domain specific, and even more general tools, that one needs to use to achieve a result.
    What is happening lately is that more SS practitioners are not excluding some tools just because they happen to “belong” to Lean, or TQM, or …. or … or
    –McD
     

    0
    #130302

    McD
    Participant

    If your calculation is correct, and you should check it, then what it is saying is that the process mean is outside the spec limits.  But if it is, you don’t need Cp or Cpk or even a control chart to tell you that.  You have a serious problem that needs to be addressed.  And it is an obvious problem!
    If your process mean is within the spec limits, and that is pretty easy to test, and your Cpk is negative, then your calculation for Cpk is wrong.  It is fairly easy in Excel to get the sign wrong!
    So look to your process mean.  If it is outside the specification limits, you have an obvious problem and you should forget about Cpk and control charts and all that nonsense until you at least get the process onto the right planet.
    When quality guys talk about a process being “in control”, they are usually referring to the Shewhart tests for a control chart.  But the assumption here is that generally, the process mean is somewhere near where it needs to be.   If your specification is 1 to 3, and you are producing product at 10, you don’t need a control chart to tell you that there is a problem.
    However, if your specs are 1 to 3, and your process is running around 2.5, then your Cpk can’t be negative.  A negative Cpk indicates a problem in your spreadsheet.
    Remember that for Cpk, you need the distance to the nearest spec limit.  So if X is the process mean, and LL and UL are the lower and upper spec limits, you calculate X-LL and UL-X and take the smaller of the two.  If X is between LL and UL, this result will always be positive.  Pretty easy in Excel to get one or both of those swapped.
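    A minimal sketch of that arithmetic, with made-up data, for anyone who wants to sanity-check a spreadsheet:

```python
import numpy as np

LL, UL = 1.0, 3.0                      # lower / upper specification limits
data = np.array([2.4, 2.6, 2.5, 2.7, 2.3, 2.5, 2.6, 2.4])

mean = data.mean()
s = data.std(ddof=1)                   # sample standard deviation

cp  = (UL - LL) / (6 * s)              # ignores where the process is centered
cpk = min(UL - mean, mean - LL) / (3 * s)   # uses the distance to the nearest limit

print(round(cp, 2), round(cpk, 2))
# Cpk can only go negative if the mean falls outside [LL, UL];
# with the mean inside the limits, both differences above are positive.
```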
    –McD
     

    0
    #130294

    McD
    Participant

    Cp is simply the distance between the spec limits (which must be positive) divided by 6 times the standard deviation (which must be positive).  Cp can therefore never be negative.
    Generally, Cpk is more useful.  Cp assumes that your process is centered between the spec limits, which is not something you can generally guarantee.  You can have a very good Cp and be producing all your product out of spec.  Cpk is the distance from the process mean to the closest spec limit divided by 3 times the standard deviation.
    I guess if your process mean were out of spec, then you could come up with a negative Cpk.  This would be such a serious problem, though, that you should be fighting fires rather than worrying about your process capability.  What it says is that most of what you are producing is out of specification.  You would already know this, however, without calculating Cpk.
    If, for example, your specs were 1 to 3, and your process mean was 4, then you would have a negative Cpk.  But since you know your process mean is outside your spec range, you would already know you have a problem (unless you happened to be asleep).
    But as long as your process mean is between your specification limits, then Cpk must be positive.
    –McD
     

    0
    #64126

    McD
    Participant

    Srivats,
    The DMADV VOC process contains some valuable insight into requirements gathering and analysis.  In my opinion, the DFSS approach can be much more robust than the typical process used in most software shops.  However, to provide insight for your testers, you do need to flow the high level requirements you tend to get from your initial VOCs into the more detailed requirements that will ultimately flow down to design elements.
    While requirements gathering is often seen as a problem, don’t overlook the requirements management aspects of the problem.  Given that you have achieved L3, you clearly understand RM, but don’t overlook the opportunities in strengthening your RM process.
    Oftentimes what software shops lack is the ability to see hidden or latent requirements.  Customers often have things about their environment that are so obvious to them that they don’t bear mentioning, but to an analyst collecting requirements these things can be invisible.  Pay special attention to tools that help you get inside your customer’s head and understand what his life is like.  Focus on the work process the proposed application is supposed to facilitate.  Look for where the customer can find value in improving that work process.  This will lead to much more robust requirements, and this is the sort of thing you find in DFSS.
    –McD
     

    0
    #64125

    McD
    Participant

    A DMADV might be a little heavy for a green belt project, though.
    I would suggest you review the KPA’s and talk with your CMMI consultant about where you have strengths and weaknesses.  Perhaps on your L2 assessment you had the assessor give you a read on where you stood on the L3 KPAs.
    Now I would look for a KPA where you have a reasonable process that doesn’t quite cut it, but needs some improvement.  That might be a good place to look for a green belt project.
    In places where you need a whole new process, then the DMADV approach Cibele suggests would be preferred.  But that approach usually takes more than a green belt.
    If you got to level 2 without huge turmoil, I wouldn’t look at changing out your organization’s entire way of doing business.  Look at the KPAs one at a time and think through how you need to change.  Don’t overlook possible synergies between the KPAs and your responses, but don’t turn the problem into one that is so large you can’t solve it.
    –McD
     

    0
    #64128

    McD
    Participant

    You might have a little better response posting on the software forum.
    What you are asking is like comparing apples to shovels.  Although you might use a shovel to plant an apple tree, a comparison between the two doesn’t come immediately to mind.
    CMM(i) and SS are similar.  The Six Sigma DMAIC process can be a useful tool to an organization attempting to improve its CMMI results.  The Six Sigma DFSS process can be a useful part of the software process for an organization with a reasonable level of maturity.
    Conversely, an organization seeking to improve its software development process using DMAIC might well look to the CMMI for process improvement opportunities.
    So while the two aren’t totally unrelated, they aren’t replacements for each other either.
    Six Sigma provides, through DMAIC, a well-defined methodology for improving a process’s capability.  Through DFSS, it provides a methodology for implementing products or processes which meet customer needs.  CMMI provides a measurement framework which can give one some insight into how a particular software process measures up to practices which have been proven to be successful.
    –McD
     

    0
    #130281

    McD
    Participant

    This thread has been moved to the IT/Software discussion forum.

    0
    #130157

    McD
    Participant

    That is the only difference between the two.
    Except that the second calculation is only valid if you test the entire population, or substantially all of the population.  In practice, I’m not so sure it matters as long as the sample is large, whatever that means. 
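    Assuming the two calculations being compared are the usual sample (divide by n-1) and population (divide by n) standard deviations, a minimal illustration:

```python
import numpy as np

# Hypothetical measurements
data = np.array([2.1, 2.4, 1.9, 2.2, 2.0, 2.3])

# Sample standard deviation (divides by n - 1): use when the data are a
# sample drawn from a larger population (Excel's STDEV).
print(np.std(data, ddof=1))

# Population standard deviation (divides by n): only appropriate when the
# data are the entire population, or substantially all of it (Excel's STDEVP).
print(np.std(data, ddof=0))
```
    With a large sample the two converge, which is the "I'm not so sure it matters" point above.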
    –McD
     

    0
    #130144

    McD
    Participant

    Naw … we should expect Stevo to work for his supper
    –McD
     

    0
    #130143

    McD
    Participant

    I guess I’m with billy b, but not just somebody; I’d like to see one of the big guns check in on this.
    I’ve only ever worried about Cp and Cpk.  There are lots of articles on this site on the subject.  As you have discovered, they tend to be somewhat contradictory, sometimes even within the same article.  Some of the articles say things that are just flat wrong.
    It appears to me that there may be an advantage to Ppk when the process has a slow drift.  In this case, Cpk will lead you to suspect you are producing fewer defects than you actually are.
    I’ve not seen any compelling case for Cpm/Ppm.
    –McD
     

    0
    #130111

    McD
    Participant

    Cpm is there to confuse the tourists.  I’ve never actually seen it used in the wild.
    Cp can grossly overestimate the capability of the process when the process is not centered.  If you think through it a bit, Cpk probably underestimates the capability of the process, since it more or less assumes that both spec limits are moved in, if you understand what I am saying, compared to Cp.  But clearly, unless the process is centered between the spec limits, or very close to it, Cpk is a whale of a lot better capability indicator than Cp.  Even if the process is centered, Cp is no better than Cpk.  For a centered process, Cp = Cpk.
    –McD
     

    0
    #130065

    McD
    Participant

    Generally, you want to look at the goodness of fit tests first, then the p’s of the coefficient and constant.  Minitab gives you three different goodness of fit tests, although they usually return similar p’s.
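    Minitab is the natural place to do this, but purely as an illustration of the order of checking (goodness of fit first, then the p-values of the constant and coefficient), here is a sketch in Python with simulated data:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Simulated data: x = hours of inspection prep, y = defect escaped (1) or not (0)
rng = np.random.default_rng(1)
x = rng.uniform(0, 4, 200)
p_escape = 1 / (1 + np.exp(-(1.5 - 1.2 * x)))
y = (rng.uniform(size=200) < p_escape).astype(int)

fit = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial()).fit()

# A deviance-based goodness-of-fit check first; a small p-value suggests the
# model does not describe the data well.  (With ungrouped 0/1 data this
# chi-square approximation is rough; Minitab's grouped tests are better.)
print(round(stats.chi2.sf(fit.deviance, fit.df_resid), 3))

# Then the p-values of the constant and the coefficient.
print(fit.pvalues.round(4))
```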
    –McD
     

    0
    #130031

    McD
    Participant

    This isn’t that complicated.
    Cp is simply a way of comparing the spread of your process to the distance between the specification limits.  If your process were perfectly centered between the spec limits, this would be an indication of the process’s capability to produce on-spec product.
    However, your process might not be centered.  So what Cpk does is essentially take the tougher of the two spec limits and use that one.  This gives you a better indication of whether your process can perform as it needs to.
    Essentially, it is Cp, not Cpk, that only gives you half the picture.  It “pretends” that your process is centered and ignores the fact that it might not be.  By using those “halves” in the Cpk calculation, you get the whole enchilada.
    –McD
     

    0
    #64122

    McD
    Participant

    LS
    One approach I have seen work for a larger company is to segment the company into businesses, functions, whatever makes sense.  Then have a program office for each segment.  The program office gets a budget, and the program manager works with business/functional leaders to understand the business/function priorities and initiatives, and with I/T to understand company-wide initiatives.
    Projects are required to document the business case before any consideration by the program office.  Generally, some business person can articulate the need and make the benefits case, but typically some I/T person will need to size the request.  This is a problem because, although it takes non-zero time, the time spent really needs to be constrained.  This process can easily become a career, so firm guidelines and templates are needed to ensure the right level of detail.  There could be a need for some category of specialist who is skilled at analyzing these requests and checking for reasonableness of the business case.
    Now the program office can match up these requests to the business/function priorities and parcel out projects to I/T.  As new needs come up, the program office is in a good position to understand priorities and what projects can be delayed/killed in order to make room for a new initiative.
    This sort of machinery can only work in a large organization which has recognized the strategic value of I/T.  In a smaller shop, some of the ideas might work, but the relatively large manpower investment probably isn’t realistic.
    If the organization really doesn’t “get it”, then all I/T can do is be hard-assed about what projects they will take.  Simply review the business cases, and initiate projects based on value.  Projects without a business case can get done whenever I/T has nothing else to do.  If that ever happens, you have too many I/T people.
    –McD
     

    0
    #64121

    McD
    Participant

    If you simply do the “Graphical Summary” in Minitab (which you should do first for any set of data just to have a peek at it), in the upper right hand corner it shows the results of the Anderson-Darling normality test.
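    If you don’t have Minitab handy, the same test is available elsewhere; a minimal sketch in Python with simulated data:

```python
import numpy as np
from scipy import stats

# Simulated measurement data
rng = np.random.default_rng(7)
data = rng.normal(loc=10.0, scale=0.5, size=60)

# Anderson-Darling test for normality: the statistic Minitab's
# Graphical Summary reports.
result = stats.anderson(data, dist="norm")
print(result.statistic)
for crit, sig in zip(result.critical_values, result.significance_level):
    verdict = "reject" if result.statistic > crit else "cannot reject"
    print(f"{verdict} normality at the {sig}% significance level")
```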
    –McD
     

    0
    #129962

    McD
    Participant

    *Lean cannot bring a process under statistical control
    **Six Sigma alone cannot dramatically improve process speed or reduce invested capital.
    I suspect if you draw a clear enough line around “Lean tools” and “Six Sigma tools” then neither methodology will be able to accomplish anything.
    If you can’t bring a process under control, then how on earth do you expect to be able to improve it?  If you can’t improve the process, then what is the point of reducing variation?  I can make the variation zero by simply stopping the process.
    A bottleneck is a defect in any process.  Since it is a bottleneck, does that mean I can’t attack it with SS?  Since it is a defect, does that mean I can’t attack it with Lean?  Get over it.  There is only one golden rule — “show me the money!”
    Playing the name game is counterproductive.  I agree with Mike that sometimes you have to play the hand you are dealt, but I would expect a top consultant like Mike would make changing that hand a high priority.
    –McD
     

    0
    #129373

    McD
    Participant

    First of all, don’t stress too much over the statistics.  Six Sigma is about change, and while the statistics are an important tool, it’s not what SS is about.
    If you think it through, in SS you are making data based decisions, and to do that, you need to understand the extent you can believe the data.  So what you need to do is understand hypothesis testing first.
    Sometimes, you need to run some experiments to get the data.  You want to run these as efficiently as possible, so you need to understand DOE.
    Finally, when you have some data you are going to change something, and to understand where you want to change it to, you need a model.  So some model building skills are necessary.
    Of course, this covers quite a bit of territory.  Hypothesis testing particularly has all sorts of nooks and crannies to explore.  But more important is taking a methodical, scientific view of the problem, and, especially in transactional processes, recognizing that managing change can be a big part of the problem.
    –McD
     

    0
    #64112

    McD
    Participant

    Yes, of course.  Your customer experienced the defect so it escaped your test phase, and it escaped your development process.  I suppose you could argue that beta is part of your “test” phase, but I would be inclined to call it an escape since your customer saw it.
    –McD

    0
    #128864

    McD
    Participant

    One thing Robert said is, I think, key: "for an oral presentation I will confine the graphs to the couple of Y’s that are of the most concern"
    Regression is but one part of some larger story.  You want to communicate that story.  The tool is really of little interest to most people.  Black Belts sometimes really fixate on the tools to the detriment of the end result.
    Tell the story that the regression represents.  Save all those fit metrics for a big, fat, boring report aimed at the geeks.
    –McD
     

    0
    #64106

    McD
    Participant

    sanjiv
    At a high enough level, there is no difference.  Constructing software is a process, like any other process, and like any other process it succumbs to process improvement.
    As you dig deeper, there are a few differences.  Software development is a transactional process, and so the statistical tools employed tend to be a little different than manufacturing.  Maybe not so much different in the specifics, but really different in the emphasis.  When we are building models of parts of our processes, for example, our data is frequently categorical.  So we will use logistic regression, as an example, more frequently than one would in manufacturing.
    Also, often what we are measuring is people.  People tend to learn, and this makes gauge studies difficult or impossible in some cases.  It also means that we need to be careful about some of the measurements and/or solutions we put in place because people will try to give you the answer you want.  Sometimes measurement systems have implied incentives, and people respond to incentives, even if they are well hidden.  Machines do not.  This requires some extra care.
    While engineers in a manufacturing facility frequently understand their domain, software engineers often tend to ignore the body of knowledge of software engineering.  When we are looking for improvement opportunities, sometimes we have to refresh ourselves on what is already known in the industry.  As often as not, our own processes don’t incorporate these learnings.
    So yes, it is perhaps different in some subtle ways, but it is also very much the same.
    –McD
     

    0
    #128359

    McD
    Participant

    Ouch!
    I would look at the old system, see where there may be opportunities for improvement, and keep in mind, as you apparently already realize, that the data will be useful for later process improvement.
    I would avoid leaping into some software system until you have some experience with a manual system.  Software does weird things to people, and often the software becomes the goal and the real purpose is forgotten, or lost due to quirks in the software.
    Get a system working that provides the data you need, then see if automating that system makes sense.
    –McD
     

    0
    #128315

    McD
    Participant

    I generally find that QFD is a nice predecessor to a Pugh Matrix.  Of course, a thorough VOC is a necessary requirement to start the QFD.  Before the VOC you need … well, you get the idea.
    Understand how good you need to be, at a very detailed level.
    –McD
     

    0
    #128314

    McD
    Participant

    Best – l’Escaut in Terneuzen, the Netherlands
    Worst?  I have a short memory for bad experiences.
    –McD
     

    0