iSixSigma

Question On DMAIC Vs. DMADV


Viewing 12 posts - 1 through 12 (of 12 total)
  • #38748

    Stephens
    Participant

    I am working on a reporting project that attempts to improve a data quality reporting system (i.e., one that tracks underwriting input errors in the system).  Basically, the CTQ is the data quality measure.  However, the project charter states that the definition of data quality needs to be redefined in terms of what makes better sense.  The charter also states that there are a lot of manual steps in the report preparation process and that improved efficiency is a nice-to-have.
    This seems like a DMADV project, not a DMAIC project.  Does anyone out there have a different opinion?  Also, does anyone have ideas on how to make the main CTQ (the data quality measure) measurable?  It seems like an output, not a CTQ.
    Thanks,
    Craig

    #116441

    Mike Carnell
    Participant

    Craig,
    Answer two questions:
    1. What is a defect?
    2. How do you measure it?
    That is the easiest way to start any project.
    As far as whether it is DMAIC or DMADV – who cares? Get a decent charter (which you should have gotten from the Process Owner) and a financial analysis. You need to figure out whether the project is worth doing before you try to stick it in some category.
    Just my opinion.
    Good luck.

    #116442

    Stephens
    Participant

    Mike,
    Thanks for the reply.  Really, the project seems to be revamping the current way we measure data quality.  For example, the charter says “the scorecard metrics, calculations, and weighting need to be redefined”.  To me, the defect is that the reporting tool is not meeting customer expectations because the customer requirements have changed.  As far as measuring the defect is concerned, I’m assuming customer satisfaction with the current measuring system would be the best way to gauge that.  Any other thoughts?
    Thanks,
    Craig

    #116446

    Lomax
    Participant

    Hi Craig,
    1. If the big Y is data quality, which is definitely an output but is also a project delight parameter, a defect will be any deviation from the customer expectation in terms of data quality. For example: if accuracy of data is one CTQ, a defect can be measured as a financial error, a statistical error, or a typographical error.
    2. Craig, I also feel that this is a DMADV project if the task at hand is to set up the data capture mechanism / put up the data quality reporting system. It becomes a DMAIC project if the problem at hand is that the quality of the data (accuracy, % right, etc.) is not meeting requirements. In that case, setting up the right data capture mechanism can become a step in the M phase (by strengthening the measurement system).
    Thanks
    Neil

    #116447

    McD
    Participant

    Don’t put the cart before the horse!
    Do Measure.  Get everyone to agree on what it takes to be successful.
    Do Analyze.  Understand what knobs you have to turn.
    NOW, at the end of Analyze, ask yourself whether you can change the current process to achieve the measures you established.  If the answer is yes, it’s DMAIC.  If you need to reinvent the whole thing, it’s DMADV.
    But don’t stress over it until you have the data.  Go do Measure, go do Analyze, then do what we do in Six Sigma — make a data-based decision!
    –McD
     

    #116449

    Mike Carnell
    Participant

    Craig,
    Go back and ask them why it needs to be redefined (that is an answer not a problem). How did they decide that?
    Customer satisfaction as a metric is a difficult measurement. Even on a really good day it is fickle and reflects the most recent issue – good or bad – which makes it difficult to use to evaluate improvements. If you have to use it, take a look at how Lexus does their surveys.
    Just my opinion.
    Good luck

    #116450

    Stephens
    Participant

    Actually, I meant to say “gather customer requirements and probe deeper” and then translate these into CTQs – not so much a customer satisfaction measurement.  Thanks for the comments.

    #116456

    BTDT
    Participant

    Craig:
    Pondering
    1) Customer/agent signs and mails application
    2) Loop 1 start
    a. Receive and review application
    b. Receive and review requirements
    3) Make underwriting decision
    4) Loop 2 start
    a. Policy print and mail to agent/customer
    b. Coverage revision, mail back to underwriter
    5) Policy in force
    6) Commission issued and received
    Step 2 has a large cycle time driven by application defects.
    Step 2b has the largest and widest cycle time.
    Step 4 has a large cycle time driven by defects and changes.
    My concern is that someone has decided that errors, or tracking them, is the project defect, when the defect really is the long, variable cycle time. Not all data quality errors will affect the cycle time the same way. Some errors will affect the loop at step 2 while others will affect the loop at step 4. This is made complex by the internal customers (agents), who want the fastest first printing (step 4a) to secure the business with the policy holder, and the external customers (policy holders), who want the fastest cycle time to step 5.
    The data quality measurement is, in my opinion, one of your Xs, not a CTQ or project Y. The defect definition should be “Any time a policy takes longer than XX days, beginning when the application is signed and ending when the policy is in force”.
    The data quality issues are a little vague from your posting. They could include poor quality information as part of the application process (plan type, maximum client age), process variables not distinguishing the first from the second or subsequent underwriting reviews, or process variables not distinguishing the first from the second or subsequent policy printings.
    Finally, if the preceding project has already been completed, then I want to know the relationship between data quality errors and cycle time to be able to quantify the financial benefit in cash flow, revenue enhancement, or workforce utilization.
    BeenThereDoneThat

    #116461

    Advisor
    Participant

    Hi Craig,
    I think this is more of an issue with the measurement system. If the customer is not happy with the current measurement system and how we track the output, then you need to do a BPMS, not DMAIC or DMADV.
    I have mentioned below what BPMS is and the steps involved.
    Business Process Management System (BPMS) – a nine-step model that enables companies to model, deploy, and manage mission-critical business processes spanning multiple enterprise applications and corporate departments. BPMS is usually used for less mature processes to make them repeatable and reliable. The nine-step approach includes:
    1. Create process mission with the customers
    2. Document process
    3. Document customer and process requirements
    4. Identify output and process measures
    5. Build process management system
    6. Establish data collection plan
    7. Monitor process performance
    8. Develop dashboards with spec limits and targets
    9. Identify improvement opportunities
    This is my view.

    #116468

    Ron
    Member

    Since we are on a Six Sigma web site, let’s try to utilize the Six Sigma methodology.
    That methodology is the DMAIC process. Many companies try to put their spin on it, like GE with DMADV or the Joiner 9-step process.
    Bottom line: use what works for you and gets results, but having a baseline like the DMAIC methodology is the best approach – standardize, then improve.

    #116914

    Scott
    Member

    Craig,
    The truth is unless you have a valid customer specification, you really don’t know.  First, it sounds like you have an existing process for data capture and a lot of “improvement wishes”.  This sounds like two projects I had when I first joined banking to improve CRM data quality and later when I led customer delight improvement for the Fleet merger (lots of client record clean-up!). 
    In both cases I had to step away from “data quality” and ask how the data was used in the context of a larger process (map the overall process first and figure out what the data was used for) and basically apply MSA (measurements systems analysis) to the work. 
    The most difficult step was determining the attributes of the data record.  In service industries, like banking, it’s recorded information – such as name, address, etc.  In product/process industries – harkening back to my old days with DuPont – it’s things like polymer viscosity, etc.  The nuance is that in service industries the data is almost always just the data – a name and how it’s spelled – and operator entry error is most often the issue (fat-finger keying or omission, changed dates on birthdays).  In product/process you might get into tolerance analysis or propagation-of-error analysis to look at the uncertainty in a reported number.  Whew!
    The next step was sorting through levels of data accuracy.  Is it complete (pass/fail – is the data there)?   Is it valid (formatted correctly)? Is it “accurate”?  This latter element often requires some level of auditing the data.  For example, in a CRM system the only way I might know if I have Jane Doe’s address right is to ask her.  In my days at DuPont when I often had 6 major global customers that was easy.  In banking where we have something approaching 100 million customers with 3-4 accounts each we resort to statistical sampling for accuracy checks.
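    That accuracy sampling can be sized with the usual normal-approximation formula for a proportion; the 95% confidence level and ±2% margin below are illustrative assumptions, not figures from the post:

    ```python
    import math

    def sample_size(p_est: float, margin: float, z: float = 1.96) -> int:
        """Records to sample to estimate an accuracy proportion p_est
        to within +/- margin at the confidence implied by z (1.96 ~ 95%)."""
        return math.ceil(z**2 * p_est * (1 - p_est) / margin**2)

    # Worst-case planning value p = 0.5 and a +/-2% margin of error
    print(sample_size(0.5, 0.02))  # -> 2401
    ```

    With something approaching 100 million customers, the finite-population correction is negligible, so roughly the same sample size applies no matter how large the customer base grows.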
    The next step was translating how many “defects” in the data set constituted a “defective record”, as some data was “must have” and some was “wish to have” – particularly in the context of Patriot Act regulations in banking.  Finally, we translated the information into a simple yield calculation (i.e., 91% of records were not defective) vs. a customer’s end-use process specification.  This is where you can begin to assess the capability of your data to meet the needs of the process it’s serving.
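    A toy sketch of that record-level roll-up (the field names, check results, and must-have set below are invented purely for illustration):

    ```python
    # Each field carries (complete, valid, accurate) results from the
    # checks and audits described above; all values here are made up.
    records = [
        {"name": (True, True, True),  "address": (True, True, False), "fax": (False, True, True)},
        {"name": (True, True, True),  "address": (True, True, True),  "fax": (False, True, True)},
        {"name": (False, True, True), "address": (True, True, True),  "fax": (True, True, True)},
        {"name": (True, True, True),  "address": (True, True, True),  "fax": (True, True, True)},
    ]

    MUST_HAVE = {"name", "address"}  # a failed "wish to have" field is not a defective record

    def record_ok(rec: dict) -> bool:
        """A record passes if every must-have field is complete, valid, and accurate."""
        return all(all(checks) for field, checks in rec.items() if field in MUST_HAVE)

    data_yield = sum(record_ok(r) for r in records) / len(records)
    print(f"First Pass Data yield: {data_yield:.0%}")  # -> 50%
    ```

    The key design choice is the must-have/wish-to-have split: a missing fax number above costs a field-level defect but not a defective record.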
    Let’s say for discussion purposes that my “First Pass Data yield” was 50% (complete, valid, and accurate).  Clearly this would seem a candidate for DMADV if the customer specification is 90%.  However, my coaching is to do a blitz to determine if there are quick improvements first and then fall back to the DMADV approach.  Often you will find one step in the process that is generating most of the defects.  Mistake-proofing will often work wonders.
    Here are some great articles on data quality improvement you might find of use.
    http://www.nrsweb.org/traininginfo/dataquality.pdf
    http://www.dmreview.com/whitepaper/WID1013299.pdf
    http://www.thecre.com/pdf/imf.pdf
    Cheers and good luck data mending!
    Rick Otero
    Business Banking Q&P Executive
    Bank of America

    #116917

    Sinnicks
    Participant

    Do you know where I can get more information on your BPMS?


The forum ‘General’ is closed to new topics and replies.