Measuring Opportunities


Viewing 4 posts - 1 through 4 (of 4 total)


    Here is a case for which I need help.
    Our team checks documents for errors, which could range from arrangement errors to typos to grammar to data errors. Each document is similar in nature, but they vary in length.
    How can we determine the number of opportunities for defects? How do I then proceed to calculate the quality of the “document checking staff”?



    Are you checking the documents electronically or on paper?  If you are checking electronically, you could count each word or keystroke as an opportunity for a defect.  If you are checking on paper, each page could be an opportunity for a defect (since a defective page would need to be retyped).
    To evaluate the “document checking staff”, you would need to randomly select documents, review them yourself as the expert, and compare the errors the staff member found against the errors you found.  The document you review should not show the errors the staff already found.  You could also have multiple staff members check the same randomly selected document and compare the errors each found against the errors found by the others.
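    The expert-vs-staff comparison above can be sketched in a few lines. This is a minimal illustration with made-up error labels, not a prescribed method; treating the expert review as ground truth, the staff member's effectiveness is the fraction of the expert's errors they also found.

    ```python
    # Hypothetical error lists: each label identifies one error found in a
    # document (page + error type). The expert review is taken as ground truth.
    expert_errors = {"p1-typo", "p2-grammar", "p3-data", "p5-arrangement"}
    staff_errors = {"p1-typo", "p3-data", "p4-typo"}

    found = expert_errors & staff_errors         # real errors the staff caught
    missed = expert_errors - staff_errors        # escapes: real errors not caught
    false_alarms = staff_errors - expert_errors  # flagged items the expert disagrees with

    # Effectiveness = share of the expert's errors that the staff also found.
    effectiveness = len(found) / len(expert_errors)
    print(f"caught {len(found)} of {len(expert_errors)}: {effectiveness:.0%}")
    ```

    The same sets also support the multi-reviewer comparison the post mentions: intersect or difference any two reviewers' error sets the same way.
    
    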



    Don’t get too caught up in the weeds on this.  As long as an opportunity is defined and is meaningful to you, that is what is important.
    If the type of error is meaningful (grammar, data, arrangement, as you mentioned), use each type as an opportunity (category); that way you can stratify your data by opportunity afterward.  If it isn’t, keep it as straightforward as possible – 1 document = 1 opportunity.
    The only thing to remember is that the greater the number of opportunities, the greater your sigma value will likely be.  However, if you use the same measurement system for your baseline and your improvement, what you are really interested in is the relative improvement from your baseline sigma value, so the number of opportunities is really irrelevant.
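    A quick sketch of that point, using the standard DPMO formula (defects / (units × opportunities per unit) × 1,000,000) and the conventional sigma level with a 1.5-sigma shift. The defect counts here are invented purely to show that the same 30 defects yield a higher sigma when you count more opportunities per document:

    ```python
    from statistics import NormalDist

    def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
        """Defects per million opportunities."""
        return defects / (units * opportunities_per_unit) * 1_000_000

    def sigma_level(dpmo_value: float) -> float:
        """Short-term sigma level, using the conventional 1.5-sigma shift."""
        return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + 1.5

    # Hypothetical: 30 defects found across 100 documents.
    narrow = dpmo(30, 100, 1)   # 1 opportunity per document -> 300,000 DPMO
    broad = dpmo(30, 100, 4)    # 4 categories per document  ->  75,000 DPMO

    # Same defects, more opportunities, higher sigma value.
    print(f"narrow: {sigma_level(narrow):.2f} sigma")
    print(f"broad:  {sigma_level(broad):.2f} sigma")
    ```

    Either definition works for tracking improvement, as long as baseline and follow-up use the same one.
    
    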
    As for measuring reviewer effectiveness, it sounds like you have to do a Gage R&R.  There is plenty of information on this on this site.
    Hope this helps,



    As for measuring reviewer effectiveness, it sounds like you have to do a Gage R&R.
    I’d be a little careful on this.  When you have something like a document that is quite unique and, more importantly, recognizable as an individual, it is pretty impractical to do a Gage R&R.  This does not mean, however, that you shouldn’t ask the questions.  It is important to understand just how well your measurement system can be expected to perform.
    Here’s the problem … if you hand a tester a dozen bolts one at a time, but within that, you hand him the same bolt twice, he won’t recognize it as the same bolt and he certainly won’t remember his previous measurement, so you can consider the replicate to be a new measurement.
    That isn’t practical with a document.  When you hand the reviewer the same document a second time, he will immediately recognize it as a document he reviewed before, and will probably also remember what he found, so you can’t count on it being a new measurement.  You are simply writing down the same result a second time.
    This is a similar problem to software inspection.  You can’t really do a complete Gage R&R, so you need to do what you can, and ask penetrating questions to gain some confidence in your measurement system.

