# Reporting of data / 6 Sigma in data entry environment


This topic contains 2 replies, has 3 voices, and was last updated by Jonathon Andell 15 years, 8 months ago.

November 10, 2003 at 7:03 pm #33825
As a newbie I’d love to learn from your expertise, and I appreciate your desire to help. My question is fairly straightforward.

In our environment we key data from documents into a database; there are 19 unique elements to each document. Last month we sampled 300 of these items and reported 30 errors, with corresponding quality at 90%. I am beside myself, because in my mind these 300 items could each have as many as 19 errors. The way I see it, we had 5,700 opportunities for errors and found 30, meaning our quality is really 99.47% with a 4.06 sigma.

If I am correct, how can I convince my superiors? They say they don’t want to report this number because historically we have shown quality in the 90% range, and it would look as if we manipulated the numbers. My whole point is that every other group in the country measures quality in the manner I laid out; as such, we look like crap in comparison to all of the other groups, and all I’m doing is leveling the playing field. I’m also at a point where all I hear is “fix” quality. I’d almost contend it’s not broken; we have opportunities to get better, but I don’t know that I’d define 30 errors in 5,700 opportunities as broken. The whole concept gets even more bizarre when you realize that of the 30 errors, 10 were typos. The number of opportunities for typos in this 300-item sample is roughly 45,000 (150 keystrokes x 300 items sampled). If you do the math this way, you can break quality into two unique subsets: knowledge-based accuracy (20 errors / 5,700 opportunities) and keying accuracy (10 errors / 45,000 opportunities).
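The arithmetic described above can be sketched in a few lines. This is just an illustration of the poster's own figures (19 fields and roughly 150 keystrokes per document are taken from the post; nothing else is assumed):

```python
# Sketch of the opportunity-based accuracy arithmetic from the post.
items = 300
fields_per_item = 19          # unique data elements per document
keystrokes_per_item = 150     # rough estimate given in the post

knowledge_errors = 20
keying_errors = 10

field_opportunities = items * fields_per_item          # 5,700
keystroke_opportunities = items * keystrokes_per_item  # 45,000

knowledge_accuracy = 1 - knowledge_errors / field_opportunities
keying_accuracy = 1 - keying_errors / keystroke_opportunities
overall_accuracy = 1 - (knowledge_errors + keying_errors) / field_opportunities

print(f"knowledge-based accuracy: {knowledge_accuracy:.2%}")   # ~99.65%
print(f"keying accuracy:          {keying_accuracy:.3%}")      # ~99.978%
print(f"overall (30 / 5,700):     {overall_accuracy:.2%}")     # ~99.47%
```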

I welcome your thoughts and appreciate your time.

November 20, 2003 at 9:41 am #92724

While measuring any process for failures or defects, it is essential to have clarity on what exactly we are going to measure and improve.

Before going into the technical details, let us get the ground rules clear:

1] Always compare apples to apples. If you want to express the results with the number of documents as the denominator, the numerator cannot be the number of errors over the sample; it has to be the number of documents with errors. This gets us to the concepts of “defects” and “defectives”, discussed below.

2] Metrics used for measurement have to be consistent over time, across geographies, etc., so that everyone talks the same language at all times.

Now, let me request you to think about your process in terms of “defects” and “defectives”.

When you say that you have 30 errors, please be sure that you are referring to the total number of mistakes that occurred in keying in those 300 docs. If so, we are talking about 30 “defects” in the process when sampled for 300 docs. Going by the defects logic, what you say is perfectly all right, and you probably need to translate it into a metric with more intuitive appeal, i.e. the DPMO, or you can use an attribute sigma calculator. Simple calculations with your figures give the following results:

DPMO: 5263, Sigma level: 4.06

It will be pretty easy to also convert your historical data to the above metrics to arrive at a baseline.

In order to gauge the impact of your improvement actions, you can perform the same calculations and compare. Here, a lower DPMO is better, and a higher sigma level is better. This was about defects.

On the other hand, “defectives” means the number of documents which were defective. Therefore, you need to find out how many documents those 30 errors were found in. Once you have this data, you can express it as a percentage of the total sample size, to get a metric which makes some sense, though not as much as DPMO and the sigma level.

The choice depends on what is more important to your customers! Using both of the above concepts in tandem can also lead to good insight into process behaviour. For example, if a particular data element goes into error more often than the others, then maybe the data entry operators need more education about that element, or there may be a system bug which causes repeated errors for the same data field. You can also easily construct a matrix in which you plot data element vs. operator to really know what is happening. This kind of insight may not come if you go the “defectives” way!
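The defects/defectives distinction and the element-vs-operator matrix GP suggests can both be tallied from one error log. A sketch with entirely made-up error records (the document IDs, field names, and operator names below are hypothetical, not the poster's data):

```python
# Defects vs. defectives, plus the data-element-vs-operator tally.
from collections import Counter

# Each record: (document_id, data_element, operator) -- illustrative only.
errors = [
    (1, "invoice_date", "op_a"),
    (1, "amount",       "op_a"),
    (7, "invoice_date", "op_b"),
    (9, "invoice_date", "op_a"),
]

defects = len(errors)                            # every mistake counts: 4
defectives = len({doc for doc, _, _ in errors})  # distinct bad documents: 3
matrix = Counter((element, op) for _, element, op in errors)

print(defects, defectives)
print(matrix.most_common(1))   # the element/operator pair with the most errors
```

Note how doc 1 contributes two defects but only one defective, and how the matrix immediately flags which field (and which operator) is driving the errors.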

Hope this helps!

GP.

November 20, 2003 at 6:50 pm #92764

Jonathon Andell

First of all, I concur with your numbers. Based on 19 “data fields” per document, times 300 documents, your opportunity count is correct. You can report your defect rate either as 30 errors in 5,700 opportunities, or as 30 errors in 300 “units.” I share your concern regarding keystrokes as the basis for counting opportunities.

Bear in mind that computations like throughput yield are based on defects per unit (DPU), which in your case is 30/300, or 0.10. When counting opportunities gets a bit “fuzzy,” I advocate sticking with DPU.
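For the record, the usual throughput-yield estimate from DPU is the Poisson approximation Y ≈ e^(−DPU). A minimal sketch with the poster's numbers (assuming that standard formula, which the reply alludes to but does not spell out):

```python
# DPU and the Poisson-approximation throughput yield, Y = e^(-DPU).
import math

defects, units = 30, 300
dpu = defects / units              # 0.10 defects per unit
throughput_yield = math.exp(-dpu)  # probability a unit passes defect-free

print(round(dpu, 2))                # 0.1
print(round(throughput_yield, 4))   # 0.9048
```

So a DPU of 0.10 implies roughly 90.5% of documents would come through with no errors at all, which lines up with the observed "quality in the 90% range."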

However, in your case either approach meets the number-crunching needs. For now, I’d suggest you put aside the statistician’s hat and ask: how do those defects impact this business? How many resources do we consume as a result of those errors? What’s the “Cost of Poor Quality” (COPQ)?

Generally, we cannot capture every cost attributable to those defects. In fact Deming said that well over 90% of such costs are unknown and unknowable. However, the fraction you will be able to capture probably will be surprisingly high.

If you can develop a credible estimate for COPQ, your leadership should be able to understand the need to improve. After all, dollars are their language…

The forum ‘General’ is closed to new topics and replies.