Measurement system analysis – essay grading
April 19, 2007 at 1:42 pm #46771
Hello,
I am part of the Six Sigma effort in our company – a BPO company. We are working on the recruitment process. All new hires in our company are supposed to appear for recruitment tests, and essay writing is one of the important parts of those tests. Now, here’s the problem – essay grading. There is a huge discrepancy in essay grading because different people are involved in the process. The purpose is to remove the inconsistency in the grading process irrespective of the type or number of people involved.
Per my understanding, the first step would be to make sure that whoever is involved in grading the essays has the same level of English proficiency. But this step itself is highly subjective.
Can anyone help me in setting up a process?
Thanks,
Pal

April 19, 2007 at 2:25 pm #154976
Jim Shelor
Pal,
What you are attempting to do is extremely difficult, as you pointed out.
One question I would have is how are you conducting the grading?
Do you have 1 person grading each essay, or do you have multiple people grading each essay?
One way to mitigate the difference is to have one person, for example your technical editor, grade all the essays.
Another way to mitigate the variation in the grading is to have 5 people grade each essay and take the mean of the 5 grades as the grade for the essay.
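A minimal sketch of that second idea, with entirely made-up numbers (the 0–10 scale, error sizes and grader count below are assumptions, not anything from this thread):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented setup: each essay has a "true" quality score on a 0-10 scale,
# and every grader adds their own random error to it.
n_essays, n_graders = 200, 5
true_quality = rng.normal(loc=6.0, scale=1.0, size=n_essays)
grader_noise = rng.normal(loc=0.0, scale=1.5, size=(n_essays, n_graders))
scores = true_quality[:, None] + grader_noise        # rows = essays, cols = graders

single_error = scores[:, 0] - true_quality            # one grader per essay
panel_error = scores.mean(axis=1) - true_quality      # mean of 5 grades per essay

print(f"Spread of grading error, single grader: {single_error.std():.2f}")
print(f"Spread of grading error, mean of 5:     {panel_error.std():.2f}")
# Averaging n independent graders shrinks the random spread by roughly sqrt(n).
```

The point is simply that averaging independent graders shrinks the random part of the grading error; it does nothing about a bias that all graders share.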
Determining whether the graders have the same level of proficiency would be as hard as, or harder than, the grading method you are using now.
I hope this helps,
Jim Shelor

April 19, 2007 at 2:44 pm #154979
Hi Pal,
I would work out some grading guidelines – unless of course you already have some. I’m thinking of something like listing the “requirements” you have for the recruits – 1. Basic knowledge of statistics :)
2. English proficiency
3. Creativity
4. etc.
Then go one level below and think about what the criteria are for demonstrating a basic knowledge of statistics – depending on the subject of the essay, of course. E.g.:
1.1 Correctly mentions two-sample t-tests in the context of the essay
1.2 Is aware of sampling strategy
1.3 etc.
If you have such grading guidelines then you can figure out where you have the biggest variability and work from there.
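A minimal sketch of that last step, assuming a hypothetical scoring sheet where three graders score the same essays against each guideline criterion on a 1–5 scale (the graders, essays and scores below are all invented):

```python
import pandas as pd

# Hypothetical scoring sheet: graders A, B and C each score essays 1-3
# against every criterion from the grading guideline (scores 1-5).
data = pd.DataFrame({
    "essay":      [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "grader":     ["A", "B", "C"] * 3,
    "statistics": [4, 4, 5, 3, 3, 3, 2, 3, 2],
    "english":    [5, 2, 4, 4, 1, 3, 5, 2, 4],
    "creativity": [3, 3, 4, 2, 2, 3, 4, 4, 4],
})

criteria = ["statistics", "english", "creativity"]

# For each criterion: how much do graders disagree on the *same* essay?
# (average of the within-essay standard deviations)
disagreement = (
    data.groupby("essay")[criteria].std().mean().sort_values(ascending=False)
)
print(disagreement)   # the criterion at the top is where grading is least consistent
```

Whichever criterion tops that list is where the guideline wording, or the grader training, needs the most work.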
I hope this helps a bit.
Regards
Sandor

April 23, 2007 at 11:07 am #155111
accrington
The example given in the Help section of Attribute Agreement Analysis in MINITAB is a study on five examiners who grade essays. It might be a useful place to start.
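For anyone who would rather script it than use MINITAB, a rough equivalent of that kind of agreement study can be sketched with Fleiss’ kappa from the statsmodels Python package (the essays and ratings below are invented, not the MINITAB example data):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: 10 essays, each graded by the same 5 examiners
# on an ordinal scale from -2 to 2.
ratings = np.array([
    [ 2,  2,  1,  2,  2],
    [-1, -1, -1, -2, -1],
    [ 0,  0,  0,  1,  0],
    [ 1,  1,  1,  1,  1],
    [-2, -2, -2, -2, -1],
    [ 0,  1,  0,  0,  0],
    [ 2,  1,  2,  2,  2],
    [-1,  0, -1, -1, -1],
    [ 1,  1,  0,  1,  1],
    [ 0,  0,  0,  0,  0],
])   # rows = essays, columns = examiners

# Fleiss' kappa measures agreement among the examiners beyond what chance
# alone would produce (1 = perfect agreement, 0 = chance level).
counts, _ = aggregate_raters(ratings)      # essays x categories count table
print(f"Fleiss' kappa: {fleiss_kappa(counts):.2f}")
```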
April 23, 2007 at 12:42 pm #155121
Welcome to the mad world of Six Sigma in BPO recruitment, and good luck getting some method to the madness.
You want to set up a process for essay grading. I see that part of your question also involves uniformity in grading. I would suggest the following steps:
1. Identify exactly what parameters you would be looking for, just the same way as you would for call quality. Define and list what your grading would depend on – spelling mistakes, missed or misused punctuation, incorrect sentence structures, wrong or missing prepositions or conjunctions, lack of clarity, etc.
2. Get some members (only the ones with really good language and grammar skills) to write down the grading system according to which parameters are critical. Arrive at a points system or a binomial (pass/fail) grading system (follow the pattern of a call quality grading system; it should be easy).
3. Find and fix a team of people who will take up all the essay assessments. It’s pointless to run an R&R if every Tom, Dick and Harry is involved in the process.
4. Get the team to understand the whole idea, the system and the objectives. Have them go through a training session if needed. Take in suggestions whenever suitable.
5. Start with a bunch of old essays. Let everyone on the team grade them for practice. Perform an R&R (“calibration”, as it is commonly called in BPO speak); a rough sketch of this step is included after the list below.
6. Once everyone is set within control limits, let them loose on the job.
7. Perform periodic R&Rs, with a proper analysis and follow-through for each one.
The principal things are:
a) Having a limited number of people grading the essays
b) Identifying the parameters to look for in the essays (so that English proficiency is judged against established rules of grammar rather than subjectively). Go down the route of looking for defects in composition, and avoid judging the content as far as possible.
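A rough sketch of the R&R in step 5, assuming a crossed study in which every grader scores every practice essay twice on a 0–100 point scale (all of the data below is simulated; if the grades are pass/fail rather than numeric, an attribute agreement analysis would be the tool instead):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated practice run: 5 graders each score the same 10 old essays
# twice (2 replicates) on a 0-100 scale.
rng = np.random.default_rng(7)
n_essays, n_graders, reps = 10, 5, 2
essay_effect = rng.normal(0, 10, n_essays)    # real essay-to-essay differences
grader_effect = rng.normal(0, 4, n_graders)   # systematic grader bias
rows = [
    {"essay": f"E{e}", "grader": f"G{g}",
     "score": 70 + essay_effect[e] + grader_effect[g] + rng.normal(0, 3)}
    for e in range(n_essays) for g in range(n_graders) for _ in range(reps)
]
df = pd.DataFrame(rows)

# Crossed two-way ANOVA (essay x grader) and its mean squares.
table = anova_lm(smf.ols("score ~ C(essay) * C(grader)", data=df).fit())
ms = table["sum_sq"] / table["df"]

# ANOVA-method variance components (negative estimates clipped to zero).
var_repeat = ms["Residual"]
var_inter  = max(0.0, (ms["C(essay):C(grader)"] - ms["Residual"]) / reps)
var_grader = max(0.0, (ms["C(grader)"] - ms["C(essay):C(grader)"]) / (n_essays * reps))
var_essay  = max(0.0, (ms["C(essay)"] - ms["C(essay):C(grader)"]) / (n_graders * reps))

grr = var_repeat + var_inter + var_grader          # variation from the grading itself
total = grr + var_essay
print(f"% of variance from grading (R&R): {100 * grr / total:.1f}%")
```

A small %R&R says most of the spread in scores comes from genuine differences between essays rather than from the graders, which is what you want to see before letting the team loose.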
My two pennies’ worth. Get back if you have any doubts/questions.

November 20, 2007 at 3:52 pm #165174
Pal,
My name is Curtis Atkisson. In response to your question, there exist many automated essay grading services that may do exactly what you are looking for. These programs address the objectivity issue very well. A program such as this would also save a lot of money in your graders’ time. Different programs use different techniques, and they each have their own pros and cons.
November 20, 2007 at 4:18 pm #165175
Great suggestion. I was getting ready to point to the same example. Nice demonstration of Attribute Agreement Analysis.
November 20, 2007 at 4:22 pm #165176
Curtis,
tsk, tsk, tsk…
“…Do not promote products, services or businesses on the discussion forum. The iSixSigma discussion forum is not a medium for user or business advertising. Violators will have their post removed and could be banned from further postings. Please read iSixSigma’s full policy on promoting products and services on the forum….”
The forum ‘General’ is closed to new topics and replies.