Monday, April 30, 2012

Are Robo-Readers the Problem or Are We?


As soon as I heard the prompt for this essay, I knew what my stance would be: it’s completely ridiculous to think that computers can grade writing effectively. However, as I read the sources, I realized there was more to the topic than first appeared.
The advantages of “robo-readers” are obvious. They are quite simply more efficient and cost-effective than teachers at grading essays. For instance, the e-Rater “can grade 16,000 essays in 20 seconds” (Michael Winerip, “Facing a Robo-Grader? Just Keep Obfuscating Mellifluously”). I doubt there will be any folk songs about the teacher who can beat the machine in this scenario. Moreover, with new government policies and an ever-increasing amount of standardized testing, the sheer volume of essays to grade is nearly insurmountable. As Winerip said, “there’s got to be some way to keep up with this stuff” (Michael Winerip, “Robot Eyes As Good As Humans When Grading Essays”). “Robo-readers” could provide the solution.
Unfortunately, the drawbacks could be more than enough to prevent them from ever coming on the scene. The big question is this: “can a machine that cannot draw out meaning, and cares nothing for creativity or truth, really match the work of a human reader?” (Steve Kolowich, “A Win for the Robo-Readers”). Many sources say no. The biggest issue is that while a machine can evaluate sentence structure, diction, and mechanics, it cannot judge the argument itself. The article “How the e-rater Engine Works” lists the features the “robo-reader” looks for in students’ writing, but nowhere in that list is there anything about the legitimacy of the argument or the appeal of the writing. Computers may be able to analyze a few words here and there, but it is, quite simply, impossible for them to judge what good writing actually is.
However, despite this limitation, studies reveal a disturbing fact: “In terms of being able to replicate the mean [ratings] and standard deviation of human readers, the automated scoring engines did remarkably well” (Steve Kolowich, “A Win for the Robo-Readers”). This may seem like a success, but the fact that teachers are looking for the same basic and irrelevant criteria as the machines is very much a loss. Perhaps instead of questioning whether or not we should be using robo-readers, we should be questioning the methodology of our own teachers. Perhaps instead of increasing the amount of useless writing students are forced to do, we should grade the writing they already do with more of an emphasis on what’s important.
