
Cross-posted at Inside Higher Ed

When I wrote about the launch of online education startup Coursera back in April, one of the things that most intrigued me was the description that founders Daphne Koller and Andrew Ng gave of their plans for a peer-to-peer grading system. I’ve been a critic of the rise of the robot-graders — that is, the increasing use of automated assessment software (used in other xMOOCs and online courses, as well as in other large-scale testing systems). While some assignments might lend themselves to being graded this way, I’ve been skeptical that automation is really the way to go for disciplines that require essay-writing, despite the contention that robo-graders score just as well as humans do. Coursera said it would offer poetry classes (a modern poetry class starts this fall), and I just couldn’t see how even the most sophisticated artificial intelligence in the world could grade students’ interpretations and close readings.

So Coursera’s plans for peer assessment sounded pretty reasonable.

The plans sounded reasonable to me, I admit, in part because I’ve used peer review a lot in my own classes. I like the idea of students writing for each other, not just for me as instructor. I think they benefit from seeing how their peers write and think, and I think the process of reviewing others’ work helps them recognize their own strengths and weaknesses in turn.

I found my students to be pretty fair with the assessments they gave their peers. They were neither too brutal nor too lax when they evaluated each other’s work. My anecdotal experience matches some of the research suggesting that when students assess their peers’ work, they score it similarly to the grades professors would give (although others have found that peer grades run higher).

But peer assessment in a class of thirty is very different from peer assessment in a class of several thousand, and based on some of the early feedback from several of the Coursera classes utilizing peer assessment, there are some very serious challenges in making it work at that scale.

Laura Gibbs, a literature and mythology professor at the University of Oklahoma who is currently enrolled in Coursera’s Fantasy and Science Fiction class, has been documenting on her blog some of the struggles that she and other students are facing with the peer feedback element of the class. This particular class does require “essay” writing — I have essay in quotation marks there as the submissions can only be between 270 and 320 words — and the essays are graded by peers with a score of 1, 2, or 3. Grading others’ work is also one of the requirements of the class: for every essay you submit, you must also grade four essays from fellow students. Some of the problems with peer feedback, according to Gibbs and others:

The variability of feedback:

Many students are unprepared to give solid feedback to one another, and the course has done little to prepare them to do so. In a global class like this one, there are issues with English as a second language, as well as the difficulties that college-level students in general have with their writing. Gibbs writes,

“In a class with 5000 active participants (although I think we are down to closer to 4000 active participants during the second week), there is going to be a whole range of feedback, from the very zealous people who give feedback longer than the essay itself, to the grammar police (yes, they are everywhere), to the ill-informed grammar police (the single most active discussion that I have seen on the discussion board was about US v. UK spelling - the Brits were not happy about being told that they needed to learn to use a spellchecker), and on down to the “good job!” people with their two-word comments, and finally the people who commented not in English or who offered incomprehensible comments that had been translated by Google Translate (or similar), and, at the bottom of the heap, the sadistic comments (your essay is bullshit, you are a complete idiot, I cannot believe I had to read this crap, etc.). Oh, and don’t forget the vigilante accusations of plagiarism based on misinterpretation of plagiarism detection software (yes, someone was accused of plagiarizing… from their own blog).”

The lack of feedback on feedback:

Although giving feedback is required in the Coursera science fiction class, there is no way for the students to give feedback on that feedback. Gibbs proposes rating the feedback in turn:

“The 3s would be for those people who are totally knocking themselves out on the feedback and doing a really super job (god love ’em)… most of the feedback responses would probably be 2s (people would have to decide for themselves how they feel about the very large group of “good job!” feedback providers)… and some of the feedback would be 1s, as a way to let people know that something went wrong.”

This could help the course instructors identify those who are continually leaving unhelpful comments.

(Sidenote: I see that Chuck Severance, who’s currently teaching the Internet History class, has removed the peer assessment component from its requirements. I guess this is one form of feedback on the feedback — a recognition by the instructor of what’s working and what isn’t working in the class.)

The anonymity of feedback:

No one knows who has assessed their work; no one knows whose work they’re assessing. While ostensibly meant to protect student privacy, this raises some serious problems: you can’t ask for clarification, you can’t gauge your feedback based on what you know about the author and their particular strengths and weaknesses, and, unfortunately, anonymity brings out Internet trollishness in some students who feel they can leave nasty comments without any repercussions.

The lack of community:

This is connected to the last point on anonymity. How does peer feedback work if students in the classes aren’t really peers? Sure, by a strict definition they are, I suppose, as they’re all enrolled in the same class. There are opportunities for them to introduce themselves on the forums, but participation in the forums isn’t required, and many students simply don’t visit them. As such, there is very little community created by the class itself — although some learners have created their own study groups and the like, both on- and offline. Can peer feedback really work in a setting where there is so little community and so little sense of reciprocity?

These are still early days for this type of grading mechanism in these new types of MOOCs. But even so, there are lots of resources that Coursera could be drawing from — instructors who have experience with peer feedback and those who’ve applied peer feedback to their own online courses. As it stands, one of the most intriguing things about the Coursera platform — that it wasn’t going to rely on robo-graders — may be one of its great weaknesses.

Photo credits: Espen Sundive

Audrey Watters

