A language researcher and former MIT professor is giving computerized essay-marking systems a failing grade.

Les Perelman, who recently retired from his writing research post at MIT, says that with the passage of several education bills and initiatives in the U.S., such as No Child Left Behind and the Race to the Top fund, computerized marking is becoming increasingly prevalent.


“Kindergarten through Grade 12 students are going to be tested every year and fairly frequently. Consequently, that costs a lot of money,” Perelman told Carol Off, host of CBC’s As It Happens, from his home in Cambridge, Mass., where MIT is also located.

“There was a desire to see if this could be done on the cheap by having machines rather than human beings actually grade papers.”

The problem, according to Perelman, is that the marking software is unable to grade written essays accurately.

To prove his point, Perelman developed the Basic Automatic B.S. Essay Language Generator, or BABEL for short – a program that produces entire essays from just a few keywords. The essays often receive very high scores despite consisting entirely of “complete, incoherent nonsense.”

“I did this as an experiment to show that what these computers are grading does not have anything to do with human communication,” Perelman said.

When Perelman generated an entire essay from the words “Fair Elections Act,” it scored 90 per cent.

Perelman says BABEL demonstrates the folly of using computerized marking to grade essays on standardized tests: the software produces scores that do not accurately reflect the quality of students’ submissions.

“I believe in something that I call ‘Perelman’s Conjecture’: that people’s belief in computerized essay marking is proportional to the square of the intellectual distance to the people who actually know what they’re talking about,” Perelman said.