
Does the SAT Reward Second-Rate Writing?

The new SAT writing test may privilege quantity over quality.
September 1, 2005

Good writing is easy to describe and hard to produce. Most people agree that it’s accurate, economical, nuanced, and thoughtful. In March, Leslie Perelman, head of MIT’s Writing Across the Curriculum office, attended a panel discussion of the new SAT writing test hosted by the National Council of Teachers of English; he came away convinced that the new test rewards exactly the opposite qualities. At the event, representatives of the College Board presented the audience of teachers with examples of high-scoring essays. According to Perelman, the examples contained factual errors, overblown prose, and tangential arguments. After examining nearly 50 of these examples, Perelman concluded that “there was a 95 percent correlation between length and grade. Nothing human is that well correlated by chance. People are unpredictable.”

The new essay section first appeared on the SATs this year, so there are no long-range data to confirm or refute Perelman’s assertions. At a minimum, the test reveals a student’s ability to follow directions and address a topic. But Perelman doesn’t believe that the College Board has carried out its intention to test students’ skill at developing arguments. The section gives students 25 minutes to write an essay on, say, whether it’s ever okay to tell lies. “It’s a 25-minute test, so think for five minutes and write for 20,” says Perelman. “If somebody thinks for 10 or 15 minutes, they’re not going to write very much. But they’re going to write something more thoughtful.” Perelman believes, however, that better-conceived essays are still likely to be graded according to word count.

Perelman is in a good position to complain: he designed his own writing test, which has been used successfully at MIT since 1998. Other colleges, including Caltech, Louisiana State University, and Cornell University, have joined MIT in using Perelman’s writing placement examination for their incoming students. In Perelman’s test, students are given four or five articles from magazines such as the Atlantic, the New Yorker, and Scientific American. The articles all make points about a common theme; in the 2005 test, for instance, the theme was global warming. Five days later, students are given their topics and asked to take a few hours to write 1,000-word essays summarizing the articles and advancing their own arguments. Perelman’s test gives students the opportunity to reflect, revise, and show their ability to read and understand challenging writing intended for adults. The test is designed to measure students’ skills in academic writing by mirroring an academic situation, which is where the SAT writing exam falls short, Perelman maintains.

After Perelman voiced his concerns in an interview with Michael Winerip of the New York Times, the College Board released a statement addressing his criticism: “There is a simple explanation for this correlation. The College Board’s goal in selecting samples for initial training and for practice tests is to find essays at each score point that demonstrate all criteria of that score point. And one important criterion is development.”

After the New York Times article was published, Perelman says, he received calls from local high-school teachers thanking him for pointing out what bad habits they were being asked to ingrain in their students. Nevertheless, MIT admissions will be considering applicants’ scores on the test and will require a writing score from students taking the ACT as well, starting in 2007.

Whether or not the scoring system for the SAT writing test privileges quantity over quality, it will influence high-school students’ education in writing. And Perelman, who has recently spoken to several news outlets about the test, including NPR and Salon.com, will undoubtedly remain vocal about its shortcomings.

By Catherine Nichols

