Why Cataloguing Thousands of Essays Still Doesn’t Tell You Whether Student Writing Has Improved

According to a recent article in Canada's largest national newspaper, students and young adults have not become worse writers since the advent of new(ish) technologies like text messaging, Twitter, and Facebook. If anything, says the article, writers have become better over time, in part thanks to new opportunities to write and create with these new media. To substantiate this claim, the article cites "some hard data" published by Stanford's professor of writing and rhetoric, Andrea Lunsford. The Lunsford study examines college Freshman Composition papers from as early as 1917 through 2006, in four rounds: 1917, 1930, 1986, and 2006. For each of these years, Lunsford reports data that might suggest aspects of writing quality, including the average paper length and the average error rate per 100 words. Lunsford reports these particular years because they are the years in which either she or previous scholars examined samples of student writing. As the chart below shows, Lunsford asserts that (i) student writing is no more error-prone than it was nearly 100 years ago, and (ii) students write significantly more words per composition.

[Chart: average paper length and average error rate per 100 words in Freshman Composition papers, 1917–2006]

Based on these findings, the Canadian article that I initially mentioned concludes that student writing has improved. However, the Lunsford study is still a far cry from providing definitive proof that student writing has actually improved. I can think of several problems:

  1. The data analyzed do not reflect the “current” state of student composition described in the article. The most recent sample dates from 2006, the same year Twitter launched (July 2006), so Twitter’s surge in popularity and any impact it has had on writing style are unlikely to be captured. The same applies to the other social media cited, which have also grown in popularity since then and now reach students much earlier in their academic careers than they did in 2006.
  2. Word processing software used by the more recent students usually corrects, or at least flags, spelling and grammar errors. Many of a student’s errors are therefore fixed (often at the computer’s suggestion) before the paper is turned in. A flat or rising observed error rate despite such technology suggests that students are actually making more errors, not fewer (see the sketch after this list).
  3. The compositions are evaluated out of context. Most importantly, the evaluators don’t consider the nature of the assignments for which students submitted their writing. Absent further contextual clues about the assignments, evaluators might erroneously conclude that students in 1917 wrote incredibly short essays because they had little to say. Suppose instead, however, that we later learned that the papers from 1917 were written in response to an assignment that limited response length to no more than a paragraph. In this case, students in 1917 were not necessarily any more or less expressive than their modern counterparts.
  4. Because the Lunsford article draws on four separate studies conducted as much as 90 years apart, the methodology for classifying errors and counting words may vary widely from study to study, making comparisons difficult.
  5. Probably the biggest objection to using this as proof of improvement in writing quality is that the two variables studied, paper length and error rates, are remarkably poor proxies for actual writing quality. We don’t have to stretch our imaginations much to envision long papers that are absolutely horrid pieces of writing. Similarly, when we picture prose that is devoid of spelling or grammatical errors, we shouldn’t automatically assume that those papers are engaging or persuasive.
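To make the second objection concrete, here is a minimal back-of-the-envelope sketch in Python. The numbers (an observed 2 errors per 100 words in both eras, and an assumed 50% of errors caught by modern spelling and grammar checkers) are illustrative assumptions, not figures from the Lunsford study.

```python
# Back-of-the-envelope sketch: if modern tools silently fix a share of errors,
# an unchanged *observed* error rate implies a higher *underlying* error rate.
# All numbers below are illustrative assumptions, not data from the study.

observed_rate_1917 = 2.0   # hypothetical errors per 100 words, no digital tools
observed_rate_2006 = 2.0   # hypothetical errors per 100 words, same observed rate

catch_rate_2006 = 0.5      # assume spell/grammar checkers remove half of the
                           # errors before the paper is turned in

# Underlying (pre-correction) error rate implied by the observed 2006 rate:
underlying_rate_2006 = observed_rate_2006 / (1 - catch_rate_2006)

print(f"1917 underlying rate: {observed_rate_1917:.1f} errors per 100 words")
print(f"2006 underlying rate: {underlying_rate_2006:.1f} errors per 100 words")
# Under these assumptions, the 2006 writers would actually be making twice as
# many errors per 100 words before the software intervenes.
```

The point is not the particular numbers, only that a flat observed error rate says little about underlying skill once the tools doing part of the correcting have changed.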

For the above reasons and more, the cited study is poor support for the claims in the associated article.

