Volume 6, Issue 1: 2013

Afterword: Volume 6, 2013

by Diane Kelly-Riley and Peggy O'Neill, Editors

Volume 6 of the Journal of Writing Assessment, our second as co-editors and the journal's third as an online, open-access publication, is complete. Looking back at the five articles we published, we see important themes emerging around the impact of digital technologies on writing assessment and the connections between rubrics and raters.

Two pieces address automated essay scoring (AES), an important topic in public and educational discussions about the assessment of writing. The first, "Automated Essay Scoring in Innovative Assessments of Writing from Sources," by Paul Deane and Frank Williams, of ETS, and Vincent Weng and Catherine S. Trapani, of Fordham University, specifically links research sponsored by ETS to the testing demands of the Common Core State Standards, an ongoing topic of interest for many in the K-college writing communities. In the research reported here, the authors examine the use of AES for writing from sources. They find that AES can achieve levels of accuracy similar to those of human raters in certain contexts, but that this accuracy does not hold when general and genre-specific rubrics are applied across different contexts. Automated scoring, they conclude, needs to be supplemented by additional sources of evidence (or assessment) to ensure that the entire construct of writing is assessed.

The second article to tackle AES is by Les C. Perelman, a long-time critic of the technology. Perelman critiques Mark Shermis and Ben Hamner's influential report, "Contrasting State-of-the-Art Automated Scoring of Essays." Many claims about the validity of AES and its potential use in writing assessments reference Shermis and Hamner's work. Perelman questions the methodologies Shermis and Hamner used to validate procedures and to compare essays scored by humans and computers. Because AES is seen as a viable tool for assessing student work, Perelman raises important issues that need to be addressed before high-stakes exams are scored via AES. Although they are very different types of scholarship, Deane et al.'s and Perelman's articles, taken together, extend the conversation about AES and suggest that serious limitations remain, particularly for high-stakes assessments.

The focus on the use of computer technology to assess student writing continues with "Big Data, Learning Analytics and Social Assessment," by Joe Moxley, of the University of South Florida. Moxley explores the assessment of student work using digital platforms, and his project demonstrates the ability to assess multiple sections of student writing across a large university setting. Moxley's project raises concerns about the use of "big data" and the role of surveillance in the teaching and assessing of writing. Digital programs such as the one Moxley used enable a view of student work that was previously difficult to assemble at all, let alone in real time while a class is in progress. Important questions need to be addressed regarding students' rights, teachers' rights, and the role of administrators who insert themselves into an instructor's class, among others. While Moxley reports on a locally developed system used across his composition program, the issues are relevant to many other online systems as well, especially those developed by private companies. Given the issues raised by these programs and the "big data" they generate, we think this article demonstrates the need for a statement of principles or ethics from our professional organizations about the use of such emerging "big data" enterprises.

The remaining two articles in this volume are also about technology, but the more traditional, low-tech kind associated with writing assessment: rubrics. In the first of these, "Using Appraisal Theory to Understand Rater Values: An Examination of Rater Comments on ESL Test Essays," Carla Hall and Jaffer Sheyholislami of the University of Ottawa examine rater comments on test essays written by students learning English as a second language. Their work furthers the scholarship on rubrics by examining how raters construct what counts as "good writing," especially as it applies to multilingual writers. Continuing the exploration of how teachers' values can serve as the basis for assessment criteria, they offer an innovative application of appraisal theory as a methodology for analyzing raters' evaluations of student work.

The second piece on rubrics, by Hee Jin Bang of the National Writing Project, examines the reliability of the National Writing Project's Analytic Writing Continuum (AWC), a rubric developed by NWP and used in a variety of contexts. Bang's analysis approaches reliability from consensus, consistency, and measurement perspectives. Not only does she aim to show the reliability of the AWC as an instrument to assess writing, but she also shows the significance of these types of scoring systems for the professional development of teachers of writing.

We would also like to direct you to some of the recent JWA Reading List reviews from this past year. The "JWA Reading List" provides focused reviews of publications important to writing assessment and contextualizes their relevance for writing assessment practitioners. Recent reviews include Teaching the New Writing: Technology, Change, and Assessment in the 21st Century Classroom, edited by Herrington, Hodgson and Moran (Teachers College Press, 2009); Writing Assessment in the 21st Century: Essays in Honor of Edward M. White, edited by Elliot and Perelman (Hampton Press, 2012); and Race and Writing Assessment, edited by Inoue and Poe (Peter Lang, 2012). If you would like to contribute a JWA Reading List review, please consult the submission guidelines.

The Journal of Writing Assessment has issued a call for manuscripts on the writing assessments connected to the Common Core State Standards. Please read more about this call for scholarship here. We are very interested in publishing thoughtful, well-researched scholarship related to these quickly emerging assessment mandates.

We would like to thank the reviewers who volunteered their time and expertise to review and comment on manuscripts submitted to JWA. We are indebted to them for their generosity and hard work:

Beverly Chin, University of Montana
Patricia Freitag Ericsson, Washington State University
Brian French, Washington State University
Susanmarie Harrington, University of Vermont
Sandra Murphy, University of California-Davis
Michael Neal, Florida State University
Louise Weatherbee Phelps, Syracuse University
Ellen Schendel, Grand Valley State University
Gail Shuck, Boise State University
Tony Silva, Purdue University
Lorrie Shepard, University of Colorado
Melanie Sperling, University of California Riverside
Sherry Swain, National Writing Project
Kathleen Blake Yancey, Florida State University

We would also like to thank Jessica Nastal-Dema, the Associate Editor of the Journal of Writing Assessment, for her significant contributions to JWA, and Tialitha Macklin, our editorial assistant, for her ongoing support.

Acknowledgements and thanks also go to the Department of English at the University of Idaho for assuming financial support of the Journal of Writing Assessment. We greatly appreciate the University of Idaho's commitment to supporting an independent journal that publishes scholarship by and for researchers and teachers of writing. Finally, thanks to the scholars who support JWA through their submissions and to our readers. Please contact us if you have any questions, concerns or ideas for JWA.