Welcome to the Journal of Writing Assessment

Check out JWA's Reading List for news, announcements, and reviews of relevant writing assessment publications.

The Journal of Writing Assessment provides a peer-reviewed forum for the publication of manuscripts from a variety of disciplines and perspectives that address topics in writing assessment. Submissions may investigate such assessment-related topics as grading and response, program assessment, historical perspectives on assessment, assessment theory, and educational measurement, as well as other relevant topics. Articles are welcome from a variety of areas, including K-12, college classes, large-scale assessment, and noneducational settings. We also welcome book reviews of recent publications related to writing assessment and annotated bibliographies of current issues in writing assessment. Please refer to the submission guidelines on this page for author information.

The Journal of Writing Assessment's online ISSN is 2169-9232.


The Journal of Writing Assessment gratefully acknowledges the support of the following organizations:

DEPARTMENT OF ENGLISH

COLLEGE OF LETTERS, ARTS & SOCIAL SCIENCES

Volume 13, Issue 1: 2020

Editors' Introduction, Volume 13, Issue 1

by Diane Kelly-Riley and Carl Whithaus

Co-equal Participation and Accuracy Perceptions in Communal Writing Assessment

by Vivian Lindhardsen, Teachers College, Columbia University

The present study examines the extent of raters' co-equal engagement and accuracy perceptions in a communal writing assessment (CWA) context, where raters collaborate to reach final scores on student scripts. Results from recorded discussions between experienced CWA raters as they deliberated to reach a final score, supplemented with their retrospective reports, show that, although some raters were more verbose than their co-raters, they displayed signs of co-equal engagement and reached what they perceived to be the most accurate scores possible for the student scripts. This study supports a hermeneutic approach to examining validity in writing assessment.

Keywords: CWA, score accuracy, hermeneutic, co-equal, collaborative assessment, score negotiation

Measuring Civic Writing: The Development and Validation of the Civically Engaged Writing Analysis Continuum

by Linda Friedrich, WestEd, and Scott Strother, WestEd

As youth increasingly access the public sphere and contribute to civic life through digital tools, scholars and educators are rethinking how civically engaged writing is taught, nurtured, and assessed. This article presents the conceptual underpinnings of the National Writing Project's Civically Engaged Writing Analysis Continuum (CEWAC), a new tool for assessing youth's civically engaged writing. It defines four attributes of civically engaged writing using qualitative analysis of expert interviews and literature: employs a public voice, advocates civic engagement or action, argues a position based on reasoning and evidence, and employs a structure to support a position. The article also presents reliability and validity evidence for CEWAC. The study finds that CEWAC has a moderate to high level of exact agreement and a high level of exact or adjacent agreement. Covariation analyses showed that, even with similar scoring patterns, CEWAC's attributes hold at least a moderate level of independence. This evidence, coupled with robust qualitative evidence around reliability and validity, establishes CEWAC's strong technical properties. The findings suggest that CEWAC can be used both in research and in the classroom to make visible attributes of civically engaged writing often overlooked in traditional assessment frameworks.

Keywords: public writing, civic engagement, writing assessment, rubric, reliability

Understanding Proficiency: Analyzing the Characteristics of Secondary Students’ On-Demand Analytical Essay Writing

by Vicky Chen, University of California, Irvine; Carol B. Olson, University of California, Irvine; Huy Quoc Chung, University of California, Irvine

This study investigated the different characteristics of not-pass (n = 174), adequate-pass (n = 173), and strong-pass (n = 114) text-based, analytical essays written by middle and high school students. Essays were drawn from the 2015-2016 Pathway writing and reading intervention pretests and posttests. Results revealed that the use of relevant summary was an important difference between not-pass and adequate-pass essays: significantly more adequate-pass essays used summary in a purposeful rather than general way. In contrast, the major characteristics that set apart strong-pass essays from adequate-pass essays involved providing analysis and including a clear conclusion or end. Factors that affected these characteristics, such as whether the writer made claims and comments about the text, are discussed, and some instructional strategies are suggested.

Keywords: writing proficiency, writing instruction, adolescent literacy, text-based analytical writing, on-demand writing assessment

The BABEL Generator and E-Rater: 21st Century Writing Constructs and Automated Essay Scoring (AES)

by Les Perelman, Massachusetts Institute of Technology

Automated essay scoring (AES) machines use numerical proxies to approximate writing constructs. The BABEL Generator was developed to demonstrate that students could insert appropriate proxies into any paper, no matter how incoherent the prose, and receive a high score from any one of several AES engines. Cahill, Chodorow, and Flor (2018), researchers at Educational Testing Service (ETS), reported on an Advisory for the e-rater AES machine that can identify and flag essays generated by the BABEL Generator. This effort, however, solves a problem that does not exist. Because the BABEL Generator was developed as a research tool, no student could use it to create an essay in a testing situation. However, test preparation companies are aware of e-rater's flaws and incorporate strategies designed to help students exploit them. This test prep does not necessarily make students stronger writers, just better test takers. The new version of e-rater still appears to reward lexically complex but nonsensical essays, demonstrating that current implementations of AES technology remain unsuitable for scoring summative, high-stakes writing examinations.

Keywords: Automated essay scoring (AES), BABEL Generator, writing constructs, writing assessments, fairness

Book Review: Labor-Based Grading Contracts: Building Equity and Inclusion in the Writing Classroom by Asao B. Inoue

by Shane A. Wood, University of Southern Mississippi

Grading writing, or judging language, can be difficult. Asao B. Inoue's Labor-Based Grading Contracts problematizes traditional assessment practices that assess writing "quality." Inoue explains how this type of practice operates to reproduce White supremacy because language standards are tied to historical White racial formations. He suggests an alternative assessment method, grading contracts, based on labor and compassion. If you find yourself dissatisfied with classroom grading practices or wanting to understand how writing assessment can be constructed to do social justice work, then Inoue's Labor-Based Grading Contracts is a great read.

Keywords: grading contracts, race, writing assessment, labor

Editorial Board 2020