Volume 13, Issue 1: 2020
Editors' Introduction, Volume 13, Issue 1
by Diane Kelly-Riley and Carl Whithaus
Co-equal Participation and Accuracy Perceptions in Communal Writing Assessment
by Vivian Lindhardsen, Teachers College, Columbia University
The present study examines the extent of raters' co-equal engagement and accuracy perceptions in a communal writing assessment (CWA) context, where raters collaborate to reach final scores on student scripts. Results from recorded discussions between experienced CWA raters as they deliberated to reach a final score, supplemented with the raters' retrospective reports, show that, although some raters were more verbose than their co-raters, they displayed signs of co-equal engagement and reached what they perceived to be the most accurate scores possible for the student scripts. This study supports a hermeneutic approach to examining validity in writing assessment.
Keywords: CWA, score accuracy, hermeneutic, co-equal, collaborative assessment, score negotiation
Measuring Civic Writing: The Development and Validation of the Civically Engaged Writing Analysis Continuum
by Linda Friedrich, WestEd, and Scott Strother, WestEd
As youth increasingly access the public sphere and contribute to civic life through digital tools, scholars and educators are rethinking how civically engaged writing is taught, nurtured, and assessed. This article presents the conceptual underpinnings of the National Writing Project's Civically Engaged Writing Analysis Continuum (CEWAC), a new tool for assessing youth's civically engaged writing. Drawing on qualitative analysis of expert interviews and the literature, it defines four attributes of civically engaged writing: employs a public voice, advocates civic engagement or action, argues a position based on reasoning and evidence, and employs a structure to support a position. The article also presents reliability and validity evidence for CEWAC. The study finds that CEWAC has a moderate to high level of exact agreement and a high level of exact or adjacent agreement. Covariation analyses showed that, even with similar scoring patterns, CEWAC's attributes retain at least a moderate level of independence. This evidence, coupled with robust qualitative evidence around reliability and validity, establishes CEWAC's strong technical properties. The findings suggest that CEWAC can be used both in research and in the classroom to make visible attributes of civically engaged writing that are often overlooked in traditional assessment frameworks.
Keywords: public writing, civic engagement, writing assessment, rubric, reliability
Understanding Proficiency: Analyzing the Characteristics of Secondary Students’ On-Demand Analytical Essay Writing
by Vicky Chen, University of California, Irvine; Carol B. Olson, University of California, Irvine; Huy Quoc Chung, University of California, Irvine
This study investigated the different characteristics of not-pass (n = 174), adequate-pass (n = 173), and strong-pass (n = 114) text-based, analytical essays written by middle and high school students. Essays were drawn from the 2015-2016 Pathway writing and reading intervention pretests and posttests. Results revealed that the use of relevant summary was an important difference between not-pass and adequate-pass essays: significantly more adequate-pass essays used summary in a purposeful rather than general way. In contrast, the major characteristics that set strong-pass essays apart from adequate-pass essays involved providing analysis and including a clear conclusion or ending. Factors that affected these characteristics, such as whether the writer made claims and comments about the text, are discussed, and some instructional strategies are suggested.
Keywords: writing proficiency, writing instruction, adolescent literacy, text-based analytical writing, on-demand writing assessment
Collaborative Placement of Multilingual Writers: Combining Formal Assessment and Self-Evaluation
by Dana Ferris, University of California, Davis; Amy Lombardi, University of California, Davis
Placement of multilingual writers within writing programs is an important and challenging issue. If students perceive that the placement process is rigid and unfair, this perception may affect their attitudes and motivation while taking courses in the writing program. The purpose of this study was to see whether a specific subgroup of students (n = 65) in a large university writing program for multilingual students could be successful if allowed to collaborate, with guidance, in their own placement. Various data were collected about these students in their first quarter after matriculating into the writing program: instructors' initial ratings, students' outcomes in their initial course (final portfolio scores and course grades), and students' satisfaction with their placement after they had completed the course (via a brief survey). These data were compared to those of another group of students (n = 65) who received similar placement scores but were not given the choice to move up or down a level. Findings indicated that the pilot group succeeded at their chosen course levels at rates comparable to the comparison group and were satisfied with their placement choices. Implications for placement processes in multilingual writing programs are discussed.
Keywords: multilingual writers, writing assessment, directed self-placement, placement processes, student agency
The BABEL Generator and E-Rater: 21st Century Writing Constructs and Automated Essay Scoring (AES)
by Les Perelman, Massachusetts Institute of Technology
Automated essay scoring (AES) machines use numerical proxies to approximate writing constructs. The BABEL Generator was developed to demonstrate that students could insert appropriate proxies into any paper, no matter how incoherent the prose, and receive a high score from any one of several AES engines. Cahill, Chodorow, and Flor (2018), researchers at Educational Testing Service (ETS), reported on an Advisory for the e-rater AES machine that can identify and flag essays generated by the BABEL Generator. This effort, however, solves a problem that does not exist: because the BABEL Generator was developed as a research tool, no student could use it to create an essay in a testing situation. Test preparation companies, on the other hand, are aware of e-rater's flaws and incorporate strategies designed to help students exploit them. Such test prep does not necessarily make students stronger writers, just better test takers. The new version of e-rater still appears to reward lexically complex but nonsensical essays, demonstrating that current implementations of AES technology remain unsuitable for scoring summative, high-stakes writing examinations.
Keywords: automated essay scoring (AES), BABEL Generator, writing constructs, writing assessment, fairness
Book Review: Labor-Based Grading Contracts: Building Equity and Inclusion in the Writing Classroom by Asao B. Inoue
by Shane A. Wood, University of Southern Mississippi
Grading writing, or judging language, can be difficult. Asao B. Inoue’s Labor-Based Grading Contracts problematizes traditional assessment practices that assess writing “quality.” Inoue explains how this type of practice operates to reproduce White supremacy because language standards are tied to historical White racial formations. He suggests an alternative assessment method, the grading contract, which is based on labor and compassion. If you find yourself dissatisfied with classroom grading practices or wanting to understand how writing assessment can be constructed to do social justice work, then Inoue’s Labor-Based Grading Contracts is a great read.
Keywords: grading contracts, race, writing assessment, labor
Editorial Board 2020