Volume 13, Issue 1: 2020

Editors’ Introduction, Volume 13, Issue 1

by Diane Kelly-Riley and Carl Whithaus


In this issue, the contributors explore intersections among the constructs of writing, grading, and assessing writing, as well as the social impact of writing assessment on test takers, students, and teachers. The authors in this issue of the Journal of Writing Assessment (JWA) are particularly attentive to how we account for the multiple dimensions of writing development and performance. In the opening article, Vivian Lindhardsen considers how our views of validity might shift when we situate writing assessment practices in more communal processes. Extending these ideas about writing assessment being embedded within communities, Linda Friedrich and Scott Strother report on the National Writing Project’s development of a rubric for civically engaged writing. Vicky Chen, Carol B. Olson, and Huy Quoc Chung’s article also focuses on secondary students’ writing; they examine the relationships among on-demand analytical essay writing, the construct of writing, and a large-scale standardized assessment. Dana Ferris and Amy Lombardi report on a pilot study in which they collaborate with multilingual writers to assess placement into the first-year writing curriculum using various sources of information. The continued influence of large-scale standardized assessments, and the possibility of automating essay scoring, remains a concern for Les Perelman. In his article, Perelman warns of the dangers of using Automated Essay Scoring (AES) software uncritically. He provides an overview of the current state of AES and responds to Aoife Cahill, Martin Chodorow, and Michael Flor’s defense of e-rater in the Journal of Writing Analytics (JoWA). Finally, Shane Wood’s book review essay turns to the emergence of contract grading within postsecondary writing assessment and teaching practices. Focused on Asao B. Inoue’s Labor-Based Grading Contracts, Wood discusses how postsecondary writing instructors and writing program administrators are developing assessment systems that reflect our growing understanding of writing constructs as embedded within the social contexts and discourse communities where they are created. Wood’s work, along with the other essays in this issue of JWA, speaks to the ways in which the fields of writing assessment and composition studies are engaging with developments in how we approach educational and psychological measurement.

Changes in writing assessment practices often involve rethinking writing assessment at a system level. Grading or assessing writing within a single classroom has consequences for the students in that room and has traditionally centered on the classroom instructor as responder and rater. In this issue’s first article, “Co-equal Participation and Accuracy Perceptions in Communal Writing Assessment,” Vivian Lindhardsen examines the concept of validity as situated in communal writing assessment (CWA) practices, which she describes as “a practice that involves at least two raters collaborating to reach a final assessment.” Such a process requires raters to consult with each other on the final evaluation, which in turn requires them to share an understanding of the construct they assess. Lindhardsen’s work extends the field’s conversations about the construct of writing and its social dimensions. She explores the dynamics of whether “communal writing assessments indeed demonstrate the kind of deliberate democracy that is sought after in communal assessments…[and to] uncover signs of asymmetry in experienced CWA interactions, as well as raters’ own perceptions of how accurate CWA scores are.” Her research has the potential to speak to developments not only in contract grading but also in approaches to writing assessment that consider the communal, the social, and the contextual.

Next, Linda Friedrich and Scott Strother, both researchers at the nonprofit agency WestEd, continue this examination of writing assessment as embedded within communities and within writers’ experiences. They explore the challenges of representing and measuring the construct of civically engaged writing in “Measuring Civic Writing: The Development and Validation of the Civically Engaged Writing Analysis Continuum.” Their work examines the National Writing Project’s Civically Engaged Writing Analysis Continuum, “an analytic writing rubric that assesses youth’s ability, both in academic and extracurricular settings, to engage in civic arguments about issues that are meaningful to them and their communities.” Friedrich and Strother note the multiple, public, and online contexts in which students engage in political discourse. These contexts are distinct from traditional academic genres such as essays and reports, and the authors sought ways to capture engagement in “any public writing for an audience beyond the writer’s immediate family and friends that focuses on civic issues of significance to the writer, the community, or the public.” Their work leads to an understanding of the construct of writing as one that employs a public voice and advocates for civic engagement or action.

While Vicky Chen, Carol B. Olson, and Huy Quoc Chung’s examination of a large-scale standardized assessment contrasts with Friedrich and Strother’s review of a rubric developed to include contextual and civically engaged elements, the two articles are also complementary. That is, Chen et al. investigate the ways in which a large-scale standardized assessment embodies the construct of writing through the way proficiency is determined in secondary students’ on-demand analytical essay writing. Given the widespread adoption of the Common Core State Standards, it is important to determine the variables included in this particular construct of writing. Chen et al. consider how social elements are both included and excluded in current writing assessment systems. The researchers define proficiency in terms of two nationally administered assessments: the National Assessment of Educational Progress (NAEP) and the writing portion of the Smarter Balanced Assessment Consortium (SBAC), one of the assessments used to evaluate performance on the Common Core State Standards. The researchers applied this definition of “proficiency” to student essays from the Pathway reading and writing intervention in California middle and high schools. They find “that the writing of students just below the level of proficiency as defined by such assessments straddles the line between pure knowledge-telling and the beginnings of knowledge-transformation, trying to move beyond a model of writing that simply involves retrieving and regurgitating information.” The authors note that “what truly differentiates proficient from non-proficient writers is making a bridge from knowledge-telling to knowledge-transformation.” That insight resonates with the increasing emphasis on fair and ethical uses of writing assessment in educational and psychological measurement circles.

Dana Ferris and Amy Lombardi’s “Placement of Multilingual Writers: Combining Formal Assessment and Self-Evaluation” continues this exploration of how writing assessments can be used in fair and ethically responsible ways. They report on the particular challenges of placing multilingual students, many of whom are international students, into postsecondary first-year writing courses. The authors piloted a new collaborative system that combined institutional placement information with directed self-placement (DSP) methods. Their collaborative assessment program challenges critiques of DSP by providing evidence that course outcomes can be maintained through a program that allows for student agency. By “giving students a voice in their own placement,” the assessment and placement model that Ferris and Lombardi advocate seemed to contribute to students’ “overall satisfaction with the process.” Ferris and Lombardi are not simply advocating for DSP; rather, they are advancing a model of how student agency can function within a placement system. That is, student input can contribute to increasing the fairness of institutional decisions about the best paths forward for students as they enter postsecondary study. The questions that Ferris and Lombardi explore are not only related to fairness but also address the validity of a writing assessment system used for placement.

Questions about the valid use of writing assessments are not limited to systems that rely on human raters. Indeed, developments in software, Natural Language Processing (NLP), and Latent Semantic Analysis (LSA) have led to vigorous discussion about the potential of Automated Essay Scoring (AES) systems. In “The BABEL Generator and E-Rater: 21st-Century Writing Constructs and Automated Essay Scoring,” Les Perelman takes stock of the current state of AES. Perelman writes in response to Aoife Cahill, Martin Chodorow, and Michael Flor’s article in the Journal of Writing Analytics, “Developing an E-Rater Advisory to Detect Babel-Generated Essays.” Perelman notes that he developed and reported on BABEL to demonstrate the limitations of current AES platforms. His original research showed the limitations of AES platforms and called into question the use of this type of scoring in high-stakes, large-scale assessments. Cahill et al.’s (2018) response does not address the critique that Perelman put forward initially. It is almost as if, where Perelman’s initial critique of AES claimed, “The emperor has no clothes,” Cahill et al.’s response is, “Look over there!” They have solved the problems that BABEL created for e-rater, but they have not solved the underlying problems with e-rater or AES systems more broadly. Most notably, Perelman’s research contributed to the decision to make the SAT Writing portion optional rather than a required component of the exam. His research also informed the Australian education system’s decision to halt its use of AES in high-stakes testing. Perelman continues his critique of AES by noting that these programs still rely on algorithms to approximate a writing construct. They do not evaluate students’ abilities to make meaning; instead, they determine the extent to which students can use features that make their writing appear more complex. Test preparation companies can use what they have learned about these features to give students strategies for taking such assessments, but those strategies do not necessarily yield better writers.

The issue concludes with Shane Wood’s review essay, which responds to Asao B. Inoue’s Labor-Based Grading Contracts: Building Equity and Inclusion in the Writing Classroom. Going beyond a book review, Wood provides an overview of contract grading and situates his essay and Inoue’s book within developing discussions of how writing constructs are embedded within social situations. Wood notes that Inoue developed labor-based grading contracts to mitigate inequitable structures within assessment practices in postsecondary settings. Wood argues that “Labor-Based Grading Contracts is an important theoretical contribution to scholarship on assessment and is useful to writing teachers wishing to implement an alternative classroom assessment model and/or be challenged to see how grading is tied to dominant standards of language that privilege some student identities over others.” Certainly, Inoue’s work challenges traditional concepts of validity and encourages us to investigate these approaches and their effects on students more thoroughly.

JWA has been at the forefront of these discussions about developing fair and ethical writing assessments. The special issues we have produced on these topics have engaged with emerging research and scholarship on validity, reliability, and fairness within educational measurement. Indeed, the last 20 years have seen remarkable changes within psychometrics. The 1999 edition of the Standards for Educational and Psychological Testing shifted the concept of validity to one that considers the use and interpretation of test scores in particular settings. As a result, the ways researchers and test developers account for the robustness of the writing construct have evolved, too. Writing assessments are increasingly examined through the lens of use. That is, an assessment’s validity lies not only in what it measures but also in how it is used and how it impacts students and test takers. The articles in this issue of JWA are engaged in working out the significance of these shifts within educational measurement and writing assessment, as well as in considering their impacts on writing instruction. The move to consider an assessment’s validity as situated within its context and its effects on test takers is still unfolding, and it is particularly important to understand how the construct of writing is represented across contexts and how it may have multiple dimensions.

JWA plans to remain engaged with this evolving conversation and will publish a special issue later in 2020 exploring contract grading. This is an emerging assessment area that holds great potential, and we have an obligation to investigate the approach through the lens of the Standards’ definition of validity: to consider the use and interpretation of scores in particular settings. The special issue is being co-edited with Asao B. Inoue.

Finally, we would like to acknowledge the many individuals and institutions that support our work. We are grateful for the generosity and expertise of our reviewers:

Christopher Dean, University of California, Santa Barbara

Merideth Garcia, University of Wisconsin, La Crosse

Steve Graham, Arizona State University

Richard Haswell, Texas A&M University, Corpus Christi

Aja Martinez, Syracuse University

Maureen McBride, University of Nevada, Reno

Jill McClay, University of Alberta

Doug McCurry, Australian Council for Educational Research

Katrina Miller, University of Nevada, Reno

Patricia Portanova, Northern Essex Community College

Aparna Sinha, California State University Maritime

Sherry Swain, National Writing Project

Christie Toth, University of Utah

Amy Vidali, University of California, Santa Cruz

Sara Cushing Weigle, Georgia State University

We also appreciate the generosity and hard work of our editorial team:

Tialitha Macklin, Associate Editor and JWA Reading List Editor, Boise State University

Gita DasBender, Assistant Editor, New York University

Katherine Kirkpatrick, Assistant Editor, Copyediting, Clarkson College

Stephen McElroy, Assistant Editor, Production, Babson College

Johanna Phelps-Hillen, Digital Archivist and JWA Reading List Editor, Washington State University, Vancouver 

Mathew Gomes, Indexer and Social Media Coordinator, Santa Clara University

Stacy Wittstock, Editorial Assistant, University of California, Davis

Skyler Meeks, JWA Reading List Editorial Assistant, Utah Valley University

We could not publish the Journal of Writing Assessment or the Journal of Writing Assessment Reading List without their dedication and support.

We also appreciate the continued financial support of the College of Letters, Arts, and Social Sciences and the English Department at the University of Idaho. This support ensures that JWA remains an independent journal that publishes scholarship by and for teachers and scholars of writing assessment. We thank the University Writing Program at the University of California, Davis, for its support of the JWA Reading List. We note that open-access publishing has largely evolved into a pay-to-publish enterprise; that is, journals and publishing houses are shifting the cost of publishing scholarship to individual scholars and/or their institutions. Our institutions recognize the importance of this work, and no author is required to pay to publish in JWA.