Volume 11, Issue 1: 2018

Editors’ Introduction (Fall 2018)

by Diane Kelly-Riley and Carl Whithaus


This latest volume of the Journal of Writing Assessment (JWA) examines how assessment, response, and feedback work within both high school and college settings. Our authors explore topics including the effects of teacher response on students; faculty emotional responses to student writing within programmatic assessments; the observed growth of undergraduate writers; and ways to understand writing assessment through both theoretical lenses and mixed methods approaches. These articles demonstrate the wide-ranging impact of writing assessment research across venues.

As editors, we remain committed to keeping this scholarship accessible to a variety of stakeholders rather than behind paywalls. Whether an article examines classroom-, institutional-, state-, or national-level writing assessment issues, it is crucial that this work be available to researchers and accessible to the general public. JWA is committed to an open-access model of peer-reviewed, scholarly publication.

In July 2018, we were reminded of the reach of scholarship published in JWA by the publication of “Automated Scoring Remains an Empty Dream” in Forbes. The ongoing debate about automated writing evaluation (AWE) had moved from scholarly venues into a wider forum. In Forbes and other news sources, a discussion emerged about the limitations of artificial intelligence (AI) for supporting AWE. Writing in Forbes, Peter Green noted that the challenges to AWE had largely been identified in Les Perelman’s work, published by JWA in 2013. We were delighted by the recognition our authors received and by the ways in which JWA research contributed to a wider understanding of the debates about software as an assessment and response tool for student writers.

Volume 11 continues this focus on public engagement for writing assessment scholarship. In the first article, “Write Outside the Boxes: The Single Point Rubric in the Secondary ELA Classroom,” Jenna Wilson, an English teacher at Westville High School in Westville, IL, explores the use of a single point rubric as a way to provide teacher agency within strict local and state-mandated assessment programs. She argues that the single point rubric

combines the formulaic and time-saving components of rubrics with the differentiated and individualized components of narrative response and grading via detailed feedback…[and it is] a tool that has largely been neglected for teachers who desire to individualize writing assessment while remaining concise and efficient.

Wilson’s article suggests the value of localized writing assessments in ways that could be taken up further by writing assessment researchers and by education policy makers at the state level.

Continuing the focus on writing assessment in the classroom, Darsie Bowden considers how students responded to instructor comments on academic papers written within a first-year college writing program. In “Comments on Student Papers: Student Perspectives,” Bowden acknowledges that her research began out of frustration while working with first-year writing instructors and trying to explain how undergraduate students receive faculty comments. She wondered how the instructors’ comments actually affected students. In her carefully researched piece, Bowden notes that students welcomed instructor comments, were eager to figure out how to act on them, and were, for the most part, particularly grateful for substantive comments. Her work adds to our understanding of response and feedback issues as well as to our knowledge base about writing assessment, and it highlights the ways in which assessment can be in dialogue with issues of pedagogy and learning.

While it is essential to understand the classroom and the dynamics between instructors and students within a course, writing assessment also helps gauge the effectiveness of educational programs. In “Helping Faculty Self-Regulate Emotional Responses in Writing Assessment: Use of an ‘Overall Response’ Rubric Category,” Michelle E. Neely examines the tension between involving faculty in ongoing assessment work for programmatic purposes and faculty members’ emotional responses to student writing, especially when the writing is not by their own students. Programmatic assessment settings present different challenges for faculty raters because the work is not connected to their own classrooms. Neely argues that these “assessment contexts provide a different challenge for us to understand and explore faculty members’ emotional responses.” In her article, Neely reviews work on teacher beliefs and their influence on practice, on strategies used to provide scorer support within group contexts, and on the influence of teachers’ emotions on evaluation tasks. She reports that adding an ‘overall response’ category to the rubric may ease some of the challenges that faculty scorers face. Neely’s work with scorers’ emotional responses contributes to ongoing discussions about programmatic assessment through the concrete development of a rubric category.

Understanding how programmatic assessment functions is crucial if we are to measure the impacts of curriculum on student learning, particularly when a college or university wants to consider how students transfer knowledge about writing from one area, or one level, of their curriculum to another. In “Argument Essays Written in the 1st and 3rd Years of College: Assessing Difference in Performance,” Irene Clark and Bettina Huber take up this challenge by focusing on ways to measure writing performance and transfer between the first and third years of an undergraduate curriculum at a large, urban, Hispanic-serving institution. Their work applies recent scholarship on transfer to thesis-driven writing done within their institution’s Learning Habits Project, a decade-long study that tracked the performance of newly enrolled undergraduates. Clark and Huber’s work contributes not only to writing assessment but also to the large body of research on transfer within writing studies.

In “Slouching Toward Sustainability: Mixed Methods in the Direct Assessment of Student Writing,” Jeff Pruchnic, Chris Susak, Jared Grogan, Sarah Primeau, Joe Torok, Thomas Trimble, Tania Foster, and Ellen Barton also consider how writing assessment works at the level of a writing program. They describe a pilot of a mixed methods approach to writing assessment that uses both quantitative and qualitative methodologies to capture writing performance within their composition program. They assert their approach “(a) directly assess[es] a representative sample of student writing with excellent reliability, (b) significantly [reduces] total assessment time, and (c) [preserves] the autonomy and contextualized quality of assessment sought in current definitions of validity.” Pruchnic et al.’s work speaks to programmatic assessment issues in ways that complement the research on scorers’ emotions in Neely’s article and on knowledge transfer in Clark and Huber’s article. Together, these pieces show how assessment research is applied at the program level.

Extending the theoretical work in JWA’s Special Issue on a Theory of Ethics for Writing Assessment (2016, 9.1), Joshua Lederman applies Michael Kane’s recent theories of argument-based validation to the ways in which writing is theorized in a post-process era and argues that researchers, teachers, and assessment developers need to attend more closely to concerns about social justice and equity. In “Writing Assessment Validity: Adapting Kane’s Argument-Based Validation Approach to the Assessment of Writing in the Post-Process Era,” Lederman explores the implications of the 2014 edition of the Standards for Educational and Psychological Testing. The new edition elevates validity as a primary concern for developing and evaluating tests, along with concerns about impacts on different test takers. Lederman argues that testing is a distinct context from writing assessment and that Kane’s work provides an important bridge between the two. Developing an understanding of how assessment and testing may occupy different spheres is a vital conversation for writing assessment researchers and composition studies experts to have.

Finally, in “Recommendations or Choices? A Review of Decisions, Agency, and Advising,” Kristen di Gennaro reviews Tanita Saenkhum’s Decisions, agency, and advising: Key issues in the placement of multilingual writers into first-year composition courses (Logan, UT: Utah State University Press, 2016). Di Gennaro observes,

this book is a timely addition to conversations about writing placement for multilingual learners in US higher education...Saenkhum proposes complementing standardized test scores with increased student agency, and her book provides an effective case study and materials illustrating how such integration can be implemented.

Saenkhum’s Decisions, agency, and advising and di Gennaro’s review highlight the need to examine writing assessments and their uses. As the most recent edition of the Standards for Educational and Psychological Testing (2014) makes clear, assessment systems need to consider the uses to which tests are put: an assessment system needs to strive for validity, reliability, and fairness in use as well as in theory. Di Gennaro’s and Saenkhum’s works show how that does, and does not, happen within a context that includes students and advisors as well as teachers and test developers.

As ever, the Journal of Writing Assessment relies on a large team to bring you this excellent scholarship. We are indebted to the JWA Editorial Team: Associate Editors Jessica Nastal-Dema of Prairie State College and Tialitha Macklin of Boise State University, who have also assumed editorial leadership of the JWA Reading List, which highlights emerging writing assessment scholarship. We are grateful to Assistant Editor Gita Das Bender of New York University for her coordination of reviews and reviewers; Digital Archivist Johanna Phelps-Hillen of Washington State University Vancouver for her work organizing and archiving JWA’s extensive files; Social Media Coordinator and Indexer Mathew Gomes of Santa Clara University for his work communicating with external audiences; and Technology Coordinator Stephen McElroy of Florida State University for his work producing and publishing JWA articles. We are also grateful for the detail-oriented and careful work of our Editorial Assistants, Stacy Wittstock of the University of California, Davis; Katherine Kirkpatrick of Clarkson College; and Skyler Meeks of Utah Valley University. All of these positions are voluntary, and we are grateful for the generous donation of their time and expertise.

JWA relies on legions of reviewers who carefully read and responded to many manuscripts this year. As editors, we are grateful for their generosity with their expertise and time, and we are thankful for their service contributions to the field. Reviewers this year included:

Jacob Babb, Indiana University Southeast

Laura Ballard, Center for Applied Linguistics

Khalid Barkaoui, York University

Bob Broad, Illinois State University

Chris Gallagher, Northeastern University

Aline Godfroid, Michigan State University

James Hammond, University of Michigan

Hogan Hayes, California State University, Sacramento

Rebecca Howell, Charleston Southern University

Emily Isaacs, Montclair State University

Kate McConnell, AAC&U

Jessie Moore, Elon University

Irvin Peckham, Drexel University

Barbara Roswell, Goucher College

Clay Spinuzzi, University of Texas

Maja Wilson, University of Maine

Carolyn Wisniewski, University of Illinois Urbana-Champaign

Finally, we would like to thank the College of Letters, Arts, and Social Sciences and the English Department at the University of Idaho for their continued financial support of the Journal of Writing Assessment. This financial support ensures that JWA remains an independent journal that publishes scholarship by and for teachers and scholars of writing assessment. We would also like to thank the University Writing Program at the University of California, Davis for its support of the JWA Reading List.