Volume 9, Issue 2: 2016

Editors' Introduction: Volume 9, Issue 2

by Diane Kelly-Riley and Carl Whithaus

We’re excited to close out 2016 with this collection of provocative and compelling articles that enrich our understanding of writing assessment through a variety of perspectives and approaches. This issue presents case studies of writing assessment in classrooms, in institutional contexts, and in settings that reach beyond national borders. The authors present new research that theorizes how we might extend both how and what we assess when we talk about writing. All of these articles challenge readers to reconsider the status quo of current mainstream assessment practices.

 

In the first article, “Globalizing Plagiarism and Writing Assessment: A Case Study of Turnitin,” Jordan Canzonetta and Vani Kannan critique the work of plagiarism detection services, like Turnitin, that move to capitalize on the capabilities of automated essay scoring. Such companies claim that plagiarism detection software can be modified to provide automated assessment of and feedback on student writing in ways that parallel the feedback of human instructors. Canzonetta and Kannan argue that the structures of such technologies will shape labor practices (as these technologies are marketed primarily as help for overworked teachers and adjuncts), reinforce narrow constructs of writing, and cement certain cultural conceptions of writing and authorship. They warn, “debates on assessment must attend to the definitions of plagiarism and authorship that are being implemented globally by Turnitin....As educators and researchers of writing, we must think deeply and critically about the links between automation, plagiarism, and assessment, and foreground the global implications of automated plagiarism and assessment protocols.” Canzonetta and Kannan argue that the constructs used by plagiarism detection services position these businesses to shape, and profit from, postsecondary writing settings. Previous research published in JWA presented early findings about the possibilities and limits of automated essay scoring, and some scholarship has emerged from the automated scoring deployed in Common Core State Standards assessments. Canzonetta and Kannan present a compelling call for additional work on the ways in which software development is now shaping not only scoring but also feedback and students’ work with sources.

 

Next, Michelle Neely describes her project to pilot and validate a tool for measuring rhetorical knowledge across disciplines in her article, “Recognizing Multiplicity and Audience across the Disciplines: Developing a Questionnaire to Assess Undergraduates’ Rhetorical Writing Beliefs.” While there has been qualitative work to account for students’ rhetorical writing beliefs, there has not been an empirically based methodology for measuring them. Neely’s project explores and validates the Measure of Rhetorical Beliefs as a way to track the rhetorical knowledge students take with them from one setting to the next. Such a tool supplements our understanding of the transfer of writing abilities across contexts. Neely also provides directions for researchers interested in expanding how we measure students’ rhetorical beliefs about writing.

 

In “Multimodal Assessment as Disciplinary Sensemaking: Beyond Rubrics to Frameworks,” Ellery Sills explores the ways in which multimodality can help evolve conceptualizations of writing assessment. Many in composition studies are both intrigued and challenged by the evolving multimodal work in our field, particularly where assessment is concerned. Sills takes up the question of how to manage the competing demands of assessment and the different dimensions opened up by multimodality. Sills argues that the “concept of sensemaking offers a possible first step to integrating pedagogical, programmatic, and disciplinary scales…[which] offer strategies to cultivate institutional ecologies of multimodal assessment.” This piece theorizes the possibilities of multimodal assessment and offers ways to think about multimodality and writing assessment in classroom practice.

 

Lisa M. Litterio’s “Contract Grading in a Technical Writing Classroom: A Case Study” adds to the emerging scholarship on alternatives to traditional grading practices, an area of growing interest. Much of the notable work on contract grading has been situated within developmental writing or first-year composition courses (see Danielewicz and Elbow; Inoue; Shor); Litterio argues that contract grading is also appropriate for technical writing courses. As a teacher-researcher, she explores and documents the challenges of adapting contract grading for her technical writing classroom. The article deftly chronicles the tensions and dynamics at play for students as they participate in this shared writing assessment system, and Litterio draws out implications for others considering contract grading in their own technical writing courses.

 

The final article in this issue is “ePortfolios: Foundational Measurement Issues” by Norbert Elliot, Alex Rudniy, Perry Deess, Andrew Klobucar, Regina Collins, and Sharla Sava. The authors make an important contribution to the scholarship on portfolio and ePortfolio assessment. Portfolios and ePortfolios can capture the complexity of student writing, but the scholarship has largely sidestepped empirical considerations. This article presents a thorough investigation of ePortfolio assessment on a university campus through the lens of three central empirical principles: reliability, validity, and fairness. Elliot et al. conduct their case study using the most recent definitions of these concepts articulated in the Standards for Educational and Psychological Testing (2014). For composition and writing studies scholars, it is important to know that empirical and psychometric concepts are dynamic and constantly undergo revision in light of emerging scholarship. Composition scholars of the late 1980s and early 1990s who advocated faculty-led writing assessment and portfolio assessment practices did much to change the ways in which the educational measurement community conceptualized principles for assessment. Elliot et al.’s piece closes this loop in many ways: it shows opportunities to understand the assessment of student writing in ePortfolios through empirical lenses, and it highlights challenges that lie ahead for our fields.

 

As ever, the Journal of Writing Assessment relies on a great team to bring you this excellent scholarship. We are indebted to the JWA Editorial Team: Associate Editors Jessica Nastal-Dema of Prairie State College and Tialitha Macklin of Sacramento State University; this year, we also welcomed our new editorial assistant, Stacy Wittstock of the University of California, Davis. We could not produce JWA without their hard work, humor, generosity, and intelligence. We would also like to thank the JWA Reading List Editors, David Bedsole of the University of Alabama and Bruce Bowles, Jr. of Texas A&M University-Central Texas. They are shaping and evolving the role that this platform can play in our review of writing assessment scholarship.

 

Additionally, we would like to thank the legions of reviewers who carefully read and responded to manuscripts over this year. JWA could not be successful without their generous gifts of time and expertise. We are very grateful for their service.

 

Doug Baldwin, ETS

Beth Buyserie, Washington State University

Sheila Carter-Tod, Virginia Tech

Kathy Charmaz, Sonoma State University

Dylan Dryer, University of Maine

Peter Elbow, UMass Amherst

Norbert Elliot, New Jersey Institute of Technology, University of South Florida

Dana Ferris, University of California, Davis

Jane Fife, Western Kentucky University

Brian French, Washington State University

Chris Gallagher, Northeastern University

Steve Graham, Arizona State University

Roger Graves, University of Alberta

Richard Haswell, Texas A&M University-Corpus Christi

Hogan Hayes, Sacramento State University

Asao Inoue, University of Washington, Tacoma

Jeffrey Jablonski, University of Nevada, Las Vegas

Karen Lunsford, University of California, Santa Barbara

Patricia Lynne, Framingham State University

Heidi McKee, Miami University of Ohio

Doug McCurry, Australian Council for Educational Research

Dan Melzer, University of California, Davis

Robert Mislevy, ETS

Michael Neal, Florida State University

Les Perelman, MIT

Mary Jo Reiff, University of Kansas

Rich Rice, Texas Tech University

Duane Roen, Arizona State University

Kevin Roozen, University of Central Florida

Tricia Serviss, Santa Clara University

Marlene Schommer-Aikins, Wichita State University

Peter Smagorinsky, University of Georgia

Stephanie Vie, University of Central Florida

Victor Villanueva, Washington State University

Juliet Wahleithner, Fresno State University

Carolyn Wisniewski, University of Illinois Urbana-Champaign

 

Finally, we would like to thank the College of Letters, Arts, and Social Sciences and the English Department at the University of Idaho for their continued financial support of the Journal of Writing Assessment. This financial support ensures that JWA remains an independent journal that publishes scholarship by and for teachers and scholars of writing assessment. We would also like to thank the University Writing Program of the University of California, Davis for its financial support of the JWA Reading List. We’re looking forward to bringing updated content and an updated platform to the JWA Reading List in 2017!



References

 

American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

 

Danielewicz, J., & Elbow, P. (2009). A unilateral grading contract to improve learning and teaching. College Composition and Communication, 61(2), 244-268.

 

Inoue, A. B. (2005). Community-based assessment pedagogy. Assessing Writing, 9(3), 208-238.

 

Shor, I. (1992). Empowering education: Critical teaching for social change. Chicago, IL: University of Chicago Press.