Volume 5, Issue 1: 2012

Afterword: Volume 5, 2012

by Diane Kelly-Riley and Peggy O'Neill, Editors

Traditionally, editors write an introduction to each issue of a journal, linking articles, identifying themes, and explaining what unifies the volume. However, because our volumes evolve organically with articles being published online as they are deemed ready through the peer-review process, that traditional approach doesn't work. Instead, we have opted for an Afterword, an opportunity to look back, to see what links and themes have emerged in the volume.

Volume 5 marks the first full online volume that we compiled as editors. In 2001, Brian Huot and Kathleen Yancey started the Journal of Writing Assessment, stating, "we believe that the academics who see the need for a journal, who write, research, and edit the articles should make the decisions about who should edit and retain control [of the journal]" (pp. 1-2). We've remained committed to the independent spirit that inspired Huot and Yancey in starting this journal: one that is by and for scholars, and whose publication agenda remains in the hands of our community of teachers and scholars.

We've also felt that another mission of JWA is to document emerging methodologies and best practices relevant to the writing assessment community, and to highlight models of inquiry that can expand the ways in which our field conducts scholarship. Huot and Yancey explained that JWA would bring a "commitment to publish a wide range of writing assessment scholarship for a wide range of scholars and teachers" (p. 2). We have remained committed to promoting this diversity of scholarship while making it available to everyone interested in the topic through our decision to publish JWA as an online, open-access journal. This first volume remains true to the founding editors' commitment while also bringing our own perspective to it.

A look back at Volume 5 shows the variety and range of current writing assessment scholarship.

The first article links writing centers and student learning. In "The Empirical Development of an Instrument to Measure Writerly Self-Efficacy in Writing Centers," Katherine Schmidt and Joel Alexander take an innovative approach to studying the effects of writing center tutoring on students' belief in their own writing abilities. Schmidt and Alexander acknowledge how difficult it is to articulate outcomes for a writing center, a setting that is decidedly process-oriented. As a result, they examine students' perceptions of their own self-efficacy as writers to gauge the impact of writing center tutoring on students' work. The piece also describes a new empirical tool, the Post-Secondary Writerly Self-Efficacy Scale (PSWSES), which "focuses more on self-efficacy beliefs related to writing than on confidence in writing skills because the focus is, and should remain, on assessing the internal construct and not the skill, which varies uncontrollably from assignment to assignment and from discipline to discipline." Their foray into empirical inquiry begins important work: having theorized and initially validated the scale, Schmidt and Alexander make it possible for the PSWSES to be used and further validated as a research tool in other writing center settings.

In "College Students' Use of a Writing Rubric: Effect on Quality of Writing, Self-Efficacy, and Writing Practices," Amy Colvill looks at the usefulness of rubrics--examining how the use of a rubric effects students' "writing beliefs, practices, and performance." She finds that while the rubrics help articulate language for students and faculty, they don't seem to have any significant effect on students' performances. Likewise, Colvill's project highlights research conducted within a classroom, an important site for writing assessment practice and research. Again, Colvill's use of quasi-experimental design provides a model for others to use and build upon her findings.

Jeff Sommers offers another approach to classroom-based writing assessment inquiry. His article, "Response Rethought... Again: Exploring Recorded Comments and the Teacher-Student Bond," compares the benefits of audio comments with those of written comments on students' writing. Sommers notes that a prevailing metaphor for responding to student writing is the conversation. Through his systematic inquiry into his own audio responses to students, he argues for moving beyond the metaphor: actually carrying on a conversation with students through spoken commentary about their writing is much more effective than written comments. At their core, recorded comments offer the opportunity to extend classroom instruction and go beyond an assessment of the student's work. Sommers' project not only examines the features of oral commentary but also documents the different types of comments recorded commentary makes possible and shows that these oral comments are significant. Sommers employs innovative research methodology, both in his comparative approach to previous work on audio commentary and in a small-scale study of his own practices. Again, this provides the field with a model to explore further in different settings.

The next article moves beyond a single classroom and a single research site. Chris Anson, Deanna Dannels, Pamela Flash, and Amy Housley Gaffney argue for the importance of contextually based rubrics and evaluation methods and call for the "abandonment of generic rubrics." They highlight two models of contextually based evaluation: the Writing Enhanced Curriculum at the University of Minnesota and North Carolina State University's assessment of oral communication genres. Both models demonstrate the importance of discipline-specific evaluation criteria that are attentive to the desired competencies defined within particular disciplinary contexts. Such time-intensive efforts are essential because "generic, all-purpose criteria for evaluating writing and oral communication fail to reflect the linguistic, rhetorical, relational and contextual characteristics of specific kinds of writing or speaking that we find in higher education." The authors provide in-depth case studies of their respective institutions and highlight the benefits and outcomes possible through a locally situated assessment effort, one that is attentive to disciplinary and institutional context. Their work provides substantial food for thought as much of literacy education moves toward rubric-based assessment of learning outcomes.

Complementing the research-based essays are two additional pieces: a book review and an annotated bibliography. Tialitha Macklin's review of Reframing Writing Assessment to Improve Teaching and Learning highlights the value of reframing for practitioners in writing assessment, particularly those who are new to "formal dialogues surrounding writing assessment." Macklin's review specifically emphasizes the book's usefulness for people who are beginning careers in writing assessment and writing program administration.

The last piece in Volume 5 is "An Annotated Bibliography of Writing Assessment: Machine Scoring and Evaluation of Essay-Length Writing," compiled by Richard Haswell, Whitney Donnelly, Vicki Hester, Peggy O'Neill, and Ellen Schendel. The annotated bibliography gathers important research on how automated essay scoring works as well as critiques of the machine scoring of student essays. As the assessment of student writing via computers becomes more of a reality through assessments tied to the Common Core State Standards, it is important that people who work in writing assessment have a solid base of information about this mode of assessing writing.

Looking back over Volume 5, and the two years it took us to get the journal online and complete this first volume, we are grateful to the many, many colleagues who supported us in this new adventure. Special thanks go to all of the authors who submitted their work to JWA; without them, we would not have been able to complete this volume. Washington State University supported Diane's efforts with time and resources. Jessica Nastal of the University of Wisconsin-Milwaukee, our associate editor, has been an invaluable part of our success over this first year. Likewise, our appreciation goes to Rachel Barouch Gilbert, formerly of Washington State University, and Tialitha Macklin, Washington State University, for serving as editorial assistants. Finally, we extend our sincere thanks to the following colleagues who served as reviewers of manuscripts for this volume:

Chris Anson, North Carolina State University
Arthur Applebee, University at Albany-SUNY
Bob Broad, Illinois State University
Carolyn Calhoon-Dilahunt, Yakima Valley Community College
Susan Callahan, Northern Illinois University
Norbert Elliot, New Jersey Institute of Technology
Chris Gallagher, Northeastern University
Steve Graham, Arizona State University
Susan Marie Harrington, University of Vermont
Richard Haswell, Texas A&M University-Corpus Christi
Robert Land, California State University, Los Angeles
Neal Lerner, Northeastern University
Susan McLeod, University of California, Santa Barbara
Michael Moore, Georgia Southern University
Sandra Murphy, University of California, Davis
Ellen Schendel, Grand Valley State University
Tony Scott, University of North Carolina, Charlotte
Carol Severino, University of Iowa
Tony Silva, Purdue University
Jeff Sommers, West Chester University
Terry Underwood, California State University, Sacramento
David Wallace, University of Central Florida
Edward Wolfe, Pearson
Terry Zawacki, George Mason University