Volume 1, Issue 2: Fall 2003

Introduction

by Brian Huot

From a Former Editor and New Editorial Board Member

by Kathleen Blake Yancey

During the last ten years, it has been my pleasure and privilege to help found and edit two journals, first Assessing Writing and now, The Journal of Writing Assessment. The experience has taught me more than I anticipated: as an editor, you develop a view of the field that, I think, is simply impossible otherwise. And of course, in the process of editing, I've had the pleasure of meeting and working with many, many smart--and gracious--colleagues. I am in their debt.

Validity of Automated Scoring: Prologue for a Continuing Discussion of Machine Scoring Student Writing

by Michael Williamson

Writing assessment has developed along two separate lines, one centered in professional organizations for writing teachers and the other in professional organizations for the broader assessment community. As the controversy over automated scoring continues to develop, it is important for writing teachers and researchers to become fluent in the discourse of that broader community. Continuing to label its work as positivist and to ignore it will only produce a deepening sense of defeat as automated assessment is adopted more widely. On the other hand, an examination of the literature on educational assessment reveals a theoretical base that is quite consistent with the principles adopted by the writing assessment community.

The Politics of High-Stakes Writing Assessment in Massachusetts: Why Inventing a Better Assessment Model is Not Enough

by Dan Fraizer

What happens when government officials conspire with a national testing company to control literacy standards for teacher preparation students on a statewide level? This essay documents the politics of the Massachusetts teacher test story, focusing on the flawed process that produced a writing test without the participation and negotiation of stakeholders. I argue that, as a discipline, we need to learn to play politics better, faster, and with a strong commitment to promoting assessment models that are fairly negotiated. Writing professionals should organize to participate directly in good-faith discussions with powerful interests and to promote locally developed, decentralized assessment models.

Assessing Academic Discourse: Levels of Competence in Handling Knowledge From Sources

by Sarah Hauptman, Melodie Rosenfeld, and Rivka Tamir

Knowing how to handle knowledge from sources, the foundation of engaging in academic discourse, is a complex task that can cause college writers great difficulties. After investigating the literature, the authors found that the primary cause of these difficulties is students' lack of knowledge about what exactly is expected of them, namely, a lack of task representation. To clarify the task representation of dealing with sources, the authors isolated the relevant criteria and translated them into a rubric. The rubric focused on two areas: transformation of knowledge from single sources and integration of knowledge from multiple sources. Each area was divided into levels of competence. Fifteen college research reports were evaluated with the rubric. As expected, no student reached the highest, accomplished level of competence in handling knowledge from sources, with integration being the most challenging area. Nevertheless, based on anecdotal evidence from students, the rubric could be useful as a practical tool for clarifying an important part of academic discourse for college writers and their instructors.

Portfolios Across the Centuries, a review of Liz Hamp-Lyons and William Condon: Assessing the Portfolio

by Terry Underwood

An examination of the status and uses of writing portfolios in university writing programs at the close of the 20th century, Assessing the Portfolio was written out of the firsthand experiences of two writing program administrators (WPAs) who worked together in the mid-1980s at the University of Michigan, just as Belanoff and Elbow (1986) published their germinal piece on the demise of timed writing tests and the birth of university writing portfolios as exit measures at Stony Brook. That piece ushered in a period of profound interest in and attention to portfolios in the writing classroom. Pat Belanoff and Marcia Dickson's 1991 anthology Portfolios: Process and Product and Kathleen Blake Yancey's (1992) Portfolios in the Writing Classroom began a half decade or so of conferences and publications that helped establish the portfolio as a mainstay in the writing classroom and as a viable option for large-scale assessment as well.

An Annotated Bibliography of Writing Assessment

by Peggy O'Neill, Michael Neal, Ellen Schendel and Brian Huot

In this, our second installment of the bibliography on assessment, we survey the literature on reliability and validity, the first of a two-part series that will continue in the next issue of JWA. The works we annotate focus primarily on the theoretical and technical definitions of reliability and validity--and in particular, on the relationship between the two concepts. We summarize psychometric scholarship that explains, defines, and theorizes reliability and validity in general and within the context of writing assessment. Later installments of the bibliography will focus on specific sorts of assessment practices and occasions, such as portfolios, placement assessments, and program assessment--all practices for which successful implementation depends on an understanding of reliability and validity.