Volume 2, Issue 1: Spring 2005

Introduction

by Brian Huot

Rhetorical Writing Assessment: The Practice and Theory of Complementarity

by Bob Broad and Michael Boyd

Writing portfolio assessment and communal (shared, dialogical) assessment are two of our field's most creative, courageous, and influential innovations. Because they are also relatively expensive innovations, however, they remain vulnerable to cost-cutting by university administrators and to attacks from testing corporations. This article lays a theoretical foundation for those two powerful and valuable practices in teaching and assessing writing. Building on the concept of "complementarity" as developed in the fields of quantum physics (Bohr; Kafatos & Nadeau) and rhetoric (Bizzell) and adapted for educational evaluation (Guba & Lincoln), we provide some of the "epistemological basis," called for by Huot, on which portfolio and communal assessment are based and by which those practices can be justified. If we must look to science to validate our assessment practices (and perhaps we must), we should not settle for outdated theories of psychometrics that support techniques like multiple-choice testing. Instead, from more recent scientific theorizing we can garner strong support for many of our best practices, including communal and portfolio assessment. By looking to the new science--including the new psychometrics (Cronbach, Moss)--we can strengthen and protect assessment practices that are vibrantly and unapologetically rhetorical.

The Misuse of Writing Assessment for Political Purposes

by Edward M. White

This article focuses on the political dimensions of writing assessment, outlining how various uses of writing assessment have been motivated by political rather than educational, administrative, or professional concerns. Focusing on major purposes for writing assessment, this article examines state-mandated writing assessments for high school students, placement testing for incoming college students, and upper-class college writing assessments such as rising junior tests and other exit measures that are supposed to determine whether students can write well enough to be granted a college degree. Each of these assessments represents a gate through which students must pass if they are to gain access to the privileges and enhanced salaries of college graduates, and so they carry a particular social weight along with their academic importance. In other words, each of these tests carries significant consequences, or high stakes. According to the most recent and informed articulations of validity, each of the cases examined in this article requires increased attention to the decisions being made and the consequences for students, teachers, and educational institutions. In each case, this article addresses the political reasons why these assessments are set in motion and points to the inner contradictions that make it quite impossible for them ever to accomplish their vaguely stated purposes.

Uncovering Raters' Cognitive Processing and Focus Using Think-Aloud Protocols

by Edward W. Wolfe

This article summarizes the findings of a series of studies that attempt to document cognitive differences between raters who rate essays in psychometric, large-scale direct writing assessment settings. The findings from these studies reveal differences both in what information raters consider and in how that information is processed. Examining raters according to their ability to agree on identical scores for the same papers, this article demonstrates that raters who exhibit different levels of agreement in a psychometric scoring system approach the decision-making task differently and consider different aspects of the essay when making that decision. The research summarized here is an initial step in understanding the relationship between rater cognition and performance. It is possible that future research will enable us to better understand how these differences in rater cognition come about, so that those who administer rating projects will be better equipped to plan, manage, and improve the processes of rater selection, training, and evaluation.

Understanding Student Writing--Understanding Teachers' Reading, a review of Lad Tobin, Reading Student Writing: Confessions, Meditations, and Rants

by Anthony Edgington

Let me begin with what Brian Huot has called a rather "simple argument": If an instructor wishes to respond to student writing, he or she must read that piece of writing first. I imagine (or, at least, strongly hope) that most readers would agree with this statement. If there is an instructor who has developed a method of response that does not involve reading, I would be curious to hear about the success of such a method.

An Annotated Bibliography of Writing Assessment: Reliability and Validity, Part 2

by Peggy O'Neill, Michael Neal, Ellen Schendel and Brian Huot

In this, our third installment of the bibliography on assessment, we survey the second half of the literature on reliability and validity. The works we annotate focus primarily on the theoretical and technical definitions of reliability and validity--and in particular, on the relationship between the two concepts. We summarize psychometric scholarship that explains, defines, and theorizes reliability and validity in general and within the context of writing assessment. Later installments of the bibliography will focus on specific sorts of assessment practices and occasions, such as portfolios, placement assessments, and program assessment--all practices for which successful implementation depends on an understanding of reliability and validity.