Volume 7, Issue 1: October 2014

Afterword

by Peggy O'Neill and Diane Kelly-Riley

Review Essay: Paul B. Diederich? Which Paul B. Diederich?

by Rich Haswell

Robert L. Hampel’s 2014 edited collection of pieces by Paul Diederich, most of them unpublished, casts Diederich in a new light. The articles, reports, and memoranda reveal him and his work in writing assessment as deeply progressive, in both the educational and the political sense. They call for a reinterpretation of his factoring of reader judgments (1961), his analytical scale for student essays (1966), and his measurement of student growth in writing (1974). The pieces also depict Diederich as an intricate and sometimes conflicted thinker who always saw school writing performance and measurement in psychological, social, and ethical terms. He remains relevant today, especially for writing assessment specialists wrestling with current issues such as the testing slated for the Common Core State Standards.

Linguistic microfeatures to predict L2 writing proficiency: A case study in Automated Writing Evaluation

by Scott A. Crossley, Kristopher Kyle, Laura K. Allen, Liang Guo, & Danielle S. McNamara

This study investigates the potential for linguistic microfeatures related to length, complexity, cohesion, relevance, topic, and rhetorical style to predict L2 writing proficiency. Computational indices were calculated by two automated text analysis tools (Coh-Metrix and the Writing Assessment Tool) and used to predict human essay ratings in a corpus of 480 independent essays written for the TOEFL. A stepwise regression analysis indicated that six linguistic microfeatures explained 60% of the variance in human scores for essays in a test set, providing an exact accuracy of 55% and an adjacent accuracy of 96%. To examine the limitations of the model, a post-hoc analysis was conducted to investigate differences in the scoring outcomes produced by the model and the human raters for essays with score differences of two or greater (N = 20). Essays scored high by the regression model but low by human raters contained more word types and perfect tense forms than essays scored high by humans and low by the regression model. Essays scored high by humans but low by the regression model showed greater coherence, syntactic variety, syntactic accuracy, idiomaticity, vocabulary range, and spelling accuracy, as well as stronger word choices, than essays scored high by the model but low by humans. Overall, findings from this study provide important information about how linguistic microfeatures can predict L2 essay quality for TOEFL-type exams and about the strengths and weaknesses of automatic essay scoring models.
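
The headline numbers in this abstract rest on two agreement metrics: exact accuracy (the model's rounded prediction equals the human score) and adjacent accuracy (the prediction falls within one point of it). As a minimal sketch only, using synthetic data and made-up feature columns rather than the study's Coh-Metrix or Writing Assessment Tool indices, and omitting the stepwise selection step, the Python snippet below shows how a linear model of this kind and its exact and adjacent agreement might be computed.

```python
# Minimal sketch: predict human essay ratings from synthetic "microfeature"
# columns with ordinary least squares, then report exact and adjacent agreement.
# All data here are fabricated for illustration; nothing comes from the study.
import numpy as np

rng = np.random.default_rng(0)

n_essays = 480
features = rng.normal(size=(n_essays, 6))          # hypothetical microfeatures
true_weights = np.array([1.2, 0.8, 0.5, 0.3, -0.4, 0.2])

# Synthetic human scores on a 1-5 scale, loosely driven by the features.
human_scores = np.clip(
    np.round(3.0 + 0.4 * (features @ true_weights)
             + rng.normal(scale=0.5, size=n_essays)),
    1, 5,
)

# Simple train/test split (stepwise feature selection is omitted here).
train, test = slice(0, 380), slice(380, None)
X_train = np.column_stack([np.ones(380), features[train]])
coef, *_ = np.linalg.lstsq(X_train, human_scores[train], rcond=None)

X_test = np.column_stack([np.ones(n_essays - 380), features[test]])
predicted = np.clip(np.round(X_test @ coef), 1, 5)

exact = np.mean(predicted == human_scores[test])                  # same score
adjacent = np.mean(np.abs(predicted - human_scores[test]) <= 1)   # within 1 point
print(f"exact accuracy: {exact:.2f}, adjacent accuracy: {adjacent:.2f}")
```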

Language Background and the College Writing Course

by Jonathan Hall

In an era of growing linguistic diversity, assessment of all college writing courses needs to include a focus on multilingual equity: How well does the course serve the needs of students with varying language backgrounds and educational histories? In this study, an Education and Language Background (ELB) survey was developed around a scale measuring divergence from the default assumption of college students as U.S.-educated monolingual English speakers. The survey data were used in the assessment of a junior-level college writing course by correlating student ELB scores with writing sample scores. On the pre-test, multilingual students and immigrants educated in non-U.S. systems scored significantly lower, but by the post-test this effect had disappeared, suggesting that junior-level writing instruction may be especially useful to such students. Survey data also revealed important language and education differences between students who began their careers at College Y in first-year composition and those who transferred in later. Students’ language background information should be routinely collected at the beginning of each course with an instrument, such as the ELB, that systematically quantifies student language identity through multiple questions. Doing so permits both a nuanced portrait of how multilinguality interacts with student writing proficiency and the development of differentiated instruction strategies.
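
As a rough illustration only, the snippet below uses entirely synthetic numbers rather than the study's ELB instrument or writing sample scores to sketch the correlational step the abstract describes: each student receives a single hypothetical ELB divergence index, which is then correlated with pre- and post-test writing scores, with the fabricated data built so that the pre-test association fades by the post-test, mirroring the reported pattern.

```python
# Minimal sketch with fabricated data: correlate a hypothetical ELB divergence
# index with pre- and post-test writing scores. Not the study's instrument.
import numpy as np

rng = np.random.default_rng(1)
n_students = 200

# Hypothetical ELB index: 0 = U.S.-educated monolingual English speaker,
# higher values = greater divergence from that default profile.
elb_index = rng.integers(0, 10, size=n_students).astype(float)

# Synthetic writing scores: a pre-test effect of language background that
# has washed out by the post-test, echoing the pattern the abstract reports.
pre_scores = 6.0 - 0.15 * elb_index + rng.normal(scale=1.0, size=n_students)
post_scores = 7.0 + rng.normal(scale=1.0, size=n_students)  # no ELB effect

r_pre = np.corrcoef(elb_index, pre_scores)[0, 1]
r_post = np.corrcoef(elb_index, post_scores)[0, 1]
print(f"ELB vs. pre-test r = {r_pre:.2f}, ELB vs. post-test r = {r_post:.2f}")
```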

Dynamic Patterns: Emotional Episodes within Teachers’ Response Practices

by Nicole I. Caswell

Responding to student writing is one activity in which teachers’ emotions become relevant, yet little scholarship directly discusses emotion as a component of teachers’ responses to student writing. This article brings together scholarship on emotion, survey results, and narrative descriptions of two teachers to suggest a relationship between emotion and response: a dynamic, recursive episode pattern of values, triggers, emotions, and actions. The results of 146 surveys of writing teachers reporting on emotions in their response practices provide contextual grounding for a closer examination of the interrelated emotional episode of one teacher, Brittney. An awareness of the emotional episode of response promotes reflection and acts as a catalyst for teachers to think about their teacherly identity.