Past Issues

Volume 9, Issue 2: 2016

Volume 8, Issue 1: 2015

Volume 7, Issue 1: 2014

Volume 6, Issue 1: 2013

Volume 5, Issue 1: 2012

Volume 4, Issue 1: 2011

Volume 3, Issue 2: Fall 2007

Volume 3, Issue 1: Spring 2007

Volume 2, Issue 2: Fall 2005

Volume 2, Issue 1: Spring 2005

Volume 1, Issue 2: Fall 2003

Volume 1, Issue 1: Spring 2003


Volume 9, Issue 2: 2016

Globalizing Plagiarism & Writing Assessment: A Case Study of Turnitin

by Jordan Canzonetta and Vani Kannan, Syracuse University

This article examines the plagiarism detection service Turnitin.com’s recent expansion into international writing assessment technologies. Examining Turnitin’s rhetorics of plagiarism alongside scholarship on plagiarism detection illuminates the company’s efforts to globalize definitions of and approaches to plagiarism. If Turnitin succeeds in advancing its positions on plagiarism, its products could be proffered as a global model for writing assessment. The proceedings of a Czech Republic conference partially sponsored by Turnitin demonstrate troubling constructions of the “student plagiarist.” They also reveal a binary model of west and nonwest that stigmatizes nonwestern institutions and students. These findings support ongoing attention to the global cultural work of corporate plagiarism detection and assessment.

Recognizing Multiplicity and Audience across the Disciplines: Developing a Questionnaire to Assess Undergraduates’ Rhetorical Writing Beliefs

by Michelle Neely, University of Colorado at Colorado Springs

How do students feel about expressing uncertainty in their academic writing? To what extent do they think about their readers as they compose? Understanding the enactment of rhetorical knowledge is among the goals of many rich qualitative studies of students’ reading and writing processes (e.g., Haas & Flower, 1988; Roozen, 2010). The current study provides a quantitative assessment of students’ rhetorical beliefs based on a questionnaire. It reports on (1) the development of the Measure of Rhetorical Beliefs (MRB) and (2) a demonstration of the measure’s construct validity and utility by comparing undergraduates’ rhetorical and epistemological beliefs, as well as their composing processes, across different majors. The MRB was administered to engineering, business, and liberal arts and science majors, along with the Inventory of Process in College Composition (Lavelle & Zuercher, 2001) and the Epistemological Belief Inventory (Schraw, Bendixen, & Dunkle, 2002). Findings suggest that rhetorical writing beliefs are a measurable construct distinct from, but related to, epistemological beliefs and composing practices, and that students from different majors may hold different rhetorical beliefs and composing practices. Implications for use of the MRB are discussed, including further validation of the instrument and its potential use for research, program evaluation, and instructional practice.

Multimodal Assessment as Disciplinary Sensemaking: Beyond Rubrics to Frameworks

by Ellery Sills, University of Nevada, Reno

This study argues that organizational studies scholar Karl Weick’s concept of sensemaking can help to integrate competing scales of multimodal assessment: the pedagogical attention to the purposes, motivations, and needs of composing students; the programmatic desire for consistent outcomes and expectations; and the disciplinary mandate to communicate collective (though not necessarily consensual) values to composition scholars and practitioners. It addresses an ongoing debate about the prevalence of common or generic rubrics in conducting multimodal assessment: while some scholars argue that multimodal assessment is compatible with common, and even print-oriented, programmatic rubrics, others insist that only assignment-specific, context-driven assessments can account for the rich diversity of multimodal processes and texts. Sensemaking theory, by contrast, calls for multimodal assessment efforts to attend to cross-programmatic and disciplinary frameworks: plastic, scalable assessment categories that can be adapted to local contexts. An analysis of current multimodal assessment research and practice demonstrates how emergent sensemaking frameworks are integrating global (cross-programmatic) and local (classroom- or assignment-specific) scales of assessment.

Keywords: sensemaking, multimodal assessment

Contract Grading in a Technical Writing Classroom: A Case Study

by Lisa M. Litterio, Bridgewater State University

The subjectivity of assessing writing has long been an issue for instructors, who carefully craft rubrics and other indicators of assessment while students grapple with understanding what constitutes an “A” and how to meet instructor-generated criteria. Prompted by student frustration with traditional grading practices, this case study of a 20-student technical writing classroom in the Northeast employed teacher-as-researcher observation and student surveys to examine how students collaborated to generate criteria for the quality of their writing assignments. The study indicates that although students perceive greater involvement in the grading process, they resist crafting criteria as a class and prefer traditional grading by an “expert,” considering it a normative part of the grading process. The study concludes with implications for integrating contract grading into the technical writing classroom.

Keywords: technical writing, contract grading, assessment, student feedback

ePortfolios: Foundational Measurement Issues

by Norbert Elliot, Alex Rudniy, Perry Deess, Andrew Klobucar, Regina Collins, and Sharla Sava

Using performance information obtained for program assessment purposes, this quantitative study reports the relationship of ePortfolio trait and holistic scores to specific academic achievement measures for first-year undergraduate students. Attention is given to three evidential categories: consensus and consistency evidence related to reliability/precision; convergent evidence related to validity; and score difference and predictive evidence related to fairness. Interpretative challenges of ePortfolio-based assessments are identified in terms of consistency, convergent, and predictive evidence. Benefits of these assessments include the absence of statistically significant differences in ePortfolio scores for race/ethnicity sub-groups. Discussion emphasizes the need for principled design and contextual information as prerequisites to score interpretation and use. The instrumental value of the study suggests that next-generation ePortfolio-based research must be alert to sample size, design standards, replication issues, measurement of fairness, and reporting transparency.

Keywords: ePortfolios, fairness, program assessment, reliability, validity

Editorial Board 2016