Volume 11, Issue 1: 2018

Comments on Student Papers: Student Perspectives

by Darsie Bowden, DePaul University

This paper reports on the results of a research project that examines how students responded to instructors’ comments on academic papers written for a first-year writing course at a large, Midwestern university. Data collected consisted of rough drafts with instructor comments, final drafts of the same papers, and two sets of interviews, one after students had received the teacher’s comments, and one after they had revised the final draft. In the interest of contributing to our understanding of what response to student writing does, this study explores student reflections on what they think, feel, and do in response to instructor comments. Findings suggest good reasons for the lack of one-to-one correspondence between an instructor’s comments and an improved final draft, and that we may need to look at factors other than revised drafts for evidence of student learning.

Keywords: teaching writing, responding, instructor comments, writing process, learning


As most theorists and practitioners in the field of teaching writing well know, there is a considerable body of scholarship on responding to student work, resulting in extensive resources (e.g., Straub, 1999) that have informed our approaches to response and are frequently used in teacher training and faculty development. And yet, given the staggering amount of time and effort that instructors spend responding to student work—that is, writing comments on student papers—we continually struggle to determine the effectiveness of those comments (e.g., Anson, 2012). Too often, it seems that, even when we provide students with the types of comments endorsed by experts, students make disappointingly few changes in the final draft. And, even when instructors recognize concrete changes in student drafts, these changes frequently seem to be inadequate responses to the feedback provided (only partially resolving the problems or focused entirely on minor corrections while major problems go unaddressed). To encourage significant revision, some instructors have resorted to requiring that the final draft differ substantially from the rough draft, grading the final draft not just on quality but on quantity of changes as well.

Extensive studies of the writing process, stretching from Emig’s (1968) Composing Process of 12th Graders to more recent work by composition theorists Haswell (2006), Prior and Shipka (2003), Prior (2016), N. Sommers (2000, 2005, 2006, 2012), Straub (1999), and Spinuzzi (2008), have amply demonstrated the multiple forces that inhere in the act of composing. Writing (and therefore learning to write), we know, is complex, often unpredictable, and multi-faceted. Haswell noted that, in the face of this complexity, we develop rubrics and checklists, symbols, and computer feedback software to try to make responding easier and more effective. He also pointed out that, in fact, compositionists have a long history of trying to find ways to manage the paper load (see pp. 10-14). But we have a conundrum: We have developed rubrics and “best practices” for commenting on student writing, we have worked to nest our commenting in classroom activities and contexts, and we try to individuate instruction through our comments, but it is not entirely clear if these strategies work or how they work. Prior (1995) and Prior and Shipka (2003) have argued that the trajectories from instruction to improvement are, in fact, not linear, making attempts to chronicle the effectiveness of comments difficult—both in determining what effectiveness means and what the impact on students is.

Further, Anson (2012), Berzsenyi (2001), Fife and O’Neill (2001), N. Sommers (2000), and others have pointed out that much of the scholarship on response has focused on the teachers’ perspective rather than that of their students. Some compositionists advocate trying to open a dialogue about responses with students (Dohrer, 1991; J. Sommers, 2012) to gauge students’ understanding of comments or to facilitate a more productive collaboration in resolving writing challenges. While this can be valuable in helping students and instructors clarify feedback and responses, it may not adequately capture what happens as students consider what is to be done to improve their writing.

The research project described in this paper was designed to address these concerns and my own frustrations in more than 30 years of responding to my students’ work in their first-year writing (FYW) courses and in training FYW instructors. How do I explain to other instructors what happens when students receive our comments? How do the comments work?

The study, then, addresses the following research questions: How do students understand and react to instructor comments? What influences students’ process of moving from teacher comment to paper revision? What comments do students ignore and why? The data are culled from the perspectives of a set of FYW students—specifically what students shared with us about their thinking, ideas, and goals between their receipt of a draft with comments and the submission of their final draft.

Literature Review

The work done on students’ perceptions of paper comments, while limited, has not been insignificant. Since her influential article in 1982 (“Responding to Student Writing”), N. Sommers (2006) has, in a longitudinal study, looked at the ways that Harvard students react to comments. She also produced two videos, Across the Drafts (N. Sommers, 2005) and Beyond the Red Ink: Teachers’ Comments through Students’ Eyes (N. Sommers, 2012), the latter featuring interviews with Bunker Hill Community College students talking about instructor comments. Students explained that they liked comments that were positive, that started a conversation rather than just told them what to do, and that shared models of appropriate responses to writing problems. N. Sommers (2012) added in the video's “User’s Guide to Beyond the Red Ink” that it’s important for instructors to discuss the purpose of comments, to work on global issues (but know when to go local) and to “anchor comments in the student’s writing” (pp. 5-7) and in what is going on in the classroom.

Studies of what students think about and do with comments have garnered considerable attention in a range of subfields of composition. Christiansen and Bloch (2016), Ferris (1995, 2003, 2006), and Hyland and Hyland (2006) have produced studies that looked at how second-language (L2) writers responded to teacher comments. Ferris (2006) reported that explicit feedback on usage for L2 writers (e.g., corrections on the paper) resulted in consistent and accurate improvements in the final draft, yet indirect feedback (a notation that there is an error in a sentence but leaving it up to the student to find and correct it) was more conducive to a student’s ability to catch and correct errors over time. Calhoon-Dillahunt and Forrest (2013), Dohrer (1991), Scrocco (2012), Straub (2000), Wingard and Geosits (2014), and Ziv (1984) have explored what students from specific academic communities (FYW, writing across the curriculum [WAC] classes, and community college) told them about how they responded to comments on their work. For instance, Calhoon-Dillahunt and Forrest (2013) examined how students at two-year colleges responded to teacher comments, concluding, among other things, that developmental students may need directive comments, but they actually valued substantive comments that “help[ed] them improve as thinkers and writers even though they struggled to apply that feedback” (p. 242). Scrocco’s (2012) study of four students from a community college attempted to measure relationships between teacher comments and student responses to draw conclusions about student voice, specifically that open-ended, “cooperative” suggestions from the instructor were preferable to critically directive, “closed” comments that tended to stifle student voices.
And Wingard and Geosits’s (2014) study of the impact of comments on students in WAC courses at a small liberal arts college suggested that, despite the disparity among instructors’ comments, substantive comments tended to produce more substantive revisions.

While studies like these provide valuable information on the student’s perspective, the emphasis is often limited to how students respond to written comments as those responses manifest themselves in changes in subsequent drafts. As Fife and O’Neill (2001) warned, much of the research still may be overly geared toward helping instructors improve upon their approaches to commenting, assuming that good comments can and do result in improved drafts. Fife and O’Neill pointed out, in their own study of two instructors and 10 of their students, that the process is not linear and looking for a one-to-one correspondence between a type of comment and a type of revision may be misguided. Many researchers (Auten, 1991; Christiansen & Bloch, 2016; Prior, 1995; Sperling & Freedman, 1987) have explored the impact of overlapping contexts within which comments are read and acted upon—through studies that examine the interactions between an undergraduate student and an instructor (Sperling & Freedman, 1987), a graduate student and an instructor (Prior, 1995), or through surveys of larger numbers of FYW students (Straub, 2000).


Local Context

The target population consisted of students in FYW classes at DePaul University, a large, four-year, private, Catholic university. The total university enrollment in 2014-2015 (the year of data collection) was around 23,800: 16,100 undergraduate students and 6,800 graduate students. Fall quarter 2014 had a first-year student enrollment of 2,400 students. Around 1,500 of these students took the FYW sequence (honors students or students who transferred in credits from other institutions were exempt from the requirement). The average class size of FYW classes in the program is 20 students (capped at 23).

The FYW courses at DePaul use the Writing Program Administration (WPA) Learning Outcomes (the version approved in 2000; see Appendix A) to drive curriculum and to help with program assessment and faculty development. In other words, coursework is designed to address rhetorical knowledge, critical reading and writing, composing processes, and knowledge of conventions. The culminating project for all students is a digital portfolio that features a reflective component—a reflective essay or reflective annotations—which invites students to comment on how the evidence in the portfolio meets the learning outcomes for the course sequence.

In terms of staffing, at least 90% of the writing faculty who teach in the FYW program at the institution are adjunct; for this project, four of the 13 instructors were full-time, non-tenure track, and nine were part-time. I recruited instructors who were teaching WRD (Writing, Rhetoric & Discourse) 103—the first course in the two-course sequence of FYW at DePaul, selecting instructors who had been teaching at DePaul for more than two years and had regularly attended biannual faculty development workshops that focused, among other things, on preferred methods of responding to student work.

Commenting in different modalities (hand-written comments, screen captures, video, and audio-taped responses) most likely results in different consequences for students (Anson, 1999, 2014; Cope, Kalantzis, McCarthy, Vojak, & Kline, 2011; Kim, 2004; Silva, 2012; N. Sommers, 2000). But, for this study, the instructors selected commented digitally on student papers using Microsoft Word comment and track-change functions (or similar) and did so regularly in their courses. The goal was to maintain some consistency in how the comments appeared to students and to eliminate challenges, for both students and researchers, associated with trying to read hand-written comments or with accounting for the affordances of different modalities of response.

Out of 50 instructors in our program, 15 were willing to invite researchers into their classrooms to recruit students. Of the 13 instructors selected, seven had graduate training in composition and rhetoric. Four held master’s degrees from our own department of WRD with a focus on writing pedagogy. Thus, all faculty participants had training and familiarity with the scholarship in the field and were encouraged to make use of similar pedagogical approaches in their courses: reading analysis, class discussion, peer review, and multiple drafts. All instructors used the program’s version of the WPA Outcomes Statement in student writing assessment.

Participants and Data Collection

Members of the research team (graduate teaching assistants from our department of composition and rhetoric) and I visited the instructors’ classrooms and—in the absence of the instructor—recruited student participants. Recruitment took place after the third week of classes to ensure that students were relatively familiar with the expectations and rhythms of the course. Most students were working on their second formal essay of the term. Students were offered $25 bookstore gift cards in exchange for submitting rough drafts with comments and final drafts for an upcoming assignment and participating in two 30-minute interviews. In some sections, we had up to seven student volunteers; in others, we had two or three. Ultimately, out of the 54 students who volunteered, 47 from 13 different sections successfully completed the data collection cycle. The demographic breakdown (diversity, age, commuter, transfer) was consistent with the first-year student enrollment at this university, representing a fairly broad spectrum of social and cultural diversity and a range of academic abilities. The study included students who were multilingual; one student who disclosed learning disabilities; and several students who had transferred from other institutions (see Appendix B for demographics). There were, however, significantly more female subjects (74%) than male subjects (24%). (The actual count for the freshman class in 2014-2015 was 58% female/42% male.)

While assignments for students varied across these sections, all were relatively short—between three and five pages—and involved argument and/or analysis. Assignments included position papers on campus issues, the value of education, and the impact of social networking; reviews of songs or films; and rhetorical analyses of advertisements or of a selected text. There were no personal narratives and no assignments that required extensive library research (the latter happens in the second course of the sequence). Also, even when assignments focused on specific tasks—often framed in terms of a genre—students were generally allowed latitude in the direction they could take within specified genres. Most assignments enabled students to pick a topic about which they were knowledgeable.

Interview Process

Students participated in the first interview within a few days of receiving papers with comments from their instructors. Students emailed copies of these drafts to their interviewer before their interview. In the first round of interviews, students were asked to share their understanding of each comment in the papers and explain what they planned to do in response to the comment (e.g., think about, edit, revise, ignore). When students had completed revising these papers and submitted their final drafts, they completed follow-up interviews (with rough and final draft in hand) in which they were asked to describe what they did in response to each comment, and why, and discuss what influenced how they revised. (See Appendix C for questions from Interviews 1 and 2.) Interviews and interview transcriptions were completed by the graduate assistants and writing center tutors. All transcripts were double-checked for accuracy by the principal investigator. Students also completed a survey in which they provided demographic data, information on their reading and writing backgrounds, and a self-assessment of their confidence about writing.

Theoretical Approach

The project relied on grounded theory methodologies to explore how (and to what degree) instructor comments impacted students, particularly in view of their moves to the final draft. Instead of beginning with a hypothesis to be proved, grounded theory methodology, developed by sociologists Barney Glaser and Anselm Strauss (1967), begins by raising generative questions. Then, in the course of observation (in this case, of student work, teacher comments, and student interviews), researchers formulate theories and hypotheses, which are then verified through analysis (coding, note-taking, and diagramming). I had numerous hypotheses about how students might respond to particular types of comments, largely based on existing scholarship on response; however, I wanted to try, at least initially, to avoid prejudgments and reliance on existing assumptions and to be open—as much as possible—to what students told us. Thus, grounded theory seemed to be the best methodological choice.


When the interviews were completed, all instructor and student identifiers were removed and each set (rough draft, final draft, Interview 1, and Interview 2) received a number. Then, the research team read through all the transcripts, and each researcher provided detailed notes on patterns, themes, and features for coding. At least two researchers read each interview.

After identifying emerging patterns, we reviewed coding methods for comment types and student responses from other studies (particularly Calhoon-Dillahunt & Forrest, 2013; Scrocco, 2012; Wingard & Geosits, 2014) whereupon we selected and adapted codes that seemed to capture what we saw in the interviews. Both instructor comments and student perceptions (what students shared in the interviews about their reactions) were coded using NVivo software, enabling adjustments in the codes as necessary to reflect our discussions and to easily store and access data.

We worked to establish inter-rater reliability through a process of collaborative coding. For early rounds, two different coders worked independently, then met in person to share and compare their coding, discussing where they agreed or disagreed and why. Eventually, we found it less time-consuming to code synchronously in pairs with each coder working independently on a section of the student interview, comparing codes, and working toward consensus. For example, distinguishing between instructor comments that were “meaning-changing” and those that dealt with lower-order concerns (having to do with mechanics) occasionally required discussion about what could be inferred as “meaning-changing.” This was sometimes influenced by what students told us about the context. One student, for instance, felt that the addition of a semi-colon suggested by the instructor—normally considered a lower-order concern—changed the meaning of the sentence; coders responded accordingly. Material that didn’t seem to fit the coding schemes was either noted and put aside or assigned a new code. When there wasn’t consensus, a third reader was brought in, but this happened only rarely given the collaborative nature of the coding by the experienced writing tutors/instructors who made up the team.

Codes for comments. Under the assumption that location and type of comment could impact type of response, we distinguished between comment types on the drafts, with instructor comments coded as follows:

  • in-draft: changes made by an instructor in the student’s text itself (e.g., deleting a word, adding a word or phrase, adding or deleting punctuation, re-spelling). These appeared as track changes and occasionally were highlighted in colors.
  • marginal: comments made in the margins using Word’s comment function.
  • end: comments, usually longer than marginal comments, that appeared at the end of the essay. Almost all instructors made use of end comments.

Instructor comments were then coded in two additional categories: whether they asked for or referred to surface-level changes (that is, involved “lower-order” concerns such as punctuation, spelling, typos, cross-outs, usage, word choice, font, formatting of headings or of quotations) or whether they were substance-level comments (any comment related to the meaning or message of the text). Because substance-level comments were by far more prevalent, we developed sub-codes to categorize types of substance comments (see Appendix D).

Codes for interviews: How students responded. Codes developed for Interview 1 (which took place soon after students had received the comments) and Interview 2 (after students completed the revisions on that paper) were designed to capture what students thought or felt as they talked to us about each comment before they had an opportunity to revise and what they planned to do (Interview 1) and then, in Interview 2, what they told us they actually did in revising. Types of student responses that appeared most frequently are listed in Appendix E.

Both interviews were coded using the same code book, with one exception. For Interview 2, we developed the following sub-codes to capture how the student conceived of revisions (or lack thereof). The sub-codes were independent changes (students made changes independent of the teacher’s comments), no change (students decided not to make changes; this could include cases where a student decided not to make a change after receiving praise), students’ language and ideas (students used their own language or ideas when making a change but still based it on a teacher’s comment), and teacher’s language and ideas (students decided to use the specific language/ideas suggested by the teacher—for example, verbatim use of the teacher’s wording).


Types of Instructor Comments

In-draft comments. While just under half of the rough drafts (22) received in-draft mark-ups on lower-order concerns (rewriting or correcting student prose—primarily punctuation or spelling changes), 25 drafts (53%) received no in-draft comments or markings at all. The paper with the most in-draft changes received 48 corrections, covering a range of mechanical issues (wording, syntax suggestions, punctuation).

Marginal and end comments. There were 455 marginal comments in the 47 rough drafts. Almost all instructors provided some kind of end comment, which summed up what the instructor thought needed to be done. Two students received a completed rubric in lieu of or in addition to a summative narrative.

Of course, instructors commented differently. Some commented more frequently than others (more marginal comments); some marginal and end comments were short, while others ran several paragraphs, covering a variety of different issues. Substance comments, however, were much more common than surface-level comments, appearing in all drafts. Because substance comments seemed so central to the commenting style of instructors in this study, we broke them down further (see Table 1). In addition to providing directions, the types of comments that appeared most often were those that explained (usually accompanying other types of comments), praised or were positive in nature, asked questions, offered possible solutions to writing problems, established personal rapport, were critical (could be construed as negative), indicated a grade or score, or suggested edits of substance (meaning-changing).

Table 1


Student Reactions to and Understanding of Comments

The following section of results, which constitutes the heart of the study, is organized according to the research questions: what students thought about after receiving the comments, and then what influenced their composing (in other words, what affected how or whether they revised). Because what students chose to ignore or address (the final research question) seemed dependent upon their reactions, understanding, and influences, those results are incorporated into the first two categories.

Most students in this study read instructors’ comments and took them seriously. They seemed to appreciate the comments; they tried to understand them; and they worked to figure out what, if anything, to do in response.[1] They shared with interviewers how solutions might relate to activities in class, to readings, to peer review (for both feedback and seeing how other students handled the assignment), and to them academically and sometimes personally (though the academic and personal were not always discrete). A majority of students (60%) indicated that they liked the assignments or found them interesting.

Students spoke frequently of the preponderance of comments that explained the instructor’s reasoning behind a comment, were directive (“Be sure you provide an example of this”), praised, asked genuine questions, suggested possible strategies or revisions without explicitly requiring their implementation, or were personal in nature (responding more like a reader than a teacher) (see Table 1). Some mentioned how starkly different this commenting approach was from teacher approaches in high school.

To illustrate how generative comments were for students, I include the following example, excerpted from an interview with one of the transfer students, who explained his thinking after receiving a positive comment (including an explanation). Instructor Comment: “Yes, I’m glad you bring up Holmes. Your observations reminded me of her article.”

STUDENT: [That comment] makes me feel like this part is fine…but I guess the fact that she’s so emphatic about what I’ve written so far clearly lines up with the Holmes article, it makes me feel like there’s a comparison. She’s very happy that I mention this, because the path that the reasoning and article is going down is more in line with this than what I had previously written about. It might be just what she wrote, it might be feelings that I had as I was reading it a week after I had turned it in…

INTERVIEWER: Your plans for revision based on this comment?

STUDENT: Probably changing, in some way incorporating this idea into the intro. I’m not sure if I want to entirely remove the entire quote that I had in the beginning, but in some way make it so that it brings it back around, instead of just going off on this tangent, which I have a tendency to do.

INTERVIEWER: How do you know you have a tendency to go off on tangents?

STUDENT: I just have a very--I don’t know if it’s an active imagination or just I’m easily distracted. I find a lot of things interesting that I guess a lot of either don’t pick up on or find interesting, so my mind wanders down that path, when I’m supposed to be going down this path. (DB2-7_INT 1)

The student works to interpret why the instructor made the comment and what ramifications it might have for his paper as a whole; he thinks about what he might do in response (adding, deleting, incorporating); and he relates the comment to what he already knows about his personal writing challenges.

Despite the field’s controversy over directive, “controlling” comments, most students told us they liked being told exactly what to do, finding such comments “helpful.” Few students indicated that they felt overwhelmed by corrections, even the student mentioned above who received 48 corrections on his draft. One student did say that she was aghast when she saw the number of comments (this included in-draft and marginal)—that is, until she started reading them.

Students did, however, value substance comments more highly, albeit for reasons that varied from student to student. Some students felt that the more detailed comments invited them to be part of the college community; others explained that comments framed as a query or suggestion to consider alternative positions opened up a genuine conversation with the instructor, not just about improving writing (e.g., shifting tone to better appeal to the target reader) but also about rethinking ideas (e.g., considering alternative positions or points of view). The following is an example of a student’s remarks on a comment that poses a question:

STUDENT: (reads instructor’s comment) “Okay. Are you arguing in agreement or disagreement?” See that’s the thing! I like the way he’s put the phrases. It’s like he’s talking to me. So it’s like as if he’s right in front of me, and he’s explaining it to me, so I can read it in his voice. You know? That makes it much more easy. I hope that all teachers do this… Okay…. [pause to read] That is fine, yeah, because this paragraph is also too short, you know? I don’t think he has written about that, but that’s fine, I feel that it’s too short, so I should expand. (DB2-12_INT 1)

He realizes, in thinking through the comment, that one of the solutions is to develop his ideas in the paragraph in question—which is not explicitly indicated in the comment itself.

In general, positive comments indicated to students that instructors valued their ideas or at least affirmed what they were trying to do, and comments about substance suggested to students that the instructor genuinely wanted to help students improve their writing and their thinking about their ideas. Others felt that instructors came off more as real people when they offered comments as suggestions rather than commands.

Managing confusion. One of students’ most frequently mentioned reactions had to do with confusion. I focus considerable attention on this not only because of its frequency but also because confusion is often a concern for instructors. The results illustrate the inconsistency and unpredictability of response.

Across the first interview, in which students talked about the comments and what they planned to do, and the second, in which they explained what they did or did not do in revising, there were 62 mentions of confusion. Of the 47 students in the study, 35 (74%) shared that they were confused about one or more comments.
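As a quick arithmetic check, the reported percentage follows from the raw counts; a minimal sketch in Python (offered here purely for verification, not as part of the study's method) reproduces it:

```python
# Counts reported in the study
total_students = 47      # students who completed the data collection cycle
confused_students = 35   # students confused about one or more comments

share = confused_students / total_students * 100
print(f"{share:.0f}%")  # prints "74%"
```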

Students were confused about a broad spectrum of issues: instructor highlighting (e.g., What did the highlighting mean? And, why did the color vary?); track changes (e.g., Which text is the teacher’s? What did he want me to do with this?); the goal of the paper (e.g., Is this what is meant by a rhetorical analysis?); why MLA format? (e.g., The requirements were different in high school); positive comments (e.g., Were students supposed to do something in response?); the grade or score (e.g., How was this figured? Is a check mark with a “plus” attached equivalent to an A?); use of a specific term (e.g., What does “organizing principle” mean? What is “syntax”?); what part of the paper the comment referenced; and, finally, what symbols or abbreviations meant.

Perhaps more interesting, though, was the range of strategies students had for dealing with the confusion. In 17 instances of confusion (27% of the total), students chose to ignore the comment completely, as in the following example:

STUDENT: [reads comment] “Awkward and unclear syntax. You must revise these sentences to make clearly your point.”

INTERVIEWER: Okay, and so do you understand what this comment means?

STUDENT: Not really.

INTERVIEWER: Okay. Why--what don’t you understand about it?

STUDENT: Um, “revise these sentences to clearly make your point.” I don’t really know how to revise that sentence… And I don’t know if I should revise that sentence.

INTERVIEWER: Okay, are you familiar with that word “syntax?”

STUDENT: (whispers) Not really.

INTERVIEWER: Not really?

STUDENT: Yeah, I didn’t change it. (DB2-51_INT 2)

In 11 cases (23%), students tried to do exactly what the instructor told them to do without really understanding what they were doing or why. And in two cases, they “corrected” incorrectly. Eight students sidestepped the comment by omitting the offending word or passage or by taking a different direction, primarily to avoid what they didn’t understand:

INTERVIEWER: [reading instructor comment]: “Could possibly?” What is he trying to communicate here?

STUDENT: (rereads). I don’t know. I’d have to ask him about it.

INTERVIEWER: Do you think that you will?

STUDENT: Most likely not.

INTERVIEWER: So what will you do with the comment then?

STUDENT: I’d rephrase the sentence.

INTERVIEWER: And just hope that in rephrasing it that….

STUDENT: I’m doing it right, yeah. (DB2-3_INT 2)

In 19 cases, on the other hand, students worked through the confusion. Some thought about the comment and said they figured out the problem on their own; seven talked to the instructor for a resolution. The following student synthesizes feedback from various sources (peers, the instructor’s comment) and takes the initiative to contact the instructor directly to problem-solve.

STUDENT: During my peer review I know that several of the students had said that I needed to shorten my thesis or condense it and she [instructor] said the same thing so I definitely know that was something I needed to work on. And I actually emailed her and asked if she could further expand on the comment and help me, and she did, so that was really great. (DB2-61_INT 2)

Three other students wrestled with the confusing comment and decided that they disagreed with it (hence, no revision). And one student revised, despite disagreeing initially, because she found the solution to work better than the original. As she explains, “I didn’t really see the need to put [her reasoning] in but, I mean, it works better if you have an explanation after.”

Influences on students’ writing process. Anson (2012), Nystrand (1986), Prior (1995), and Prior and Shipka (2003) have pointed out that what students actually do in revising is mediated by an array of circumstances, events, and forces—some cultural, some historical, others based on past experiences, present circumstances (especially but not limited to dialogue with the instructor), and students’ perceptions of what’s needed—and what’s possible—in the immediate future. In looking at the factors that influenced student revisions, we tried to distinguish between what we might expect from students taking a writing class and influences that came from elsewhere—in other words, experiences not directly related to the class: other contexts, other people, time management, knowledge acquired in other classes.

In-class influences (Table 2). Unsurprisingly, in both the first and second interviews, students most often mentioned the benefit of in-class instruction, explanations, and lessons as influencing what they planned to revise and what they actually did revise. Some percentages rose significantly in Interview 2—additional student interactions with instructors (face-to-face or via email), mentions of peer review, and references to course-related reading (students reread assignments, for example)—as students worried about their final drafts. In Interview 2, students reported being more attentive to the course set-up and resources. While students mentioned the writing center in both interviews, no student in the study visited the writing center for this paper.

Table 2

Out-of-class influences (Table 3). The goal in asking students about influences external to the class structure was to capture some of what happens beyond the classroom—in other words, at least some of what students brought to their processing of instructor comments. In Interview 1, students referred frequently to high school experiences, to the writing process they already used before taking WRD 103, and to knowledge brought in from other college courses. Most of these comments were assertions about how different the approach to commenting was in high school, where teachers largely focused on grammar and usage issues and where pedagogies generally resulted in a “one-and-done” drafting process. About 40% of students also referred to their “personal” writing process, which may or may not have been acquired in high school (one student mentioned an invention exercise he learned in elementary school).

Table 3

In Interview 2, students again mentioned high school and their writing process, but influences from other people (outside school) became more salient. Forty percent of students mentioned having a person or persons who were not in the class read their drafts. Several students had parents read drafts; some students reported that a parent simply rubber-stamped the draft as “great,” while other parents—often but not always teachers themselves—were diligent and sometimes merciless.

INTERVIEWER: Um, in terms of, like, did any, did you have, like, anyone else read this paper? Like a roommate, or a friend?

STUDENT: Um, yeah. I have my mom, um, reread it, all my papers, actually. Because, um...she was a high school teacher. Well, she did, um, she taught for like 30 years...special ed. And she, uh, she’s retired now...but she loves English and all that. So, yeah, she always has read and proofread.

INTERVIEWER: And so, when you get feedback from your mom, what kind of feedback does she give you, typically?

STUDENT: Yeah, she helps a lot. Like, she’ll just be like, ya know, she’ll edit it and be like “look at this.” She’ll pretty much do the same things [the instructor] does. Underline stuff and like...ya know, take out stuff, and just be like, “look at this sentence,” or “revise,” or “rephrase,” or. (DB2-53_INT 2)

Another student turned to his father, an accountant, for both grammar corrections and affirmation.

INTERVIEWER: How do you feel about the final draft?

STUDENT: I’m really proud of it; I really like it. I emailed it to my dad and he said it was really good so…he did some grammar stuff, but he said that I really “hammered it home” with my explanations and quotes so… it’s such a dad thing to say. (DB2-35_INT 2)

Roommates and college friends at other universities served a range of functions. One student reported getting help with a pre-draft analysis from a friend at Purdue University. Other times, roommates were used as sounding boards for completed drafts:

STUDENT: I read the Barack Obama paragraph to my roommate like 150 times.

INTERVIEWER: … So did your roommate give you suggestions, or...

STUDENT: Yeah, I just… I asked her specifically to tell me if she got what I was saying, like to summarize what I said in her own words.

INTERVIEWER: Did your professor suggest that or is that a strategy you knew?

STUDENT: It was just something I thought of cause we do help each other a lot…she was unclear about some stuff, so yeah I had to continue to make changes so it would be really easy for her to immediately be able to repeat it back to me. (DB2-14_INT 2)

Finally, time management and stress were factors; on the quarter system, where courses last 10 weeks, time is often in short supply. Thirty-six percent of students in our study mentioned problems with stress and time management that prevented them from devoting what they considered to be adequate attention to their writing and revision.

Determining what the teacher wants. One of the most frequently mentioned concerns (which could be categorized as both “in-class” and “outside-class” since it involves both class concerns and external pressures) was deciding what would most please the instructor. In both interviews, students were significantly preoccupied with grades. They frequently mentioned the grade or score they expected/hoped to receive (or in some cases did receive) or concerns that they would get “points taken off” for doing or not doing this or that. Sixty-one percent of students mentioned grades or scores in Interview 1. In Interview 2, the figure was slightly lower (51%), possibly because students were feeling more confident about their drafts or because students who had already received grades on their rough drafts assumed any revision would improve their grades. Several students explained to us that instructors who gave a grade or score on the first draft de-emphasized it—it was intended to serve as a placeholder, to show students “where they stood.”

Grades, or the prospect of grades, clearly influenced the way many students approached revision. For example, several students commented that they routinely made surface-level corrections regardless of whether they agreed with or understood a comment, because this was what the teacher wanted, and the teacher is the one who gives the grades.

STUDENT: …I find myself to be a bit of a suck-up when it comes to teachers.


STUDENT: Because at the end of the day, they’re grading my paper. So, if they say that this needs to be done, I’m going to try to do it, because if I can’t accommodate them, I’m not going to get as good of a grade as I could. (DB2-22_INT 1)

There were other permutations. A student who received a “B” on his rough draft indicated that he did not plan to do any revision. He was a theatre major, and he explained that his other courses were very time-consuming, stressful, and more crucial to his career. He just didn’t have the time to revise even though he knew he should; a “B” was just fine with him. Another student didn’t like revising. She knew revision would make her paper better, and so she did it for the teacher, but she believed that after this course, she would never do it again.

Some students thought making revisions and edits, particularly those that required additional writing, was risky. The following student seemed to believe a comprehensive revision might end up as “bad” as the original (or worse).

INTERVIEWER: So, it would've taken a lot more work--is it that you didn't have time? Or was a B good enough for you? …

STUDENT: I felt like a B was good enough for me and then also just because of my writing style I knew that if I were to rewrite it, I'd probably get off topic again and I felt like I'd be making the same mistakes and then I'd have to revise again. I felt like for time sake, and for my sake, it's just like I don't want to retake the same route I know I'm going to mess up on, I'd rather just look back at some of the errors and sort of like patch them up, fix this and that, and then make them look good other than throwing the entire thing out that was already fairly decent that I could fix up and it'd be fine than write an entirely new thing. (DB2-57_INT 2)

We also looked at the degree to which students said they made changes in their final drafts on the basis of what they perceived the teacher wanted. This is most likely a balancing act: even if students’ focus is improving their writing, they also have to pay attention to what the grade-giver wants. Thus, we sought to capture student perceptions of what a teacher was looking for. In Interviews 1 and 2, we coded for “what teacher wants,” and in Interview 2 we followed up on how students handled this concern in their revision with a sub-code under Approach to Change: using the “teacher’s language and ideas.”

While some students referred to what the teacher wanted, they didn’t always make revisions based on this awareness. Instead, they grappled with the extent to which they could exert their own authority in the revisions, as in the following example:

STUDENT: [the instructor’s comment about the lack of transition] was almost validating because I did want it to be an abrupt transition, but then I also started sort of thinking like, is it an abrupt transition that she didn’t like? So if my reader’s reading this, did they actually enjoy reading that, or…? My immediate reaction was kind of like, “Oh, whatever,” but maybe I do need to look at this and see if that’s something that a reader would enjoy to read.

INTERVIEWER: So have you thought about how you’re going to address it? Are you going to keep it? Do you know yet?

STUDENT: I’ve decided that I’m probably going to keep it in.

INTERVIEWER: Cause you’re cool with it?

STUDENT: Yeah, I’m cool with it. Like as a writer, I feel like that’s sort of my style, like that was something that I meant to do intentionally and I like it. (DB2-15_INT 1)

It appears, then, that student agency occasionally butts up against concerns about grades and scores and, even more troubling, that grades and scores might impact the kinds of learning that paper comments make possible.


Discussion

The central purpose of this project was to find out more about what happens between the moment a student receives a comment and the completion of the final draft, using what students tell us as primary data. The study confirms the results of other studies (Calhoon-Dillahunt & Forrest, 2013; Scrocco, 2012; Wingard & Geosits, 2014) in multiple ways: Students welcomed comments; they were eager to figure out what to do about them; and most students were particularly grateful for substantive comments. While other studies suggest substantive comments contribute to improved texts, students in this study also spoke to the ways in which substantive comments did not merely critique or correct their writing but invited them into conversations about ideas, texts, readers, and their own subject positions as writers. Certain types of substantive comments (questions, confirmations, suggestions) served to validate what students brought to the process (knowledge and goals)—something that was most likely very important to these students early in their first year in college.

The forces that influenced how students thought about comments and what they did with them went well beyond the classroom and included events (occurring during the writing process), past experiences (in life, at school, with writing), short- and long-term priorities (due dates, other classes, career goals), people (relationships and personalities), and social and cultural knowledge, all of which confirm that an individual’s writing process is constantly evolving, infinitely variable, and non-linear (Anson, 2012; Prior, 2016). It is small wonder, then, that even the “best” comments may not result in an improved draft.

Certainly, differences between individual students had an impact on revising processes. For example, one of the most frequent reactions to comments that students mentioned in the interviews was confusion about what was being asked and what was expected. The fact that different students handled confusion differently likely involved a number of variables, including the relationship with the instructor, personality compatibility, and student self-efficacy. Each of these variables merits more study. In collecting information on students’ backgrounds with writing—their previous experiences in high school and their self-awareness about themselves as writers—we had hoped to find correlations between self-efficacy (Bandura, 1994; McCarthy, Meier, & Rinderer, 1985; Pajares, 2003) and how students responded to confusion. Unfortunately, these results were inconclusive, primarily because of limitations on the scope of this study.

Concern about grades had an important impact on the choices students made and why they made them. Despite the importance of assessment, the field has a vexed relationship with grading (Broad, 2003; Huot & O’Neill, 2009; O’Neill, Moore, & Huot, 2009; Smith, 1988; White, 2007), and, as a result, many composition instructors have worked to reduce the omnipotence of the grade in the interest of having students focus on learning. For example, Danielewicz and Elbow (2009), Shor (2009), and Litterio (2016) have made the case for various forms of contract grading, and, in fact, eight instructors in DePaul’s FYW program use contract grading in their courses, although these courses were not represented in this study. A few instructors in the study used a check, check-plus, check-minus system in lieu of a grade or score on drafts, but students admitted that they immediately translated these symbols into grades. DePaul University’s writing program, in using course-final portfolios that give students the opportunity to submit multiple samples of revised work for evaluation, provided an option for deferring grades until the end of the term, when a grade could be determined from revised, student-annotated work.

Yet the power of the grade is formidable. Grading is systemic in education, and this study suggests how concerns over grades can compete with actual learning. The students came to their FYW classes with experiences in high school and with standardized testing that privileges the final product (e.g., the final draft) as the representation of what they have learned, and that emphasizes formulaic writing to facilitate rapid scoring: thesis statements at the end of the first paragraph, supported by appropriately-cued evidence; rigid approaches to organization; and the ability to reproduce formal academic style and adhere to “standard” conventions of correctness. The assumption is then reinforced that learning to write should be reflected in students’ performance on final drafts, where that knowledge can be instantiated and its characteristics identified.

Clearly, our understanding of learning to write has moved well beyond these reductive judgments. See, for example, “NCTE Beliefs about the Teaching of Writing,” one of the clearest and most important statements from our field about the complexity of learning to write (Writing Study Group, 2004). Indeed, at faculty development meetings at DePaul, instructors regularly shared classroom activities that enable them and their students to monitor and enhance learning as it happens: post-writes that accompany drafts, student feedback on instructor comments that discusses what students plan to do, discussions about what specific comments do and don’t do—in other words, ongoing written and oral reflection and communication with the instructor.

The increasing use of reflective portfolios in US writing instruction and assessment has, to some degree, contributed to a richer, more complex understanding of what students have learned, in large part because reflective portfolios crucially involve the students’ perceptions, often in the form of claims they make about their work, supported by evidence in the portfolios. But if this study is any indicator, portfolio reflections that students prepare at the end of the term, in order to get a grade, may be decidedly limited indicators of much of the learning that goes on in writing courses. One of the most powerful influences—students’ motivation to get good grades—can interfere with the process of becoming better writers. In other words, even though the intention of portfolio reflections is to provide students and instructors with an opportunity to examine and assess what students know about good writing in general and their own work in particular, this may be yet another instance of students interpreting “what the teacher wants.”

Certainly, the goal of commenting on student work is to help students learn how to improve their writing, but how should that learning manifest itself?

The interviews themselves were compelling in part because they provided audio snapshots of what could be categorized as “learning in progress”: students in the process of thinking through comments in relationship to what they already knew, what they needed to know and do, and what their goals were at this particular moment in time. To illustrate, I offer three examples that seem to testify to the less “visible” work of learning—observable in the interviews, but not so much in the final drafts. The examples are extended to illustrate the associative, fragmented nature of the thinking:

Student #1

STUDENT: And maybe at first I was a little bit iffy or I questioned, sort of, his method of how he was giving me the feedback. Like, at first I think I said in the first session that I thought he was too vague in his wording, because he asked me, "Be more specific and analyze," but he didn’t specify where exactly I should do that. But now, after doing the revision I understand why he didn’t say. I feel that he wanted me as a writer to sort of challenge myself and search for the flaws in my own writing, which I think is something hard to do, because when you do something you think you gave it your all, you think there can’t be any fixes done to it, but then after you read it again you do question yourself. You’re like, "Oh okay, now I understand where I could have fixed something, or where I could have written a little bit more." (DB2-34_INT 2)

Student #2

INTERVIEWER: And can you tell me a little bit about your revision process in general.

STUDENT: Mmm, well I can say it took me a long time to actually like revise it thoroughly, like I kept rereading the feedback comments that she put on certain parts of my paper, and I thought about what I could do to brainstorm different questions as in like, “Why this?” and “What was the purpose of me putting this sentence in?” “What could lead to another paragraph?” and things like that. It was a long process... [laughs]

(DB2-60_INT 2)

Student #3

INTERVIEWER: So, he [the instructor] didn’t say anything about your introduction, but you started it differently, so why did you?

STUDENT: Yeah definitely, because if you read over here, the summary content needs to be expanded. As it stands, the introductory content is not engaging. So, I tried to make it more engaging by putting “college sucks” at the beginning. It makes it pretty juvenile. I understand that, but if you read it and you understand that I’m not trying to be juvenile, I’m just trying to grab your attention. I want to grab the audience’s attention, and I hope he sees that. So, that’s what I had to do. I also changed a lot of stuff in the first article, if you read it. I changed everything.

INTERVIEWER: In the first paragraph?

STUDENT: Yeah....I changed that whole thing: "However, I believe college provides a student with learning outcomes that go beyond the classroom." That is mine.

INTERVIEWER: Why did you change that?

STUDENT: That’s my thesis statement, and I realized after reading through my original draft that I’d written that at the end, and he pointed it out. So, I was like, Oh, that’s actually a good thesis; I can actually write a lot about that. So, I said, let’s bump that up to the front, and let’s see what I can work with that. And I realized that I changed the whole essay. The second paragraph is this short, like it’s what, seven to eight lines over here. Over here I change it to at least like 15 lines.

INTERVIEWER: So, to bring the thesis statement up to the beginning, was that your choice or is that based on his suggestion?

STUDENT: He asked me where my thesis statement is, and, yes, my thesis statement has to be in there in the introduction ... It’s almost like a rule that he’s brought up, and I think it really helps shape your argument. Like you can understand the difference between these two. Like I could clearly see, like if I have the thesis statement staring at me at the top of every paragraph, you know? I can write the paragraph much better, and he’s completely right. I could see it perfectly. (DB2-12_INT 2)

In these and in many of the interviews, students were problem-solving, even when the problems couldn’t be resolved before the final draft. They worked at reshaping their ideas and their language in response to feedback; they considered, weighed, and potentially redefined and reworked their orientation to what they had written; they came to realizations about form and topic as they were thinking things through; they wrestled with establishing their own authority in conjunction with or in opposition to the instructor’s, balancing this with other influences such as peer feedback, other materials they read, parents, and friends.


Limitations

There is little doubt that the complexity of the processes involved in writing and revising makes definitive conclusions problematic, and, while the researchers took care to ensure replicability, the results are not generalizable across institutions, programs, and student populations. The problem of generalizability is at the heart of the challenge of commenting on student work. In this case, we were working within a specific context: a private, 4-year, competitive, urban university (rather than a community college, an Ivy League university, or a Research I institution). The primary mission of the university in this study was quality teaching (though research is becoming an important second). As is common in some universities without Ph.D. programs, the writing instructors were all adjunct faculty (not graduate teaching assistants or tenure-line faculty), and this particular group worked within a writing curriculum intentionally grounded in best practices in writing pedagogy from the field of Composition and Rhetoric.

While students were fairly diverse, they mostly came from urban and suburban college prep high schools; they were highly motivated to do well academically. Further, they were working on their second paper of their first term at college, and it is likely that results and processes would be quite different for a student population of juniors or seniors or even first-year students at the end of the term.  

Although we could draw some inferences about the classroom environment and activities from what students shared with us, we did not account for intentions of individual instructors, the interplay of personalities in the courses, and the specific activities that took place in the classrooms.

Future Directions

Challenges and limitations notwithstanding, commenting on student papers is at the heart of what writing instructors do, enabling them to intervene at crucial moments in their students’ learning, when thinking and writing are in flux, questions are emerging, and answers are tentative. What we may be seeing in the interviews (illustrated in the excerpts above) is students in the process of engaging in—and in some cases even developing—habits of mind. In a 2015 issue of the Journal of Writing Assessment, contributors take to task the Common Core State Standards’ (CCSS) emphasis on skills and the consequent testing of those skills with static assessment instruments that focus on product. Clark-Oates, Rankins-Robertson, Ivy, Behm, and Roen (2015) proposed using the Framework for Success in Postsecondary Writing instead to shape approaches to assessment, in part because of the Framework’s emphasis on habits of mind (curiosity, openness, engagement, creativity, persistence, responsibility, flexibility, and metacognition)—habits that are fluid, dynamic, and difficult to isolate but that have as much relevance to learning to write in college as they do in secondary education.

One of the ramifications of this study and studies like it is that, in evaluating the success of comments on student papers, we need to reflect on what to look for, where to look, and how to look, and then on how to articulate what we see, in terms of learning, to instructors, administrators, and stakeholders.

Author’s Note

Darsie Bowden is Professor Emeritus in the Department of Writing, Rhetoric & Discourse at DePaul University. Although retired, she continues to pursue her scholarly interests in writing pedagogy and writing program administration.


Acknowledgments

This research project was initiated at the Dartmouth Research Seminar in 2013, and work on the paper itself continued at the Dartmouth Institute in 2016. Thanks to Christiane Donahue and all the colleagues at these gatherings who so generously shared their feedback, support, and expertise. Thanks to Charles Bazerman and Neal Lerner for reviewing early drafts and to the editors and reviewers of the Journal of Writing Assessment for their valuable help in bringing the paper to fruition. I owe a huge debt to my research team, Bridget Wagner, Cari Vos, Kathyrn Martin, Jeff Melichar, Lauri Dietz, Mathew Fledder-Johan, Matthew Pearson, Amanda Gaddam, and to the writing tutors at DePaul University’s Center for Writing-Based Learning for interview transcriptions. Research was funded by the CCC Research Initiative (2013-2014) and two DePaul University Research Grants (2014, 2015) and supported by DePaul’s Social Science Research Center. I reserve special thanks for members of DePaul’s First-Year Writing Program and to the 47 DePaul students who shared, with candor and grace, their thoughts, feelings, and goals as they engaged in the difficult work of becoming better writers.

[1] Whether this result was influenced by the additional attention to instructor comments prompted by the research project itself (especially in the interviews) or was simply what students would have done anyway is difficult to say definitively. Based on the students’ willingness to talk about the comments and the facility with which they generated at least preliminary ideas about what they might do in response, one could make a fairly strong case that students did indeed pay careful attention to comments.


References

Anson, C. (1999). Talking about text: The use of recorded commentary in response to student writing. In R. Straub (Ed.), A sourcebook for responding to student writing (pp. 165-174). Cresskill, NJ: Hampton Press.

Anson, C. (2012). What good is it? The effects of teacher response on students’ development. In N. Elliot & L. Perelman (Eds.), Writing assessment in the 21st century: Essays in honor of Edward M. White. (pp. 198-202). New York, NY: Hampton Press.

Anson, C. M. (2016). “She really took the time”: Students’ opinions of screen-capture response to their writing in online courses. In C. Weaver & P. Jackson (Eds.), Writing in online courses: Disciplinary differences. Norwood, MA: Hampton Press.

Auten, J. G. (1991). A rhetoric of teacher commentary: The complexity of response to student writing. Focuses, 4(1), 3-18. 

Bandura, A. (1994). Self-efficacy. In V. S. Ramachandran (Ed.), Encyclopedia of human behavior (Vol. 4, pp. 71-81). New York, NY: Academic Press.

Berzsenyi, C. A. (2001). Comments to comments: Teachers and students in written dialogue about critical revision. Composition Studies, 29(2), 71-92.

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Calhoon-Dillahunt, C., & Forrest, D. (2013). Conversing in marginal spaces: Developmental writers’ responses to teaching comments. Teaching English in the Two-Year College, 40(3), 230-247.

Christiansen, M. S., & Bloch, J. (2016). “Papers are never finished, just abandoned”: The role of written teacher comments in the revision process. Journal of Response to Writing, 2(1), 6-42.

Clark-Oates, A., Rankins-Robertson, S., Ivy, E., Behm, N., & Roen, D. (2015). Moving beyond the common core to develop rhetorically based and contextually sensitive assessment practices. Journal of Writing Assessment, 8(1). Retrieved from http://journalofwritingassessment.org/article.php?article=88

Cope, B., Kalantzis, M., McCarthy, S., Vojak, C., & Kline, S. (2011). Technology-mediated writing assessments: Principles and processes. Computers and Composition, 28(2), 79-96. Retrieved from http://www.sciencedirect.com/science/article/pii/S8755461511000284

Council of Writing Program Administrators; National Council of Teachers of English; National Writing Project. (2010). Framework for success in postsecondary writing. Retrieved from http://wpacouncil.org/framework

Danielewicz, J., & Elbow, P. (2009). A unilateral grading contract to improve learning and teaching. College Composition and Communication, 61(2), 244-268.

Dohrer, G. (1991). Do teachers’ comments on students’ papers help? College Teaching, 39(2), 48-54.

Emig, J. (1968). The composing processes of twelfth graders. Urbana, IL: National Council of Teachers of English.

Ferris, D. (1995). Student reactions to teacher response in multiple-draft composition classrooms. Teachers of English to Speakers of Other Languages Quarterly, 29(1), 33-35.

Ferris, D. (2003). Student views on response. Response to student writing: Implications for second language students (pp. 92-116). Mahwah, NJ: Erlbaum.

Ferris, D. (2006). Does error feedback help student writers? New evidence on the short- and long-term effects of written error correction. In K. Hyland & F. Hyland (Eds.), Feedback in second language writing: Contexts and issues (pp. 81-104). New York, NY: Cambridge University Press.

Fife, J. M., & O’Neill, P. (2001). Moving beyond the written comment: Narrowing the gap between response practice and research. College Composition and Communication, 53(2), 300-321.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Chicago, IL: Aldine.

Haswell, R. (2006). The complexities of responding to student writing; or, looking for shortcuts via the road to excess. Across the Disciplines, 3. Retrieved from http://wac.colostate.edu/atd/articles/haswell2006.cfm

Hyland, K., & Hyland, F. (2006). Feedback on second language students’ writing. Language Teaching, 39(2), 83-101.

Huot, B., & O’Neill, P. (2009). Assessing writing: A critical sourcebook. Boston, MA: Bedford/St. Martin’s.

Kim, L. (2004). Online technologies for teaching writing: Students react to teacher response in voice and written modalities. Research in the Teaching of English, 38(2), 304-337.

Litterio, L. (2016). Contract grading in a technical writing classroom: A case study. Journal of Writing Assessment, 9(2). Retrieved from http://journalofwritingassessment.org/article.php?article=101

McCarthy, P., Meier, S., & Rinderer, R. (1985). Self-efficacy and writing: A different view of self-evaluation. College Composition and Communication, 36(4), 465-471. Retrieved from http://www.jstor.org/stable/357865

Nystrand, M. (1986). Dialogic discourse analysis of revision in response groups. In E. Barton & G. Stygall (Eds.), Discourse studies in composition (pp. 377-392). Cresskill, NJ: Hampton Press.

O’Neill, P., Moore, C., & Huot, B. (2009). A guide to college writing assessment. Logan, UT: Utah State University Press.

Pajares, F. (2003). Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading & Writing Quarterly, 19(2), 139-158.

Prior, P. (1995). Tracing authoritative and internally persuasive discourses: A case study of response, revision, and disciplinary enculturation. Research in the Teaching of English, 29(3), 288-325.

Prior, P., & Shipka, J. (2003). Chronotopic lamination: Tracing the contours of literate activity. In C. Bazerman & D. Russell (Eds.), Writing selves/writing societies (pp. 180-238). Retrieved from http://wac.colostate.edu/books/selves_societies/

Prior, P. (2016, August). The units-of-analysis problem for writing research: Tracing laminated chronotopic trajectories of becoming a biologist. Paper presented at the Dartmouth Symposium: “College Writing” From the 1966 Dartmouth Seminar to Tomorrow. Hanover, NH.

Scrocco, D. L. A. (2012). Do you care to add something? Articulating the student interlocutor’s voice in writing response dialogue. Teaching English in the Two-Year College, 39(3), 274-292.

Shor, I. (2009). Critical pedagogy is too big to fail. Journal of Basic Writing, 28(2), 6-27.

Silva, M. L. (2012). Camtasia in the classroom: Student attitudes and preferences for video commentary or Microsoft Word comments during the revision process. Computers and Composition, 29(1), 1-22.

Smith, F. (1988). Joining the literacy club: Further essays into education (pp. 132-136). Portsmouth, NH: Heinemann.

Sommers, J. (2012). Response rethought…again: Exploring recorded comments and the teacher-student bond. Journal of Writing Assessment, 5(1). Retrieved from http://journalofwritingassessment.org/article.php?article=59

Sommers, N. (2000). Students’ perceptions of teacher comments. In R. Straub (Ed.), The practice of response (pp. 261-274). Cresskill, NJ: Hampton Press.

Sommers, N. (Director). (2005). Across the drafts: Students and teachers talk about feedback [DVD]. Princeton, NJ: Telequest.

Sommers, N. (2006). Across the drafts. College Composition and Communication, 58(2), 248-257.

Sommers, N. (Director). (2012). Beyond the red ink: Students talk about teachers’ comments [DVD]. Boston, MA: Bedford St. Martin’s.

Sperling, M., & Freedman, S. W. (1987). A good girl writes like a good girl: Written responses to student writing. Written Communication, 4(4), 343-369.

Spinuzzi, C. (2008). Network: Theorizing knowledge work in telecommunications. New York, NY: Cambridge University Press.

Straub, R. (1999). A sourcebook for responding to student writing. Cresskill, NJ: Hampton Press.

Straub, R. (2000). Students’ perceptions of teacher comments. In R. Straub (Ed.), The practice of response: Strategies for commenting on student writing (pp. 261-274). Cresskill, NJ: Hampton Press.

Wingard, J., & Geosits, A. (2014). Effective comments and revisions in student writing from WAC courses. Across the Disciplines, 11. Retrieved from http://wac.colostate.edu/atd/articles/wingard_geosits2014.cfm

White, E. M. (2007). Assigning, responding, evaluating: A writing teacher’s guide (4th ed.). Boston, MA: Bedford/St. Martin’s.

Writing Study Group of the NCTE Executive Committee. (2004). NCTE beliefs about the teaching of writing. Retrieved from http://www.ncte.org/positions/statements/writingbeliefs

Ziv, N. (1984). The effect of teacher comments on the writing of four college freshmen. In R. Beach & L. Bridwell (Eds.), New directions in composition research (pp. 362-380). New York, NY: Guilford.