Volume 5, Issue 1: 2012

College Students' Use of a Writing Rubric: Effect on Quality of Writing, Self-Efficacy, and Writing Practices

by Amy E. Covill

Abstract
Fifty-six college students enrolled in two sections of a psychology class were randomly assigned to use one of three tools for assessing their own writing: a long rubric, a short rubric, or an open-ended assessment tool. Students used their assigned self-assessment tool to assess drafts of a course-required, five-page paper. There was no effect of self-assessment condition on the quality of students' final drafts, or on students' self-efficacy for writing. However, there was a significant effect of condition on students' writing beliefs and practices, with long rubric users reporting more productive use of self-assessment than students using the open-ended tool. In addition, across conditions, most students reported that being required to assess their writing shaped their writing practices in desirable ways.

Keywords: rubrics, self-efficacy, self-assessment, working memory, writing quality, writing beliefs, college writers


Most educators believe that assigning writing is among the best pedagogical practices across disciplines at the college level (Vacca & Vacca, 2000). When students write in a disciplinary-specific way, they learn more about the discipline and they deepen their understanding of course material (Newell, 2006). For teachers, the challenge is to support students' writing in a meaningful way that is also practical given typical college class sizes of thirty or more students. In most cases, class size dictates that students need to have tools to help themselves create quality writing; typically the teacher cannot provide extensive, individual writing support to each student in his or her class.

Another best pedagogical practice is using methods that are supported by research. A commonly recommended method of supporting student writing at all levels of education is to provide students with an instructional writing rubric. While there is an extensive literature on teachers' and administrators' use of rubrics to assess student writing for grading and placement purposes (Broad, 2003, 2000; Huot, O'Neill, & Moore, 2010), there is little empirical research, especially at the college level, examining the effect of students' use of an instructional rubric on students' writing beliefs, practices, and performance.

A writing rubric contains a list of criteria that are relevant to producing effective writing (Andrade, 2000). For each criterion in a rubric, four to six levels of quality are typically described (Arter & Chappuis, 2007). Instructional rubrics "help students understand what is wanted on an assignment, help students understand what a quality... product looks like, [and] enable students to self-assess" (Arter & Chappuis, 2007, p. 31). Proponents of the use of rubrics believe they are useful at all levels of schooling, including at the college level (Quinlan, 2006).

The number of criteria contained in rubrics varies. Most rubrics include six or seven criteria (see, e.g., Andrade, Wang, Du, & Akawi, 2009, and Rawson, Quinlan, Cooper, Fewtrell, & Matlow, 2005), while at least one author recommends use of a rubric with fifteen criteria (Whitcomb, 1999). Others argue that the number of criteria to include on any rubric depends on how the rubric will be used (Arter & Chappuis, 2007), and the complexity of the rubric-related task (Quinlan, 2006). Popham (1997) recommends the inclusion of three to five criteria, arguing that lengthy, highly detailed rubrics are impractical.

The present study examines the possibility that providing students with an instructional rubric, i.e., with explicit criteria for their writing, improves writing performance. Both social cognitive theory and cognitive theory can be used to explain how the use of rubrics might enhance writing performance. In brief, according to social cognitive theory, rubrics could boost a writer's self-efficacy, thereby boosting motivation and writing performance. According to cognitive theory, use of a rubric might improve writing performance because a rubric may facilitate cognitive processing while writing.

Theoretical Framework



Social Cognitive Theory
Bandura (1986) proposed a social cognitive theory in which "human functioning is explained in terms of" interactions among "behavior, cognitive and other personal factors, and the environment" (p. 18). An important "cognitive factor" is perceived self-efficacy, or one's "judgment of one's capability to accomplish a certain level of performance" on a task (Bandura, 1986, p. 391). According to Bandura (1986, 1994), perceived self-efficacy plays a large role in motivation, perseverance, and consequently, performance. If one's perceived self-efficacy for a task is high, his or her motivation, persistence, and achievement will be high. Another "source of motivation relies on goal setting and self-evaluat[ion]... [and] requires standards against which to evaluate ongoing performance" (Bandura, 1986, p. 467).

Providing students with a rubric could enhance students' writing performance by increasing students' self-efficacy for writing. When students are given the specific criteria that are important for success and descriptions of what success looks like, they may be more confident in their abilities to succeed on a writing task and encouraged to work harder. Rubrics could also increase motivation (and effort) by helping students set explicit goals for their writing and by giving them standards to use for self-assessment. Increased motivation and effort may, in turn, boost performance.

Cognitive Theory
One of the most influential cognitive models of proficient writing was first proposed by Hayes and Flower (1980). Hayes' (2000) more recent model specifies writing-relevant elements of long-term memory, and continues to recognize effective management of limited working memory capacity as central to proficient writing. Flower (1994) emphasizes the connection between writing proficiency and the writer's understanding of the context for a particular act of writing. Like Bandura, Flower sees thinking and behavior as greatly influenced by the contexts in which they occur.

In terms of the context for writing, Flower (1994) argues that writing is a "literate act" that involves constructing meaning within a "discourse community" that has specific rules and "conventions" (p. 9). These conventions concern such things as "what ideas matter" and "what readers expect and need" (p. 22). When faced with a writing task, a writer uses problem solving, "guided by the writer's goals and awareness," to "respon[d] to a rhetorical situation (as they interpret it)" (p. 24). A rubric could help students identify the conventions of the rhetorical situation they are faced with and appropriate goals for their writing. The rubric could provide a needed scaffold for constructing meaning that is appropriate for a given situation. For example, if a writer must produce a persuasive text, a rubric that describes what a persuasive text does for the reader would prompt the writer to think about this aspect of the rhetorical situation.

Hayes (2000) also recognizes the effect of context or "the task environment" on writing (p. 10). For student writers, Hayes' task environment includes the "instructional context" (McCutchen, 2006, p. 115). (When a teacher offers students a rubric, the rubric is part of this instructional context.) According to Hayes' (2000) model, writing-relevant knowledge from long-term memory, such as task schemas and topic knowledge, is retrieved and used with writing-related processes (e.g., reflection, including problem solving and decision making) in working memory to produce or revise text (Hayes, 2000). Given Hayes' model of writing, it is reasonable to predict that providing students with an instructional rubric as part of writing instruction could improve the quality of students' writing in at least one of two ways.

First, rubrics may benefit students by altering their knowledge in long-term memory about what constitutes good writing (either for a specific assignment or more generally). Student writers, even at the college level, may need help identifying appropriate goals for writing (Deane, Odendahl, Quinlan, Fowles, Welsh, & Bivens-Tatum, 2008; MacArthur, 2007). A faulty or incomplete representation of goals for writing would result in an impoverished "task schema," which would, in turn, hinder effective text production and reflection. Other writing researchers have emphasized the importance of having appropriate writing goals in mind when students are faced with the task of reviewing (evaluating and revising) their drafts (Flower, Hayes, Carey, Schriver, & Stratman, 1986; Covill, 2010b). Effective reviewers are able to compare their current draft with their rhetorical goals, revising whenever the text fails to align with goals (Freedman, 1985). Thus, a faulty or incomplete representation of goals for writing may seriously impede a writer's efforts to write and review effectively.

Second, rubric use may improve the quality of students' writing by reducing the need for writers to store criteria for their writing in working memory while simultaneously carrying out the processing needed to address those criteria. Rubrics act as an external representation of appropriate criteria for a writing task, which could facilitate the process of reflection during writing. A rubric would reduce the amount of knowledge that must be held in working memory while composing, freeing up working memory resources to allow for more effective problem solving and decision making during text production (Holliway & McCutchen, 2004; McCutchen, 2006).

In sum, according to cognitive theory, rubrics may scaffold students' interpretation of what is expected of them in a particular rhetorical situation (Flower, 1994), may enhance students' knowledge about effective writing (their "task schema" [Hayes, 2000, p. 10]), and might facilitate planning, problem solving, and decision-making during writing and revising (i.e., "reflection" [Hayes, 2000, p. 10]) by reducing working memory load.

Empirical Research



Effect of Rubric Use on Writing Quality
In several studies at various levels of education, researchers have explored whether students write a better final draft if they are given an instructional rubric. Results of these studies have been mixed, and most have been conducted in middle school settings.

To my knowledge, only three studies consider rubric use at the college or graduate level of writing instruction. First, Andrade and Du (2005) interviewed fourteen undergraduates who were required to use a rubric for formally assessing their written work in a 200-level educational psychology class. These students reported that rubrics were useful because they made clear what the teacher expected, they helped students to "plan an approach to an assignment" (Andrade & Du, 2005, p. 4), and they were used as a reference that guided students' reviewing and revising of their work. These students reported producing written work of higher quality because of their rubric use; however, the quality of their written work was not objectively assessed, and there was no comparison group to tease apart the effects of requiring self-assessment from the effects of requiring self-assessment using a rubric.

The second study of rubric use at the college or graduate level of instruction was carried out in a veterinary school. Six expository writing assignments by 83 veterinary school students were evaluated by their instructors using the same rubric each time. Students showed significant improvement on four of the six rubric criteria from the first to second assignment (Rawson et al., 2005). They improved on "thoroughness," "conciseness," "logical organization," and "use of appropriate medical terminology" (p. 236). After the second assignment, improvement occurred only on "use of appropriate medical terminology" (p. 236). A limitation of this study was that there was no comparison group to determine whether improvements were attributable to the rubric, or whether practice with writing in this genre, or increased knowledge of course material, could explain improvements.

The third study of rubric use at the college or graduate level of instruction compared college students' attitudes toward writing assessment in three sections of a writing intensive course in which assessment rubrics varied in "the extent of elaboration and emphasis on ... critical thinking" (high, moderate, and low) (Morozov, 2011, p. 14). Students given the more extensive rubric emphasizing critical thinking had more positive attitudes toward assessment than students given a more minimal rubric. Morozov (2011) suggests that a more substantial rubric may "clarif[y] the learning path for students" (p. 24). Morozov (2011) did not examine the effects of rubric type on the quality of students' writing.

While research with students in early adolescence may be only tangentially related to student writers in late adolescence (i.e., college students), the middle school research is described in this review in order to more fully represent the existing literature in this area.

At the middle school level, using a quasi-experimental approach, Andrade (2001) found that eighth graders who wrote with reference to a rubric produced only one writing assignment (the second assignment of three) that was higher quality than writing by eighth graders who did not receive a rubric. Andrade (2001) acknowledges that genre may explain the inconsistent effect of rubric use because each of the three assignments included in the analysis involved a different genre.

Andrade and Boulay (2003) compared seventh and eighth graders who were given a rubric plus instructions and support for using the rubric to assess their own writing (treatment group), to those who were simply given a rubric to use independently (control group). For the first of two papers these students wrote, there was an effect of condition only for the girls (treatment girls wrote better papers than control girls).

Schirmer and Bailey (2000) provided rubrics to 5th and 7th grade deaf students. They found that students' written work improved on some of the rubric criteria, but in this study there was no comparison group.

Effect of Self-Efficacy on Writing Quality
The predictive relationship between self-efficacy and writing achievement is well-documented for college level students (Shell, Murphy, & Bruning, 1989; McCarthy, Meier, & Rinderer, 1985). Self-efficacy predicts the level of effort students expend on academic tasks like writing, and their level of perseverance when faced with difficulty (Hidi & Boscolo, 2006; Klassen & Welton, 2009). This increased effort results in higher quality writing (Pajares, 2003; Pajares & Valiante, 1997).

Effect of Rubric Use on Self-Efficacy
While there is research on rubric use and writing quality, and on the relationship between self-efficacy and writing quality, there is very little research on rubric use and self-efficacy. It appears that the only research examining this relationship is a quasi-experimental study of late-elementary and middle school students (Andrade et al., 2009). In this study there was a significant effect of rubric use on self-efficacy only for girls. (Unfortunately, writing quality was not assessed in this study, so any effect of rubric use on writing quality was not examined.)

There is some evidence that rubric use affects students' anxiety and confidence, affective states that relate to self-efficacy. At the college level, writers who are given a rubric believe that rubrics reduce their anxiety about a writing assignment (Andrade & Du, 2005), and report more confidence in their ability to write effectively (Covill, 2010a). More generally, in their review of research on self-regulated learning, Paris and Paris (2001) report that providing students with guidance for assessing their own learning relates positively to students' self-efficacy.

Effect of Rubric Length on Students' Beliefs
There appears to be only one study of students' perceptions of a rubric based on the number of criteria in the rubric (Morozov, 2011). As described above, Morozov's (2011) quasi-experimental study revealed that college students in sections of a writing-intensive course who were given a fuller rubric had more positive attitudes toward the rubric than students who were in sections of the course in which the instructor offered a rubric with fewer criteria. There appears to be no research on the effect of rubric length on writing self-efficacy or on writing quality.

Rubrics, Self-Efficacy, and Writing Quality



In the present study, an attempt was made to build upon previous research in several ways. First, the effect of rubric use on both students' self-efficacy and their writing achievement was considered. As noted above, previous studies examined the effect of rubric use on either self-efficacy or writing quality, but not both. Second, how students use self-assessment, especially how they use a rubric, and their beliefs about self-assessment, were explored. Third, rather than the quasi-experimental approach used in previous research, random assignment to condition was used. Finally, unlike in many previous studies, there was a non-rubric-using comparison group.

The following questions were the primary focus of the study. First, do rubric-using college student writers outperform writers who are not given a rubric to guide their writing? Second, do rubric-using college students have higher self-efficacy for writing? Third, how does use of a rubric affect college students' self-reported writing practices and beliefs? A secondary question was whether the length of a rubric has an effect on students' writing quality, self-efficacy, or rubric use.

Method



Participants
All sixty students who enrolled in two sections of the 200-level "Early Child Development" psychology course agreed to participate; however, data for four students were ultimately not included in the analyses. (Two of these students did not complete the assigned paper, the third did not complete the surveys, and the fourth student was absent when most of the data were collected.) The author taught both sections of the course. The course was offered at a public university in the northeastern United States that enrolls students of average academic proficiency. Fifty-one of the participants were females and five were males.

Procedure
Participants were randomly assigned to use one of three self-assessment tools as they worked to complete a 5-page paper requiring analysis and application of course material. (See Appendix A for the paper assignment guidelines given to students.) The self-assessment tools were a long rubric with eleven criteria (see Appendix B), a short rubric with five criteria (see Appendix C), and an open-ended assessment. The quality of the final draft of the paper was assessed to see if differences in quality were related to the use of a particular self-assessment tool. Students' self-efficacy for writing was also assessed to see if it was affected by the type of assessment students were assigned to use. (See Appendix D for the self-efficacy measure.) Finally, how students used their assigned self-assessment tool was explored using a survey. (See Appendix E for this survey instrument.)

The long rubric assessment tool contained four criteria that were assignment-specific and seven criteria that a teacher might assume students at the college level would automatically consider as they complete any written assignment (e.g., "Organization," "Sentence clarity," and "Mechanics"). The short rubric contained the same first three assignment-specific criteria that were included in the long rubric, plus "Mechanics." The other criterion on the short rubric, "Overall clarity of writing/organization," was simply a combination of two criteria on the long rubric. The open-ended assessment tool instructed students to "please list the 3 strongest aspects of your paper. (For example, 'The paper shows my understanding of some major topics that we covered in this class.' Or, 'It is clearly written.')." Students were also asked to "please list the 3 aspects of your paper that you wish were better." The three conditions allowed for a test of whether a longer, more detailed rubric was impractical compared to a short rubric, and whether self-assessment using a rubric is more effective for college students than a relatively unguided self-assessment.

As noted above, participants in each section of the course were randomly assigned to one of three self-assessment conditions: Long Rubric (n=19), Short Rubric (n=18), or Open-Ended Assessment (n=19). While using random assignment is a stronger design than a quasi-experimental approach, a drawback of using random assignment in this study is that participants in one condition could interact with participants in the other conditions. For example, a participant assigned to use open-ended assessment could have obtained a rubric from a participant in one of the rubric-using conditions, and then assessed their paper in both ways (in the open-ended way as required for the assignment and with reference to the rubric.) There is no evidence that this occurred, and, in fact, using a comparison condition that involved a treatment (Open-Ended Assessment), rather than using an empty control condition, may have prevented this kind of diffusion. Because all students were assigned to self-assess, it is less likely that they would seek out additional self-assessment work.

During the third week of a fifteen-week semester, informed consent to participate was obtained from students who volunteered to participate. In order to preserve the integrity of the research, the instructor did not express to participants at any time during the semester any opinions about the relative usefulness of the three assessment tools.

The assignment description for the five-page writing assignment was provided to students in Week 4. In brief, the paper assignment required each student to obtain a children's book appropriate for a particular age range to which he or she was assigned (birth to two years, two to four years, five to seven years, or eight to ten years). Students were then required to apply what they had learned about child development during the semester to their analysis of the book's content (and construction, if applicable), explaining why the book is appropriate for the designated age range.

Attached to the assignment guidelines were three copies of whichever assessment tool each student was assigned to use as they completed their paper. Specifically, depending on their condition assignment, students were given one copy of either the long rubric, short rubric, or open-ended assessment tool to be used to assess a sample paper written by a former student. They were given a second copy of the same assessment tool to be used to assess their own first draft of the assigned paper. The third copy of the same assessment tool was to be used to assess the final draft of their own paper.

During Week 6 of the semester, students were arranged in small groups according to the condition they were in and were instructed to assess a sample student paper that was of average quality. They were encouraged to discuss their assessment with others in their group who were using the same assessment tool. This practice assessment was done to draw students' attention to the paper requirements and to give them practice with the assessment tool they were assigned to use.

In Week 11 of the semester, students handed in the first draft of their paper and their assessment of it (completed as homework) using the self-assessment tool that they had been assigned to use. At this time, participants also completed a writing survey in which they were instructed to rate their self-efficacy for completing the assigned paper for this class. This survey was based on the survey used by Andrade et al. (2009). Students were asked to rate their confidence level on a 100-point scale (following Pajares, Hartley, & Valiante, 2001) for 12 aspects of writing. For the analyses, the self-efficacy score for each participant was an average of the 12 ratings.

In Week 13, students handed in the final draft of their paper with an assessment of it (completed as homework). They also completed the "Use of Self-Assessment in Writing" survey created by the author, which contains fourteen Likert-type items. The first ten items were offered to participants in all conditions. (Note that the first nine items used the typical "Strongly Disagree" to "Strongly Agree" scale while the tenth item used a "Never" to "Always" scale.) The last four items on the survey were only offered to participants in the two rubric conditions. For the analyses, scores were assigned on a five-point scale, e.g., a participant obtained a score of "4" if they circled "Agree" for an item.

To determine writing quality, the author, and a second rater who was blind to the design and purposes of the study, independently rated all 56 final drafts of the assigned paper. Ratings were determined holistically (White, 1988; Huot, 1990), following the "A through F" (including "+" and "-"), 11-point scale used in most classrooms. In preparation for rating the papers, the author and the blind rater participated in a norming session using five papers randomly selected from all of the writing samples. When discussing the quality of the papers, the author and the rater considered the criteria contained in the long rubric.

After rating all 56 papers, the raters discussed the quality of the 10 papers on which their ratings were most discrepant. Discussions resulted in revised ratings by one or both raters. Inter-rater reliabilities for all 56 writing samples, both before and after discussion of the 10 most controversial samples, are provided in Table 1.

Table 1. Inter-rater Reliability of Ratings on 56 Writing Samples

                 Before discussion of discrepancies    After discussion of discrepancies
Pearson's r                     .62                                   .81

Note. Correlations are significant at the .01 level.
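The reliabilities in Table 1 are Pearson correlations between the two raters' scores. As a minimal illustration of how such a coefficient is computed from paired ratings (the rating values below are hypothetical, not the study's data):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of ratings."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical paired ratings from two raters on five papers
author_ratings = [85, 77, 93, 73, 87]
blind_ratings = [83, 75, 95, 77, 85]
r = pearson_r(author_ratings, blind_ratings)
```

A value near 1 indicates that the two raters rank-ordered and scaled the papers similarly; discussing discrepant papers, as the raters did here, typically raises this agreement.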

For purposes of the analyses, letter ratings were converted to numbers (A=95, A-=93, B+=87, B=85, etc.). The "writing quality" score for each paper was an average of the rating given by the author and the rating given by the blind rater.
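A minimal sketch of the conversion and averaging just described. Only the first four mappings (A=95, A-=93, B+=87, B=85) are stated in the text; the remaining values, including F, extend that pattern and are illustrative assumptions:

```python
# Letter-to-number conversion on the 11-point A-through-F scale.
# Values after B=85 extend the stated pattern and are assumptions,
# not taken from the article.
GRADE_POINTS = {
    "A": 95, "A-": 93,
    "B+": 87, "B": 85, "B-": 83,
    "C+": 77, "C": 75, "C-": 73,
    "D+": 67, "D": 65,
    "F": 55,  # assumed value for the lowest scale point
}

def writing_quality(author_grade, blind_rater_grade):
    """Writing quality score for one paper: the average of the two
    raters' converted ratings."""
    return (GRADE_POINTS[author_grade] + GRADE_POINTS[blind_rater_grade]) / 2
```

For example, a paper rated B by one rater and B+ by the other would receive a writing quality score of 86.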

Results



Design and Analyses
The independent variable is Self-Assessment condition (Long Rubric, Short Rubric, or Open-Ended). Dependent variables are writing quality (the quality of the final drafts), self-efficacy for writing, and students' responses to the Use of Self-Assessment survey. A decision to use parametric statistics was made after ensuring that none of the assumptions associated with the use of Multivariate Analysis of Variance (MANOVA) were violated (normality, linearity, outliers, multicollinearity, and homogeneity of variance-covariance matrices). Two MANOVAs were used to determine whether there were any effects of condition. The central MANOVA included all of the dependent measures except for questions 11-14 from the Use of Self-Assessment survey. These last four items had to be analyzed separately because they were offered to participants in only two of the three conditions (students in the Long Rubric and Short Rubric conditions). An alpha level of .05 was used for all statistical tests.

The MANOVA conducted to examine the effect of condition on writing quality, self-efficacy, and use of self-assessment (items 1-10) was significant (Wilks' Lambda=.427, F(24, 84)=1.86, p<.05, ηp2=.35). Follow-up analyses indicated that there was no effect of condition on writing quality or self-efficacy (see Table 2). There was a significant effect of condition on self-reported use of self-assessment (see Table 3). Observed power was high for the overall analysis (.97), and ranged from .13 to .85 in the follow-up analyses.

Table 2. MANOVA Results for Writing Quality and Self-Efficacy

                        Long Rubric    Short Rubric    Open-Ended
Dependent Measure         M (SD)         M (SD)          M (SD)        F      df      p     ηp2
Writing Quality          78 (7.7)      80.5 (7.9)       78 (9.9)     .54    2, 53   .59    .02
Self-Efficacy            85 (9.60)     81.5 (12.2)      85.5 (9.8)   .77    2, 53   .47    .03


"Use of Self-Assessment in Writing" Survey
Rubric users (long and short).
Analysis of responses to items 11-14 revealed that 28 of the 36 rubric users (both Long and Short) (78%) referred to the rubric when planning and when writing the rough draft, and 34 of the rubric users (94%) referred to the rubric when revising. All 36 rubric users reported reading the entire rubric by the time they completed the final draft of the writing assignment.

Long rubric users are different from users of the open-ended assessment.
There was a significant effect of condition for items 1, 3, 4, and 7 on the "Use of Self-Assessment in Writing" survey. These results are shown in Table 3.

Table 3. Effect of Condition on "Use of Self-Assessment in Writing" Survey Responses

For each item, means (with standard deviations in parentheses) are given for the Long Rubric, Short Rubric, and Open-Ended conditions, followed by the F statistic, df, p, and ηp2.

1. The self-assessment I had to complete for the rough draft of my paper helped me to write a better rough draft.
   Long Rubric: 3.9* (.66); Short Rubric: 3.4 (1.15); Open-Ended: 3.05* (.97); F=3.79, df=2, 53, p=.03, ηp2=.13

2. The self-assessment I had to complete for the rough draft of my paper helped me to write a better final draft.
   Long Rubric: 3.7 (.99); Short Rubric: 3.8 (1.11); Open-Ended: 3.8 (.92); F=.01, df=2, 53, p=.99, ηp2=.001

3. The self-assessment I had to complete for the final draft of my paper helped me to write a better final draft.
   Long Rubric: 3.6* (1.0); Short Rubric: 3.55 (.92); Open-Ended: 2.9* (.88); F=3.52, df=2, 53, p=.04, ηp2=.12

4. In general, the kind of self-assessment I had to do in this class might be helpful to do for other papers in other classes.
   Long Rubric: 4.3* (.81); Short Rubric: 3.7 (.67); Open-Ended: 3.4* (.96); F=5.72, df=2, 53, p=.01, ηp2=.18

5. Having to assess my own writing made me work harder on the paper than I otherwise would have.
   Long Rubric: 3.7 (.93); Short Rubric: 3.8 (.88); Open-Ended: 3.6 (.77); F=.28, df=2, 53, p=.76, ηp2=.01

6. Having to assess my own writing made me more aware of what I needed to do to revise my rough draft of my paper.
   Long Rubric: 4.3 (.58); Short Rubric: 4.1 (.64); Open-Ended: 3.7 (.93); F=2.95, df=2, 53, p=.06, ηp2=.10

7. Having to assess my own writing made me more aware of what I needed to do to write a good paper.
   Long Rubric: 4.3* (.67); Short Rubric: 4.1 (.68); Open-Ended: 3.7* (.73); F=3.39, df=2, 53, p=.04, ηp2=.11

8. Having to assess my own writing forced me to set specific goals for my writing of this paper.
   Long Rubric: 4.0 (.67); Short Rubric: 3.9 (.76); Open-Ended: 3.6 (1.1); F=.94, df=2, 53, p=.40, ηp2=.03

9. Having to assess my own writing reduced my anxiety about successfully writing the paper.
   Long Rubric: 3.5 (.96); Short Rubric: 2.9 (1.0); Open-Ended: 3.1 (.66); F=1.8, df=2, 53, p=.18, ηp2=.06

10. Think about your high school and college experiences. How often have your teachers given you a rubric to use when you have a writing assignment?
   Long Rubric: 3.6 (.83); Short Rubric: 4.0 (.84); Open-Ended: 3.4 (1.1); F=2.1, df=2, 53, p=.13, ηp2=.07

Note. For items 1-9 (n=56), 1=Strongly Disagree, 2=Disagree, 3=Maybe/Not Sure, 4=Agree, and 5=Strongly Agree. For item 10 (n=56), 1=Never, 2=Rarely, 3=Sometimes, 4=Usually, and 5=Always. Asterisks indicate which means are significantly different from each other for each item using Tukey's HSD.


The second MANOVA revealed that there was no effect of condition for items 11 through 14 on the "Use of Self-Assessment in Writing" survey (Wilks' Lambda=.81, F(4, 31)=1.87, p=.14, ηp2=.19).

Across all assessment conditions.
Responses to the Use of Self-Assessment survey were analyzed collapsing across the three conditions (n=56). Across conditions, 37 students (66%) reported that having to assess their own writing made them work harder than they would have if they were not required to self-assess, 47 students (84%) believed that self-assessment made them more aware of revisions they needed to make, and 38 students (68%) believed that having to assess their rough draft helped them write a better final draft. Also, 44 students (79%) reported that having to assess their own writing forced them to set goals for their writing. Fewer than half of the group (25 students, or 45%) were equivocal (responded "Maybe/Not Sure") about whether self-assessment reduced their anxiety about writing successfully, while twelve (21%) disagreed that self-assessment reduced anxiety. (See Appendix F for survey responses on all survey items for all participants.)

Discussion



Recall that one of the primary questions explored in this study is whether rubric-using, college-level writers produce more effective writing than non-rubric users. Results of this study suggest that they do not. Students who were given an instructional rubric for a writing assignment in a college psychology class did not write better papers than students who were required to simply identify the strengths and weaknesses of their own paper. The writing quality of students, regardless of condition, was in the C+/B- range. A secondary question is whether length of the rubric matters when it comes to writing effectiveness. Again, results suggest that the answer is no. Students who were directed to consider eleven criteria for evaluating their writing did not write better (or worse) papers than students who were to consider only five criteria.

One plausible reason why rubric users did not write higher quality final drafts than students using open-ended assessment may be that by the time students are in college, many have reasonably good knowledge of criteria and goals relevant to academic writing in general, and they can surmise assignment-specific criteria from a well-written assignment sheet. In terms of cognitive theory, for relatively familiar academic writing, the rubric may be unnecessary for building one's task schema in long-term memory (Hayes, 2000), or for defining the rhetorical situation (Flower, 1994). This possibility is supported by a review of the assessments that open-ended assessment users made of their drafts in this study. Many of the strengths and weaknesses identified by these students reflect criteria contained in the long rubric. Additionally, many of the students in this study reported that in high school and college, they have "always" (17%) or "usually" (42%) been given a rubric. It may be that these college students' extensive previous experience with writing rubrics has caused them to "carry around in their heads the definition of good work found in [a] rubric" (Brookhart, 2004, p. 10). Again, previous experience may have led to a well-developed task schema for many of the students in this study.

Another central question explored in this study is whether providing students with an instructional rubric results in higher self-efficacy for writing. Results suggest that giving students a rubric did not boost their beliefs about their ability to write effectively, compared to students who were required to identify and describe the strengths and weaknesses of their own paper. In addition, students using a long rubric did not have higher self-efficacy than students using a short rubric. Previous research suggested that females may be more susceptible than males to an increase in writing self-efficacy based on being given a rubric (Andrade et al., 2009). Note that participants in the present study were almost all females, and yet no effect of rubric use on self-efficacy was observed. Perhaps by the time students are in college, their familiarity with the kind of academic writing required in this study makes them fairly sure they can succeed, with or without having access to explicit criteria to consider when writing. Indeed, these students had fairly high self-efficacy for the writing assignment: Averaging across conditions, students rated their self-efficacy for the present writing task as 84 on a scale of 0-100. If these students had more modest self-efficacy for this task, there might have been "room" for the rubric to have an effect. As noted below, future research should test the effects of a rubric for an assignment that is relatively unfamiliar to students and perceived by students as more challenging.

The third question considered in this study is how use of a rubric affects students' self-reported writing beliefs and practices. The majority of students who were given a rubric (either long or short) reported referring to the rubric throughout the composing process: They used it to plan, to draft, and to revise. In terms of a cognitive model of writing, this external representation of relevant criteria could relieve students of having to hold in working memory "what matters" while simultaneously carrying out processing necessary for composing (Flower, 1994; Hayes, 2000). Less need for storage in working memory allows for more extensive processing, e.g., more extensive problem solving and decision making associated with reflection (Hayes, 2000). Students' extensive reliance on the rubric in this study is contrary to the finding by Andrade and Du (2005) that undergraduates report not even reading the entire rubric they were given. In the present study, one hundred percent of the rubric-using study participants reported reading the entire rubric by the time they handed in their final draft.

Certain beliefs and practices of the long rubric users, but not the short rubric users, were significantly different from the beliefs and practices of open-ended assessment users. If a teacher is going to provide a rubric to his or her students, a long rubric may have a more powerful influence on students' thinking and writing practices than a short rubric.

First, students perceived the requirement to assess using the long rubric as helpful when initially drafting and again when producing the final draft, whereas open-ended assessment users were uncertain about whether the requirement to assess was helpful for these purposes. Again, in terms of cognitive theory, this could suggest that the rubric is being used as an external representation of criteria so that working memory is freed up for more effective reflection (Hayes, 2000). The long rubric provides an extensive representation of criteria for writing that students can refer to fluidly as they engage in writing-related problem solving and decision making. In contrast, open-ended assessment, which requires identifying strengths and weaknesses of one's paper after composing, offers no representation of criteria while composing that could facilitate reflection.

Second, long rubric users believed more strongly than open-ended assessment users that assessing heightened their awareness of what to do to write a good paper. Also, long rubric users saw assessment with a tool like the long rubric as potentially helpful for writing more generally for other papers in other classes, while open-ended assessment users were uncertain about this possibility. These responses suggest that the long rubric may be used by students to define the rhetorical situation and/or refine their task schema, but, as discussed above, not in a way that is powerful enough to affect writing quality.

In sum, results suggest that use of a rubric shapes writing practices and beliefs in pedagogically desirable ways, but not powerfully enough to produce significant gains in writing performance. In order to realize performance gains on familiar writing tasks, students may need more help with using criteria to problem solve and make decisions in a way that leads to more effective writing and revising.

Analyses of survey responses across conditions suggest that there are benefits of requiring college students to formally assess their own writing, regardless of the kind of assessment tool they use. First, the majority of students believed that with a requirement to self-assess they worked harder on the writing assignment than they otherwise would have. Second, a majority of students believed that assessing their rough draft made them aware of needed revisions and helped them write a better final paper. Whereas only long rubric users believed they were helped by the self-assessment requirement when composing the first and final drafts, a majority of students across conditions believed that self-assessment of any kind is helpful for effective revision. Finally, most students who were required to formally assess their own writing believed that assessment made them set specific goals for writing the paper.

Note that while many students believed they worked harder with the requirement to assess, they did not believe that the requirement reduced their anxiety about succeeding on the assignment. Rubric users were no less anxious about success on the writing task than non-rubric users. This finding conflicts with the finding by Andrade and Du (2005) that college student writers who use rubrics report reduced anxiety.

Educational Implications
Students should be required to perform some kind of self-assessment of their rough drafts whenever they are completing a formal writing assignment. Many students in this study believed that being required to assess their own writing caused them to work harder, to set goals for their writing, and to write a better final draft than they otherwise would have. While providing students with a rubric for assessing their own writing may not immediately translate into higher quality writing by students, especially when the purposes for writing are familiar, giving students a long (i.e., comprehensive) rubric appears to encourage writing practices and beliefs that could move students toward greater writing proficiency, consistent with a cognitive model of writing.

Limitations
Results of this study should be viewed with caution because of the relatively small sample size. A small sample size limits power, which may reduce the probability of uncovering significant differences.

Another reason for caution is that generalizability may be limited. First, these students were of average proficiency. Results may be different for basic writers. Second, this study tested writing in only one genre. Perhaps rubric use would boost college students' self-efficacy and writing effectiveness for a writing assignment that is more complex or less familiar. For a more complex or less familiar writing assignment, college students may be better able to capitalize on the provision of appropriate writing criteria. Third, recall that the sample in this study was predominantly female. Using a more gender-balanced sample could alter the results.

Future Research
In this study, social cognitive theory and a cognitive theory of writing were used as a framework for understanding whether and how an instructional writing rubric affects college writers' self-efficacy for writing and their writing performance. This may be a useful approach for additional studies; the present study should be considered a preliminary effort in this area. Future research could improve upon this work in a number of ways.

First, it may be helpful to conduct a study in which the achievement, self-efficacy, and practices of rubric users are compared with those of students who are not required to assess their own writing. Use of an empty control group may be more powerful for revealing effects of rubric use. Second, future researchers should examine how the context for writing, i.e., the genre and purpose, relates to rubric use and self-efficacy for writing. In particular, the effects of rubric use on self-efficacy and writing quality might be powerful when students are confronted with an unfamiliar genre or purpose for writing. Finally, it may be helpful to identify college students who have had little or no prior experience with rubrics to explore whether, in fact, prior experience with rubrics explains college students' knowledge of relevant goals for their writing and the consequent failure of rubrics to significantly affect writing quality and self-efficacy.


Correspondence concerning this article should be addressed to Amy E. Covill, Department of Psychology, Bloomsburg University, 2121 McCormick Center, 400 East Second Street, Bloomsburg, Pennsylvania 17815. Additional contact information: phone (570) 389-4990, fax (570) 389-2019, and e-mail acovill@bloomu.edu.

The author thanks Jessica Smith for her assistance with some of the analyses for this article.


References



Andrade, H. G. (2000). Using rubrics to promote thinking and learning. Educational Leadership, 57, 13-18.

Andrade, H. G. (2001). The effects of instructional rubrics on learning to write. Current Issues in Education [On-line], 4. Retrieved from: http://cie.ed.asu.edu/volume4/number4

Andrade, H. G. & Boulay, B. A. (2003). Role of rubric-referenced self-assessment in learning to write. The Journal of Educational Research, 97, 21-34.

Andrade, H. G. & Du, Y. (2005). Student perspectives on rubric-referenced assessment. Practical Assessment, Research, and Evaluation, 10, 1-11.

Andrade, H. G., Wang, X., Du, Y., & Akawi, R. L. (2009). Rubric-referenced self-assessment and self-efficacy for writing. The Journal of Educational Research, 102, 287-301.

Arter, J. & Chappuis, J. (2007). Creating and recognizing quality rubrics. Upper Saddle River, NJ: Pearson Education, Inc.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall, Inc.

Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior (pp. 71-81). San Diego, CA: Academic Press.

Broad, B. (2000). Pulling your hair out: Crises of standardization in communal writing assessment. Research in the Teaching of English, 35, 213-260.

Broad, B. (2003). What we really value: Beyond rubrics in teaching and assessing writing. Logan, UT: Utah State University Press.

Brookhart, S. M. (2004). Assessment theory for college classrooms. New Directions for Teaching and Learning, 2004, 5-14.

Covill, A. E. (2010a, May). Literacy practices, beliefs, and attitudes of college students. Paper presented as part of roundtable discussion conducted at the annual meeting of the American Educational Research Association, Denver, CO.

Covill, A. E. (2010b). Comparing peer review and self-review as ways to improve college students' writing. Journal of Literacy Research, 42, 199-226.

Deane, P., Odendahl, N., Quinlan, T., Fowles, M., Welsh, C., & Bivens-Tatum, J. (2008). Cognitive models of writing: Writing proficiency as a complex integrated skill. Princeton, NJ: Educational Testing Service.

Flower, L. (1994). The construction of negotiated meaning: A social cognitive theory of writing. Carbondale, IL: Southern Illinois University Press.

Flower, L., Hayes, J. R., Carey, L., Schriver, K., & Stratman, J. (1986). Detection, diagnosis, and the strategies of revision. College Composition and Communication, 37, 16-55.

Freedman, S. W. (1985). Introduction: Acquiring written language. In S. W. Freedman (Ed.), The acquisition of written language: Response and revision (pp. x-xv). Norwood, NJ: Ablex Publishing Corporation.

Hayes, J. R. (2000). A new framework for understanding cognition and affect in writing. In R. Indrisano & J. R. Squire (Eds.), Perspectives on writing: Research, theory, and practice (pp. 6-44). Newark, DE: International Reading Association.

Hayes, J. R., & Flower, L. S. (1980). Identifying the organization of writing processes. In L. Gregg & E. R. Steinberg (Eds.), Cognitive processes in writing (pp. 3-30). Hillsdale, NJ: Lawrence Erlbaum Associates.

Hidi, S. & Boscolo, P. (2006). Motivation and writing. In C. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 144-157). New York, NY: Guilford Press.

Holliway, D. R., & McCutchen, D. (2004). Audience perspective in young writers' composing and revising: Reading as the reader. In L. Allal, L. Chanquoy, & P. Largy (Eds.), Revision: Cognitive and instructional processes (pp. 87-101). Boston, MA: Kluwer Academic Publishers.

Huot, B. (1990). Reliability, validity, and holistic scoring: What we know and what we need to know. College Composition and Communication, 41, 201-213.

Huot, B., O'Neill, P., & Moore, C. (2010). A usable past for writing assessment. College English, 72, 495-517.

Klassen, R. & Welton, C. (2009). Self-efficacy and procrastination in the writing of students with learning disabilities. In G. A. Troia (Ed.), Instruction and assessment for struggling writers: Evidence-based practices (pp. 51-74). New York, NY: Guilford Press.

MacArthur, C. A. (2007). Best practices in teaching evaluation and revision. In S. Graham, C. A. MacArthur, & J. Fitzgerald (Eds.), Best practices in writing instruction (pp. 141-162). New York, NY: Guilford Press.

McCarthy, P., Meier, S., & Rinderer, R. (1985). Self-efficacy and writing: A different view of self-evaluation. College Composition and Communication, 36, 465-470.

McCutchen, D. (2006). Cognitive factors in the development of children's writing. In C. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 115-134). New York, NY: Guilford Press.

Morozov, A. (2011). Student attitudes toward the assessment criteria in writing-intensive college courses. Assessing Writing, 16, 6-31.

Newell, G. E. (2006). Writing to learn: How alternative theories of school writing account for student performance. In C. MacArthur, S. Graham, & J. Fitzgerald (Eds.), Handbook of writing research (pp. 235-247). New York, NY: Guilford Press.

Pajares, F. (2003). Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading and Writing Quarterly, 19, 139-158.

Pajares, F., Hartley, J., & Valiante, G. (2001). Response format in writing self-efficacy assessment: Greater discrimination increases prediction. Measurement and Evaluation in Counseling and Development, 33, 214-221.

Pajares, F. & Valiante, G. (1997). Influence of self-efficacy on elementary students' writing. The Journal of Educational Research, 90, 353-360.

Paris, S. G. & Paris, A. H. (2001). Classroom applications of research on self-regulated learning. Educational Psychologist, 36, 89-101.

Popham, W. J. (1997). What's wrong--and what's right--with rubrics. Educational Leadership, 55, 72-75.

Quinlan, A. M. (2006). A complete guide to rubrics: Assessment made easy for teachers, K-college. Lanham, MD: Rowman & Littlefield Education.

Rawson, R. E., Quinlan, K. M., Cooper, B. J., Fewtrell, C., & Matlow, J. R. (2005). Writing-skills development in the health professions. Teaching and Learning in Medicine, 17, 233-239.

Schirmer, B. R. & Bailey, J. (2000). Writing assessment rubric. Teaching Exceptional Children, 33, 52-58.

Shell, D. F., Murphy, C. C., & Bruning, R. H. (1989). Self-efficacy and outcome expectancy mechanisms in reading and writing achievement. Journal of Educational Psychology, 81, 91-100.

Vacca, R. T. & Vacca, J. L. (2000). Writing across the curriculum. In R. Indrisano & J. R. Squire (Eds.), Perspectives on writing: Research, theory, and practice (pp. 214-232). Newark, DE: International Reading Association.

Whitcomb, R. (1999). Writing rubrics for the music classroom. Music Educators Journal, 85, 26-33.

White, E. M. (1988). Teaching and assessing writing. San Francisco, CA: Jossey-Bass.


Appendix A

Early Child Development
Paper Guidelines

The purpose of this 5-page paper is to use what you have learned about child development to analyze why/how a particular children's book is age-appropriate.

You will locate a book (at the library or at a bookstore) that has been written for the age group you have been assigned to explore. The intended age range for a book may be noted on the back cover. If the age range is not given, you may consult the bookseller, a librarian, or on-line sources. You do not need to have your book approved by me.

Review information in the textbook and in your notes that applies to the age group you have been assigned. Think about what you have learned about cognitive, motor, personality, emotional, and language development. Consider what we know from theories and research. In your paper, you will need to explain IN YOUR OWN WORDS* the relevant information about development, and apply that to your analysis of why/how the book is age appropriate. In the children's book, consider the language or story, the illustrations, and (if relevant) the book construction (cardboard, plastic, pop-ups, etc.). It is hoped that writing this paper will lead you to deepen your understanding of the research and theories of development covered in this class.

Remember, the goal is to show what you know about development, and demonstrate your analytical ability as you analyze the contents of the children's book. Your goal is to explain why the book is suitable for the age of the intended reader. Some students get off track analyzing how the characters in the book act appropriately for their age. You should be thinking about the developmental characteristics of the reader, not the developmental level of the characters in the book.

You will need to hand in the children's book with your paper. If you are using a library book, it is your responsibility to renew the book, if necessary. The paper and book will be returned to you.

If you use a source of information other than the textbook and the children's book you are analyzing, make sure you cite that source on a separate bibliographic page at the end of your paper.

Avoid typos, slang, and an informal style.

*Remember, if you take words from the textbook (or other source) and include those words in your paper without using quotation marks, that is plagiarism and you will get a zero on your paper. Saying you did not understand what plagiarism is will not suffice. See me, and/or the information handed out the first day (with the syllabus), for more information on plagiarism.


Appendix B

Self-Assessment Tool for Required Paper
(Criteria in bold are weighted most heavily.)

"A" paper"B" paper"C-D" paper"D-E" paper
Choice of bookAge appropriate, and an appropriate choice for conducting an in-depth analysisAge appropriate, but book choice allows for only a limited analysisBook is marginally age appropriate, and allows for only a limited analysisBook is not age appropriate, and/or it offers no real opportunity for analysis
Understanding of course materialPaper reflects a good understanding of theories and/or research covered in this class Paper reflects some understanding of theories and/or research covered in this classPaper reflects a misunderstanding of theories and/or research covered in this classPaper includes little or no information covered in this class
Quality of analysisAnalysis of the content of the children's book is thorough and relevant to course material Analysis of the content of the children's book is somewhat thorough and relevant to course material Analysis of the content of the children's book is minimal and/or connections to course material is minimalNo real analysis is reflected in the paper
OrganizationThere is a logic to the order of the main ideas in the paper Order of the main ideas reads like a listingOrdering of the main ideas makes the paper somewhat hard to followLack of organization makes the paper quite confusing and hard to follow
Sentence clarityAll of the sentences are clearly writtenOne or 2 sentences are unclearThree to 5 sentences are unintelligibleMore than 5 sentences are unintelligible
Voice and tonePaper has a formal tone. Avoids use of slang phrases, and contractions (e.g., uses "does not," rather than "doesn't"). Does not include discussion of personal opinions and experiencesThere are 1-2 violations of appropriate voice and toneThere are 2-4 violations of appropriate voice and toneThere are more than 4 violations of voice and tone
Paper componentsPaper includes an introductory paragraph and a concluding paragraph that summarizes the analysisPaper has an introduction and conclusion, but one of the two is too detailed or uninformativePaper is missing either an introduction or a conclusionPaper has no introduction and no conclusion
ParagraphingParagraphs are used correctly throughout the paper: one main idea with supporting ideasTwo or 3 paragraphs have two main ideas, no main idea, or no supporting ideasMore than 3 paragraphs have two or more main ideas or no main ideasParagraphs are rarely or never used in the paper
Mechanics (grammar, punctuation, spelling, word choice)The paper has 3 or fewer mechanical errorsThe paper has more than 3 errors, but they are not a huge distractionThe paper has so many errors that it is hard to focus on the meaning in the paperThere are significant typos and grammatical problems that make the paper difficult to understand
Spacing Left and right margins are the standard size and entire paper is double-spaced. There are no extra spaces between paragraphs or at page breaksThere are 1-2 two spacing errorsThere are 3-4 spacing errorsThere are more than 4 spacing errors
Page number requirementPaper is 5 or more full pagesPaper is 4-5 pagesPaper is 3-4 pagesPaper is 3 pages or less



Appendix C

Self-Assessment Tool for Required Paper
(Criteria in bold are weighted most heavily.)

"A" paper"B" paper"C-D" paper"D-E" paper
Choice of bookAge appropriate, and an appropriate choice for conducting an in-depth analysisAge appropriate, but book choice allows for only a limited analysisBook is marginally age appropriate, and allows for only a limited analysisBook is not age appropriate, and/or it offers no real opportunity for analysis
Understanding of course materialPaper reflects a good understanding of theories and/or research covered in this class Paper reflects some understanding of theories and/or research covered in this classPaper reflects a misunderstanding of theories and/or research covered in this classPaper includes little or no information covered in this class
Quality of analysisAnalysis of the content of the children's book is thorough and relevant to course material Analysis of the content of the children's book is somewhat thorough and relevant to course material Analysis of the content of the children's book is minimal and/or connections to course material is minimalNo real analysis is reflected in the paper
Overall clarity of writing/ organizationPaper is clearly written and is logically organizedPaper is clearly written but reads like a listingThere are problems with clarity and organizationThe paper is quite confusing and hard to follow
Mechanics (grammar, punctuation, spelling, word choice)The paper has few or no errorsThe paper has more than a few errors, but they are not a huge distractionThe paper has so many errors that it is hard to focus on the meaning in the paperThere are significant typos and grammatical problems that make the paper difficult to understand



Appendix D

Writing Beliefs Survey
Instructions
Think about the paper you are required to write for this class (the analysis of a children's book). How confident are you that you will accomplish each of the following tasks for this paper? Please rate your confidence level for each task on a scale of 0 (cannot do it) to 100 (completely sure I can do it). You may use any number from 0 through 100.

Cannot do this          Moderately sure I can do this          Completely sure I can do this
0-------10-------20-------30-------40-------50-------60-------70-------80-------90-------100

_______ 1. Write a clear, focused paper that stays on topic
_______ 2. Use details to support my ideas
_______ 3. Write a well-organized paper with a good beginning, developed middle, and meaningful ending
_______ 4. Correctly use paragraph format in the paper
_______ 5. Write with an appropriate voice and tone
_______ 6. Use words that are effective in the paper
_______ 7. Write with concise, clear sentences that "flow" together in the paper
_______ 8. Use correct grammar in the paper
_______ 9. Correctly spell all words in the paper
_______ 10. Correctly use punctuation in the paper
_______ 11. Write the required number of pages
_______ 12. Write a paper good enough to earn a high grade


Appendix E

Use of Self-Assessment in Writing
Circle the answer below each statement or question that best reflects your thinking.
1. The self-assessment I had to complete for the rough draft of my paper helped me to write a better rough draft.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

2. The self-assessment I had to complete for the rough draft of my paper helped me to write a better final draft.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

3. The self-assessment I had to complete for the final draft of my paper helped me to write a better final draft.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

4. In general, the kind of self-assessment I had to do in this class might be helpful to do for other papers in other classes.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

5. Having to assess my own writing made me work harder on the paper than I otherwise would have.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

6. Having to assess my own writing made me more aware of what I needed to do to revise my rough draft of the paper.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

7. Having to assess my own writing made me more aware of what I needed to do to write a good paper.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

8. Having to assess my own writing forced me to set specific goals for my writing of this paper.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

9. Having to assess my own writing reduced my anxiety about successfully writing the paper.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

10. Think about your high school and college experiences. How often have your teachers given you a rubric to use when you have a writing assignment?
Never Rarely Sometimes Usually Always

The following questions should be answered only by students who had to use a rubric (list of criteria with descriptions) for the writing assignment in this class. (If you were asked to list 3 of the strongest and weakest aspects of your paper, you do not need to continue answering questions on this survey.)

11. I referred to the rubric when I was making plans to write the paper.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

12. I referred to the rubric when I was actually writing the rough draft of this paper.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

13. I referred to the rubric when I had to make my rough draft into a final draft (in other words, when I was revising).
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree

14. By the time I turned in my final draft, I had read the entire rubric.
Strongly Disagree Disagree Maybe/Not Sure Agree Strongly Agree



Appendix F

"Use of Self-Assessment in Writing" Survey Results (Collapsing Across Conditions)

For each survey item below, the mean response and standard deviation are given first, followed by the percentage of participants giving each response.

1. The self-assessment I had to complete for the rough draft of my paper helped me to write a better rough draft.
Mean (SD): 3.46 (.99). Responses: SD=4, D=14, Maybe/Not Sure=25, A=46, SA=11

2. The self-assessment I had to complete for the rough draft of my paper helped me to write a better final draft.
Mean (SD): 3.77 (.99). Responses: SD=2, D=11, Maybe/Not Sure=20, A=45, SA=23

3. The self-assessment I had to complete for the final draft of my paper helped me to write a better final draft.
Mean (SD): 3.36 (.98). Responses: SD=0, D=27, Maybe/Not Sure=20, A=45, SA=9

4. In general, the kind of self-assessment I had to do in this class might be helpful to do for other papers in other classes.
Mean (SD): 3.79 (.89). Responses: SD=0, D=11, Maybe/Not Sure=20, A=50, SA=20

5. Having to assess my own writing made me work harder on the paper than I otherwise would have.
Mean (SD): 3.70 (.85). Responses: SD=0, D=11, Maybe/Not Sure=23, A=52, SA=14

6. Having to assess my own writing made me more aware of what I needed to do to revise my rough draft of my paper.
Mean (SD): 4.04 (.76). Responses: SD=0, D=5, Maybe/Not Sure=11, A=59, SA=25

7. Having to assess my own writing made me more aware of what I needed to do to write a good paper.
Mean (SD): 4.05 (.72). Responses: SD=0, D=4, Maybe/Not Sure=12, A=59, SA=25

8. Having to assess my own writing forced me to set specific goals for my writing of this paper.
Mean (SD): 3.84 (.85). Responses: SD=2, D=7, Maybe/Not Sure=13, A=63, SA=16

9. Having to assess my own writing reduced my anxiety about successfully writing the paper.
Mean (SD): 3.18 (.90). Responses: SD=2, D=19, Maybe/Not Sure=45, A=27, SA=7

10. Think about your high school and college experiences. How often have your teachers given you a rubric to use when you have a writing assignment?
Mean (SD): 3.66 (.96). Responses: Never=2, Rarely=11, Sometimes=25, Usually=45, Always=18

11. I referred to the rubric when I was making plans to write the paper.
Mean (SD): 3.72 (1.00). Responses: SD=0, D=22, A=61, SA=17

12. I referred to the rubric when I was actually writing the rough draft of this paper.
Mean (SD): 3.81 (1.06). Responses: SD=0, D=22, A=53, SA=25

13. I referred to the rubric when I had to make my rough draft into a final draft (in other words, when I was revising).
Mean (SD): 4.28 (.74). Responses: SD=0, D=6, A=56, SA=39

14. By the time I turned in my final draft, I had read the entire rubric.
Mean (SD): 4.58 (.50). Responses: SD=0, D=0, A=42, SA=58

Note. For items 1-9 (n=56), Strongly Disagree=1, Disagree=2, Maybe/Not Sure=3, Agree=4, Strongly Agree=5. For item 10 (n=56), Never=1, Rarely=2, Sometimes=3, Usually=4, and Always=5. For items 11-14 (n=36), Strongly Disagree=1, Disagree=2, Agree=4, and Strongly Agree=5. Items 11-14 were only offered to participants in two of the three assessment conditions: The Long Rubric and Short Rubric conditions. Rounding caused the percentage total for some items to be greater than 100%.