The Empirical Development of an Instrument to Measure Writerly Self-Efficacy in Writing Centers
by Katherine M. Schmidt and Joel E. Alexander
Post-secondary writing centers have struggled to produce substantial, credible, and sustainable evidence of their impact in the educational environment. The objective of this study was to develop a college-level writing self-efficacy scale that can be used across repeated sessions in a writing center, as self-efficacy has been identified as an important construct underlying successful writing and cognitive development. A 20-item instrument, the Post-Secondary Writerly Self-Efficacy Scale (PSWSES), was developed to evaluate writerly self-efficacy, and 505 university students participated in the study. Results indicate that the PSWSES has high internal consistency and reliability across items; construct validity was supported through a correlation between tutor perceptions of client writerly self-efficacy and client self-ratings. Factor analysis revealed three factors: local and global writing process knowledge, physical reaction, and time/effort. Additionally, across repeated sessions, clients' PSWSES scores showed the expected increase in overall writerly self-efficacy. Ultimately, this study offers a new paradigm for conceptualizing the daily work in which writing centers engage, and the PSWSES offers writing centers a meaningful quantitative program assessment avenue by (1) redirecting focus from actual competence indicators to perceived competence development and (2) allowing for replication, causality, and sustainability for program improvement.
Key Words: Self-efficacy, Writing, Cognition, Perceived Competence Development, Writing Center, Assessment, Student Learning, Post Secondary
Empirical research is the next frontier in writing center work. Quantitative methods provide reliable, replicable, and cost-effective means of assessing writing center program objectives and actual practices. Additionally, quantitative work complements case studies, ethnographies, and phenomenological narratives well and encourages cross-disciplinary research partnerships.
With the goal of creating an instrument to measure writing-support services, the writing center partnered with the neurocognitive laboratory on campus. The relationship engendered a paradigm shift for conceptualizing program improvement and the possibilities for replicable and sustainable measurement methods in the writing center. Additionally, this article recognizes the need for writing centers to view their work with writers differently: namely, there needs to be a shift from overt performance to internal constructs as writing centers track the progress of the developing writer. One important internal construct that needs further exploration is writerly self-efficacy.
Executive Skills and Cognitive Factors
While there has been much debate over the last three decades regarding the mission of writing centers (North, 1984; Gardner & Ramsey, 2005; McKinney, 2005), the fields of educational psychology and cognitive science offer a new lens through which to see the work writing centers do. Researchers across disciplines (e.g., Bandura, 2001; Pajares, 1996, 1997, 2003, 2007; Zimmerman, 2000) argue that performance, or overt behavior, is not a sufficient measure of a person's academic capabilities; instead, executive skills and cognitive factors that internally define the student-writer's abilities serve as the foundation for external factors, like performance. Thus, a primary factor that enables writing centers to forge better writing and better writers is this: Writing centers are increasing student-writers' beliefs about what and how they can perform as writers, which is introduced in this study as writerly self-efficacy.
Self-efficacy is an individual's belief in her capability to perform the behaviors necessary to complete a task successfully (Bandura, 2001). In contrast to self-esteem, self-efficacy can be subject- or performance-area specific (McLeod, 1987). For example, an athlete would have targeted self-efficacy related to her specific sport, which may increase her self-esteem as an athlete in general. Similarly, a student could have targeted self-efficacy related to writing, which may increase her self-esteem as a student in general. While self-efficacy does, in part, influence performance, the change in performance may be delayed or affected by other internal construct variables (e.g., motivation, emotional disposition, life stress) canceling the self-efficacy effect.
The key here is that self-efficacy--whether in a sport or in writing--is an internalized construct which can be learned and developed over time through a synthesis of consistent self-evaluation, coaching, and repeated practice. Like the athlete who needs to possess a forward-thinking, growth-related, can-do perspective on future performance, the student-writer needs the same: Broadening her perspective beyond performance evaluation to include more positive beliefs about her future writing and academic performance will affect her decisions, objectives, affect, assiduity, endurance, and perseverance as a student (Gist & Mitchell, 1992, as cited in Lavelle, 2009).
The writing center is a prime example of a site where academic coaching, self-evaluation, and repeated practice in action are inherent. Directly identifying, challenging, and altering inaccurate self-judgments as they relate to writing are essential to the adaptive functioning and academic success of the majority of students that writing centers serve. A student's perceptions of her own competence, as opposed to her actual competence, often more accurately predicts her success in school contexts (Hackett & Betz, 1989; Pajares, 2003). Thus, writing centers can redirect assessment focus from actual competence indicators to perceived competence development.
Mission Determines Assessment
One unintended effect of the common performance-driven mission of writing centers is its ability to minimize the writing center's assessment of services and outcomes (Thompson, 2006). Thus, writing centers have struggled to produce substantial and credible evidence of their impact in the educational environment (Lerner, 1997). Data gathering methods have typically included the compilation of client demographics, grades, usage, and feedback and the qualitative and quantitative assessment of programs (Bell, 2000; Thompson, 2006).
While qualitative methods are a hallmark of writing center research and often include self-reported client satisfaction and confidence levels, they are difficult to quantify. Quantitative methods have the power to provide the most convincing data-based evidence of educational impact and the most productive data to inform continual program improvement (Lerner, 1997); however, quantitative methods have primarily focused on demographics and the relationship between writing centers and performance. The improvement of writing ability and/or skills, as evidenced by progress between drafts or by competent to exceptional course grades, is often the result of multiple contributors. Thus, the attempt to assess a writing center's direct contribution to behavioral constructs, such as performance changes, is arguably impossible without the institution of unnatural controls. Even in this case, the endeavor would be a colossal one and would inherently limit replication, causality, and sustainability, which are key factors in professionalizing a discipline.
An alternative to this dilemma resides within program objectives. When a writing center's mission becomes one that is exclusive to the writing center, more direct assessment becomes possible. While a writing center makes contributions to a student-writer's performance changes, a writing center, more directly, maximizes the individual and professional development of a student-writer by offering personalized learning opportunities to advance writerly self-efficacy over time. Self-efficacy offers a quantitative assessment avenue that is replicable, causal, and sustainable in writing centers and, as a longitudinal measure, is exclusive to writing centers.
A number of scales assessing writing self-efficacy have been developed and used across the last few decades. For the purpose of this article, the most commonly cited scales are reviewed with emphasis on their limitations as they pertain to the work of writing centers. As described by Bandura (2001), self-efficacy development is influenced by four main sources: Mastery experiences, vicarious learning, reduction in stress reaction and negative emotions, and social persuasion. Three of these four sources can be directly addressed with regard to the nature of existing writing self-efficacy scales: Mastery experiences, the reduction in stress reaction and negative emotions, and social persuasion. Several scales (McCarthy et al., 1985; Shell et al., 1989; Lavelle, 1993; Shell et al., 1995; Pajares, 2007) focus on mastery experience through a demonstration of skills and/or self-perception of skills. Examples include the following:
• "Can you write sentences in which the subjects and verbs are in agreement?" (McCarthy et al., 1985);
• "Correctly use plurals, verb tenses, and suffixes" (Shell et al., 1989; composition skills subscale);
• "I can write simple, compound and complex sentences" (Lavelle, 1993);
• "Correctly punctuate a sentence" (Shell et al., 1995; task subscale);
• "Correctly spell all words in a one-page story or composition" (Pajares, 2007).
With regard to reduction in stress reaction and negative emotions, several scales (Daly & Miller, 1975; Lavelle, 1993; White & Bruning, 2005; Piazza & Siebert, 2008) include items that cue the writer into negative thought patterns and negative self-evaluation. Examples include the following:
• "I avoid writing" (Daly & Miller, 1975);
• "My writing rarely expresses what I think" (Lavelle, 1993);
• "One of my writing goals is to have to make as few changes as possible" (White & Bruning, 2005);
• "I am not a good writer" (Piazza & Siebert, 2008).
In addition, another concern in existing scales involves social persuasion. White & Bruning (2005) offer several items that present a discipline-specific writing value as a writing-across-the-curriculum value. For example, "I try to write in a way that is as distinctive as possible." While English faculty tend to value a subjective "voice" in writing, social and natural sciences faculty value an objective written register. Problematically, the scale's statement implies that subjective voice is a universal writing objective. Shell et al. (1989) and Shell et al. (1995) offer a genre-specific listing of professional, academic, and personal products. Examples from both scales include the following:
• You can "Compose a will or other legal document";
• You can "Author a 400 page novel" (Shell et al., 1989; task subscale);
• You can "Write a story [on] what you did on summer vacation" (Shell et al., 1995; writing subscale).
These statements focus on products without regard for process, which does not allow for an adequate assessment of writing self-efficacy.
Need and Development of a New Scale
Existing scales and their items are not suitably designed for transference into college writing centers: the scales are often limited in focus, intended for one-time and/or short-term evaluation, and linked to products. Additionally, items tend to be unclearly delineated as to whether they assess a past/current disposition (e.g., I avoid writing) or a future ability belief (e.g., I can articulate my strengths and challenges as a writer).
Writing centers are in a unique position with regard to their abilities to interact with college writers over periods of time. Writing centers provide services year round, and they work with individual student-writers across disciplines and through the matriculation of their college experiences. Writing centers are an appropriate site and target for tracking the general development of writing self-efficacy. Therefore, the goal of a new scale should focus on the unique feature of writing centers: The opportunity for longitudinal writing development tracking and assessment. Additionally, a new scale for writing centers needs to allow for replication, causality, and sustainability.
Writerly Self-Efficacy Scale For Writing Centers
Over the course of ten weeks, peer consultants and professional staff in the writing center participated in focus groups discussing the mission statement and objectives of the unit, which ultimately resulted in the identification of five features of writerly processes that would direct the development of the scale: The ability to read and respond like a writer; rhetorical awareness/writing to communicate/research integration; awareness of personal writing strengths and challenges; the management of personal writing processes; and the ability to be affected by modeling. Professional writing center staff then collaborated with Psychology faculty to create a comprehensive list of items that bridged the four sources of self-efficacy and the five features of writerly processes. In-house focus groups were reconvened to concentrate and articulate items for use in the scale.
Given the theoretical and empirical research on both self-efficacy and writing, the Post-Secondary Writerly Self-Efficacy Scale (PSWSES) includes a total of 20 items, each of which was designed by directly applying at least one of the four sources of self-efficacy--mastery experiences, vicarious learning, reduction in stress reaction and negative emotions, and social persuasion (Bandura, 2001)--to writerly processes. While existing scales tend to be lengthy, PSWSES is concise to promote repeated participation and more accurate reflection.
PSWSES focuses more on self-efficacy beliefs related to writing than on confidence in writing skills because the focus is, and should remain, on assessing the internal construct and not the skill, which varies uncontrollably from assignment to assignment and from discipline to discipline. PSWSES assesses writerly factors, as opposed to writing factors. The suffix -ly here functions adjectivally, meaning "-like," and the word writerly means of, relating to, or characteristic of a writer. The word writing, on the other hand, can be used as a noun referring to a written product or as a verb referring to the process of inscribing, communicating, composing, and expressing. Thus, the word writing ultimately limits the nature of evaluation to behavioral manifestations, whereas the word writerly places cognitive factors in the superordinate position.
Because self-efficacy can be both a current and/or future belief disposition, the PSWSES phrases all items in "I can" statements, which requires participants to evaluate their beliefs in their future abilities as writers as opposed to evaluating or demonstrating their current skill sets. Additionally, unlike many questions/statements in existing instruments, none of the PSWSES statements require negative self-evaluation scores. PSWSES utilizes a writing across the curriculum/writing in the disciplines approach (i.e., no discipline is valued over another). The instrument utilizes a 0-100 scale in order to delineate the self-ratings more accurately by avoiding limitations of range (Bandura, 2001).
Based on the proposition for a scale that utilized positive self-ratings with a writerly focus, it was hypothesized that the PSWSES would have high internal reliability and consistency. Additionally, it was hypothesized that clients would show a greater increase in overall writerly self-efficacy ratings across multiple tutoring sessions on the PSWSES than control subjects who were not engaged in tutoring with the writing center. Moreover, it was hypothesized that client writerly self-efficacy ratings would correlate positively with tutor perceptions of client writerly self-efficacy.
A total of 505 university students participated in the study as clients of the university Writing Center. Participants are best described as representative of the range of writing center clientele: Class standings span freshman to graduate levels; their written assignments come from various levels and across disciplines; and visits are self-sponsored, required, or extra credit. There exists only one incontrovertible common denominator among the 505 participants: They used writing center services at least three times during a period of ten weeks. An additional 39 subjects from the campus who were not clients of the writing center for a period of ten weeks served as control subjects for a three-session analysis. The control group consisted of students who had previously completed the general education writing requirements and were currently enrolled in courses that were not writing intensive. Both experimental and control groups were intentionally allowed to have high variability in their levels of current exposure to writing in the academic setting, allowing for a more natural assessment of writing center clients and non-clients. The study was approved by an Institutional Review Board for human subject research. Participants' responses were coded and confidential.
At the conclusion of writing center tutoring sessions, student-writer clients were asked to complete the PSWSES. Informed consent was verbal, and it was clearly indicated that completion of the scale was voluntary. In addition, tutors evaluated 180 of these clients on a scale of 0-100 as to their perceptions of the clients' overall writerly self-efficacy. Clients completing at least three writing center sessions in one term also served as a subset for a three-session analysis in comparison with a control group of students who were not clients of the writing center. The control group was given the PSWSES three times within one term (i.e., first, fifth, and tenth weeks). Client and control subject overall writerly self-efficacy ratings were obtained by averaging items 1-19 (item 20 was not included because it referred to interaction with a tutor, which the control group did not experience).
Writerly self-efficacy was evaluated using the 20-item PSWSES, which was developed following the guidelines on self-efficacy scale construction and response format suggested by Bandura (2001). All items were positively oriented using "I can" statements. All items utilized a 0-100 response format regarding agreement to the "I can" statements. Along with directions, an example item was provided to model the response format for the client. Each participant provided a nine-digit student identification number so that writing center staff could access the data and monitor progress over time. Clients were informed that they could request the results of their tracked development from the writing center.
The mean overall PSWSES ratings for clients and control subjects are displayed in Figure 1. Individual overall PSWSES ratings for clients and for tutors' perceptions of clients are displayed in Figure 2. All analyses of variance for repeated measures factors employed Greenhouse-Geisser corrections to the degrees of freedom. Only probability values for the corrected d.f. are reported. A posteriori analyses were carried out using paired t-tests for within-group effects, with Holm's sequential Bonferroni method controlling for Type I errors across multiple t-tests. The significance level for all a posteriori tests was set at p<.05. Only effects related to the test condition are reported below.
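The Holm sequential Bonferroni procedure can be illustrated with a short sketch (the p-values below are hypothetical, not the study's results): the p-values from the family of paired t-tests are ordered from smallest to largest, and each is tested in turn against an increasingly lenient threshold, stopping at the first failure.

```python
# Sketch of Holm's sequential Bonferroni correction (illustrative only;
# the p-values used below are hypothetical, not the study's data).

def holm_bonferroni(p_values, alpha=0.05):
    """Return booleans marking which null hypotheses are rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # indices, smallest p first
    reject = [False] * m
    for rank, idx in enumerate(order):
        # Step-down threshold: alpha / (m - rank)
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # all remaining (larger) p-values also fail
    return reject

print(holm_bonferroni([0.01, 0.04, 0.03]))  # → [True, False, False]
```

In practice, a library routine such as statsmodels' multipletests(method="holm") performs the same correction.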
Reliability and consistency
Cronbach's alpha for the scale was .931 and the split-half reliability was .864 (Guttman split-half coefficient = .927), showing high internal consistency and reliability across items. Overall, control group subjects rated their writerly self-efficacy higher than the clients, F(1,98)=10.14, p<.01, ηp2=.09. Additional test-retest repeated measures analysis showed that clients completing three tutoring sessions showed a significant, consistent increase in overall writerly self-efficacy rating, whereas the control group did not show a significant change across three PSWSES evaluations, F(2,196)=6.16, p<.01, ηp2=.06.
The 20 items in the PSWSES were analyzed using principal components analysis and varimax rotation with Kaiser normalization (i.e., all 20 items were used, and factors were identified that emerged from these 20 items). The analysis was conducted in order to identify factors that contribute to the model, but also to confirm previously identified factors desired to be represented by the instrument. The analysis revealed three factors with eigenvalues greater than one; together, these three factors explained 56% of the variance. The resulting factors and corresponding items were evaluated by a panel of writing experts for identification. The primary factor, which had an eigenvalue of 5.2 and explained 26% of the variance, was identified as local and global writing process knowledge; eleven items were representative of this factor: items 1, 3, 4, 5, 6, 7, 9, 11, 12, 16, and 18 in Table 1. The second factor, which had an eigenvalue of 3.5 and explained 17.5% of the variance, was identified as physical reaction; seven items were representative of this factor: items 8, 10, 13, 14, 15, 17, and 19 in Table 1. The third factor, which had an eigenvalue of 2.6 and explained 12.5% of the variance, was identified as time/effort; five items were representative of this factor: items 2, 3, 10, 19, and 20 in Table 1. Some items loaded closely (i.e., < .1 difference) onto multiple factors (e.g., items 3, 10, 13, 15, 16, and 19 in Table 1).
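The extraction step described above (Kaiser's eigenvalue-greater-than-one criterion applied to the item correlation matrix) can be sketched as follows. The response matrix here is random placeholder data with the study's dimensions (505 × 20), so the retained-factor count and variance figures will not match the reported results.

```python
# Sketch of the factor-extraction step: eigendecomposition of the item
# correlation matrix with Kaiser's eigenvalue > 1 retention criterion.
# The response matrix is random placeholder data, NOT the PSWSES responses.
import numpy as np

rng = np.random.default_rng(0)
responses = rng.uniform(0, 100, size=(505, 20))   # 505 respondents x 20 items

corr = np.corrcoef(responses, rowvar=False)        # 20 x 20 correlation matrix
eigenvalues, loadings = np.linalg.eigh(corr)       # eigh returns ascending order
order = np.argsort(eigenvalues)[::-1]              # re-sort: largest first
eigenvalues = eigenvalues[order]
loadings = loadings[:, order]

retained = eigenvalues > 1.0                       # Kaiser criterion
explained = eigenvalues / eigenvalues.sum()        # proportion of variance
print(int(retained.sum()), "factors retained;",
      round(float(explained[retained].sum()) * 100, 1), "% of variance explained")
```

A varimax rotation of the retained loadings would then follow; in practice, a library routine such as scikit-learn's FactorAnalysis(rotation="varimax") handles extraction and rotation together.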
Construct validity was evaluated by correlating client and tutor ratings. There was a significant positive correlation between client overall writerly self-efficacy ratings and tutors' ratings of clients' writerly self-efficacy, r(178)=.503, p<.001, R2= .25.
The objective of the study was to develop a college-level writerly self-efficacy scale that can be used across repeated writing center sessions. While self-efficacy has been identified as an important construct underlying successful writing and cognitive development, previous scales have focused on behavioral factors more than cognitive factors. Additionally, many of the scales have been designed for elementary and secondary applications, as opposed to PSWSES, which is explicitly designed for writing center usage at the college level.
The results of the analysis of the instrument show that the scale has high internal consistency and reliability across items, which supports the hypothesis that the PSWSES would have internal reliability and consistency. Additionally, the significant positive correlation between client overall writerly self-efficacy ratings and tutors' ratings of clients' writerly self-efficacy confirms the hypothesis that client self-efficacy ratings would correlate positively with tutor perceptions of client self-efficacy.
Moreover, clients completing three tutoring sessions showed a significant, consistent increase in overall writerly self-efficacy rating, whereas the control group did not show a significant change across three PSWSES evaluations. These results were partially consistent with the hypothesis that clients would show a greater increase in overall writerly self-efficacy ratings across multiple sessions than control subjects; the unexpected result concerned the control subjects themselves: Those who were not engaged in tutoring with the writing center remained consistent over time. One would expect a natural increase in the writerly self-efficacy of college students as the term progresses, yet this was not the case. This finding underscores the need for writing center services for all students, from the developmental to the expert student-writer.
Additionally, the control group was composed of students who were not actively using the writing center; by definition, then, one would assume that, on average, they would have a higher starting point for writerly self-efficacy. It is important to note that a ceiling effect was not present for the control group, given that there was no overall significant difference in the standard error between the groups, nor was there any violation of homogeneity of variance; thus, participant responses were not attenuated by this appropriately higher starting point for their writerly self-efficacy.
With regard to the PSWSES, of particular interest are the results of the factor analysis, which produced three distinct factors: Local and global writing process knowledge, physical reaction, and time/effort. These factors were determined across multiple evaluators. In reference to the original five writerly processes-related goals that informed the creation of the instrument (the ability to read and respond like a writer; rhetorical awareness/writing to communicate/research integration; awareness of personal writing strengths and challenges; the management of personal writing processes; and the ability to be affected by modeling), the first factor, local and global writing process knowledge, fits well with all five goals. The second factor, physical reaction, relates well with the management of personal writing processes. The third factor, time/effort, fits well with the ability to be affected by modeling and the management of personal writing processes.
Of additional interest is the cross-loading of some specific items across factors and how they apply to the initial goals. Item 10, When I have a pressing deadline for a paper, I can manage my time efficiently, cross-loaded between factor 2 (.520) and factor 3 (.469); upon evaluation, it fits better with factor 3 (time/effort) as delineated by other items. Item 13, Once I have completed a draft, I can eliminate both small and large sections that are no longer necessary, loaded into factor 1 (.414) and factor 2 (.449); upon evaluation, it fits well with factor 1 (local and global writing process knowledge). Item 15, When writing papers for different courses (for example, Biology, English, and Philosophy classes), I can adjust my writing to meet the expectations of each discipline, loaded into factor 1 (.426) and factor 2 (.542); upon evaluation, it also fits well with factor 1. Other than these three items, the factor analysis grouped 16 items as expected, validating our starting goals in the development of the PSWSES. Item 19, I can invest a great deal of effort and time in writing a paper when I know the paper will not be graded, however, did not load strongly on any given factor, and its communality was low; thus, this item appears to be unsuitable for the scale. Upon reflection, a question regarding writing resources is absent; therefore, a future revision of the scale could replace item 19 with a writing-resource-oriented question, such as I can find and use resources that help me with my writing.
The three factors are important for several reasons. The first factor, local and global writing process knowledge, includes beliefs in writerly abilities with regard to simple trait analysis components, rhetorical awareness, planning and revising, reading like a writer, awareness of strengths and challenges, and recognizing the value in modeling. Unlike many existing scales, PSWSES places local skills as only one facet among the many that are required to develop the writerly self. The second factor, physical reaction, includes intrapersonal traits that are commonly engendered by the physical act of writing. Unlike many existing scales, the statements are positively oriented, which should be a cornerstone in the assessment of self-efficacy. The third factor, time/effort, includes management and motivation traits that promote writerly development. Similar to factor two, the statements are positively oriented; unlike some existing scales, factor three includes a statement with regard to the value of one-on-one writing assistance, which also promotes suitable usage in writing centers.
Writing centers are often the only academic support units on university campuses that explicitly work to promote writerly self-efficacy throughout the duration of a college student's career. Unlike many of the existing scales that are one-time/short-term and behaviorally oriented, PSWSES is meant to serve as a longitudinal tracking system to record the development of writerly beliefs. Indeed, the results of the analysis across sessions support the longitudinal utility of the instrument.
Ultimately, writing centers have struggled to produce reliable, replicable, and cost-effective measures for assessing the alignment of program objectives and actual practices. Unlike quantitative assessment measures that currently exist, the PSWSES redirects focus from actual competence indicators to perceived competence development and allows for replication, causality, and sustainability for program improvement.
The nature of PSWSES required our own writing center to reconceptualize the daily work in which we engage, and we have learned to look beyond the paper and the student's writing development to something much larger: The student-writer's beliefs about writerly success in the future. As a result of this study, we learned that our most important job is to cultivate the student-writer's awareness of processes and resources that will promote her writerly success in both college and beyond, and we are able to provide evidence of this cultivation to administration, faculty and staff, and the student-writers themselves.
We envision the PSWSES eventually becoming a subscale of a global writerly scale that incorporates other internal constructs, thereby continuing a redirection in approach from measuring overt performance (writing) as the main indicator to measuring the internal constructs (writerly) of the developing writer. Future writerly research should focus on measuring additional internal constructs.
The PSWSES is only a beginning. Quantitative work engendered by cross-disciplinary partnerships promises a future that positions writing centers as indispensable units within their campus communities. Writing centers possess the potential to provide evidence of service success in manners that are unique yet professionally recognizable, legitimate, and replicable.
Support: This study was supported by grants from the Western Foundation, Western Oregon University faculty development, and the Western Oregon University student technology fee committee.
Katherine M. Schmidt, Ph.D., is Writing Center Director and Associate Professor of English at Western Oregon University, and Joel Alexander, Ph.D., is Neurocognitive Laboratory Director and Professor of Psychology at Western Oregon University.
Manuscript correspondence to:
Katherine M. Schmidt, Ph.D.
Writing Center Director/Associate Professor of English
Western Oregon University
Monmouth, OR 97361
Office Telephone: 503-838-8234
Bandura, A. (2006). Guide for creating self-efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 307-337). Greenwich: Information Age Publishing.
Bell, J. (2000). When hard questions are asked: Evaluating writing centers. Writing Center Journal, 21, 7-28.
Daly, J., & Miller, M. (1975). The empirical development of an instrument to measure writing apprehension. Research in the Teaching of English, 9(3), 242-249.
Gardner, P. & Ramsey, W. (2005). The polyvalent mission of writing centers. The Writing Center Journal, 25(1), 29-42.
Hackett, G., & Betz, N. (1989). An exploration of the mathematics self-efficacy/mathematics performance correspondence. Journal for Research in Mathematics Education, 20, 261-273.
Lavelle, E. (1993). Development and validation of an inventory to assess processes in college composition. British Journal of Educational Psychology, 67(4), 475-482.
Lavelle, E. (2009). Writing through college: self-efficacy and instruction. In R. Beard, D. Myhill, J. Riley, & M. Nystrand (Eds.), The sage handbook of writing development (pp. 415-422). Los Angeles: Sage.
Lerner, N. (1997). Counting beans and making beans count. Writing Lab Newsletter, 22, 1-4.
McCarthy, P., Meier, S., & Rinderer, R. (1985). Self-efficacy and writing: A different view of self-evaluation. College Composition and Communication, 36, 465-471.
McKinney, J.G. (2005). Leaving home sweet home: Toward critical readings of writing center spaces. The Writing Center Journal, 25(2), 6-19.
McLeod, S. (1987). Some thoughts about feelings: The affective domain and the writing process. College Composition and Communication, 38, 426-435.
North, S. (1984). The idea of a writing center. College English, 46, 433-446.
Pajares, F. (1996). Self-efficacy beliefs in academic settings. Review of Educational Research, 66, 543-578.
Pajares, F. (1997). Current directions in self-efficacy research. In M. Maehr & P. R. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 1-49). Greenwich, CT: JAI Press.
Pajares, F. (2003). Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading & Writing Quarterly, 19, 139-158.
Pajares, F. (2007). Empirical properties of a scale to assess writing self-efficacy in school contexts. Measurement and Evaluation in Counseling and Development, 39, 239-249.
Pajares, F., Hartley, J., & Valiante, G. (2001). Response format in writing self-efficacy assessment: Greater discrimination increases prediction. Measurement and Evaluation in Counseling and Development, 33, 214-221.
Pajares, F., & Johnson, M. (1994). Confidence and competence in writing: The role of self-efficacy, outcome expectancy and apprehension. Research in the Teaching of English, 28, 313-331.
Pajares, F., Johnson, M., & Usher, E. (2007). Sources of writing self-efficacy beliefs of elementary, middle, and high school students. Research in the Teaching of English, 42, 104-120.
Pajares, F., & Schunk, D. (2001). Self beliefs and school success: Self-efficacy, self-concept, and school achievement. In J. Royer (Ed.), The impact of the cognitive revolution on educational psychology (pp. 165-198). Greenwich: Information Age.
Pajares, F., & Valiante, G. (1999). Grade level and gender differences in the writing of self-beliefs of middle school students. Contemporary Educational Psychology, 24, 390-405.
Piazza, C., & Siebert, C. (2008). Development and validation of a writing dispositions scale for elementary and middle school students. The Journal of Educational Research, 101, 275-285.
Shell, D., Colvin, C., & Bruning, R. (1995). Self-efficacy, attribution, and outcome expectancy mechanisms in reading and writing achievement: Grade-Level and achievement-level differences. Journal of Educational Psychology, 87, 386-399.
Shell, D., Murphy, C., & Bruning, R. (1989). Self-efficacy and outcome expectancy mechanisms in reading and writing achievement. Journal of Educational Psychology, 81, 91-100.
Thompson, I. (2006). Writing center assessment: Why and a little how. Writing Center Journal, 26, 33-61.
White, J., & Bruning, R. (2005). Implicit writing beliefs and their relation to writing quality. Contemporary Educational Psychology, 30, 166-189.
Zimmerman, B. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25, 82-91.
Table 1. Summary of Items and Factor Loadings for Varimax Orthogonal Three-Factor Solution for the PSWSES (N = 505).
| Item | Factor 1 | Factor 2 | Factor 3 | Communality |
|---|---|---|---|---|
| 1. I can identify incomplete, or fragment, sentences. | .777 | .111 | .077 | .621 |
| 2. I can invest a great deal of effort and time in writing a paper when I know the paper will earn a grade. | .329 | .074 | .702 | .607 |
| 3. I can articulate my strengths and challenges as a writer. | .516 | .200 | .475 | .532 |
| 4. I can find and incorporate appropriate evidence to support important points in my papers. | .602 | .237 | .402 | .580 |
| 5. I can be recognized by others as a strong writer. | .766 | .266 | .203 | .699 |
| 6. When I read a rough draft, I can identify gaps when they are present in the paper. | .770 | .312 | .110 | .702 |
| 7. I can maintain a sense of who my audience is as I am writing a paper. | .653 | .248 | .255 | .553 |
| 8. I can write a paper without feeling physical discomfort (e.g., headaches, stomach-aches, back-aches, insomnia, muscle tension, nausea, and/or crying). | .182 | .801 | .012 | .676 |
| 9. When I read drafts written by classmates, I can provide them with valuable feedback. | .594 | .476 | .130 | .597 |
| 10. When I have a pressing deadline for a paper, I can manage my time efficiently. | .217 | .520 | .469 | .538 |
| 11. I can attribute my success on writing projects to my writing abilities more than to luck or external forces. | .541 | .356 | .317 | .520 |
| 12. When a student who is similar to me receives praise and/or a good grade on a paper, I know I can write a paper worthy of praise and/or a good grade. | .501 | .329 | .391 | .511 |
| 13. Once I have completed a draft, I can eliminate both small and large sections that are no longer necessary. | .414 | .449 | .371 | .511 |
| 14. I can write a paper without experiencing overwhelming feelings of fear or distress. | .356 | .703 | .135 | .640 |
| 15. When writing papers for different courses (for example, Biology, English, and Philosophy classes), I can adjust my writing to meet the expectations of each discipline. | .426 | .542 | .283 | .556 |
| 16. I can map out the structure and main sections of an essay before writing the first draft. | .432 | .342 | .399 | .463 |
| 17. I can find ways to concentrate when I am writing, even when there are many distractions around me. | .205 | .671 | .249 | .554 |
| 18. I can find and correct my grammatical errors. | .672 | .315 | .033 | .552 |
| 19. I can invest a great deal of effort and time in writing a paper when I know the paper will not be graded. | .310 | .341 | .303 | .304 |
| 20. When I work with a writing tutor, I can learn new strategies that promote my development and success as a writer. | .057 | .121 | .714 | .528 |
| % of variance | 25.92 | 17.52 | 12.78 | |
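For readers interested in replicating this kind of analysis with their own writing center data, a three-factor solution with orthogonal varimax rotation of the sort reported in Table 1 can be computed with standard statistical software. The sketch below uses Python's scikit-learn on simulated stand-in responses; the sample size, item count, and response values are placeholders for illustration only, not the PSWSES data.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated stand-in data: 505 respondents x 20 items on a 0-100
# confidence scale. (Illustrative only -- not the PSWSES data set.)
rng = np.random.default_rng(42)
X = rng.integers(0, 101, size=(505, 20)).astype(float)

# Standardize each item so loadings fall on a correlation-like scale.
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Extract three factors and apply an orthogonal varimax rotation,
# the same rotation method used for the solution in Table 1.
fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)

loadings = fa.components_.T                   # 20 items x 3 factors
communalities = (loadings ** 2).sum(axis=1)   # shared variance per item

print(loadings.shape)          # one row of loadings per item
print(communalities.round(3))  # analogous to Table 1's final column
```

With real scale data, each item's largest loading identifies the factor on which it clusters, and the sum of squared loadings across factors gives the communality column reported above.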
Figure 1. The mean writerly self-efficacy scores (+1 S.E.) for control and client groups.
Figure 2. The individual self-ratings of writing center clients in relation to writing center tutor ratings of each client.