Understanding and Assessing Fluency

Let's cut through the buzz around fluency and review what reading fluency is, why it is essential to ensure that our students have sufficient fluency, how fluency should be assessed, and how to best provide fluency practice and support for our students. We'll start by defining fluency.

While the National Reading Panel's definition of fluency as the ability to read text with accuracy, appropriate rate, and good expression (NICHD, 2000) is widely accepted among fluency researchers, these experts continue to debate the more subtle aspects of fluency (Stecker, Roser, and Martinez, 1998; Wolf and Katzir-Cohen, 2001). However it is defined, this much is certain: Fluency is necessary, but not sufficient*, for understanding the meaning of text. When children read too slowly or haltingly, the text devolves into a broken string of words and/or phrases; it's a struggle just to remember what's been read, much less extract its meaning. So it's important that teachers determine if their students' fluency is at a level appropriate for their grade. If not, how should it be developed? If a student is appropriately fluent for her grade level, how does a teacher help maintain that student's fluency? And, how does a teacher make these determinations? This process begins with assessments of the component pieces of fluency: prosody, accuracy, and rate.

The exact role of expression and phrasing — or prosody — in fluency and comprehension has not yet been determined, but it certainly is one element that signifies whether or not a student is truly a fluent reader. To measure the quality of a student's reading prosody, some educators rely on the four-level scale first developed for the 1992 National Assessment of Educational Progress (NAEP) in reading (Daane, Campbell, Grigg, Goodman, and Oranje, 2005). This scale focuses on the level of skill a student demonstrates in phrasing and expression while reading aloud (see below). After listening to an individual student read aloud, the educator rates the student's reading according to the level that best describes the student's overall performance.

National Assessment of Educational Progress Fluency Scale

Level 4 (Fluent): Reads primarily in larger, meaningful phrase groups. Although some regressions, repetitions, and deviations from text may be present, these do not appear to detract from the overall structure of the story. Preservation of the author's syntax is consistent. Some or most of the story is read with expressive interpretation.

Level 3 (Fluent): Reads primarily in three- or four-word phrase groups. Some small groupings may be present. However, the majority of phrasing seems appropriate and preserves the syntax of the author. Little or no expressive interpretation is present.

Level 2 (Non-Fluent): Reads primarily in two-word phrases with some three- or four-word groupings. Some word-by-word reading may be present. Word groupings may seem awkward and unrelated to the larger context of the sentence or passage.

Level 1 (Non-Fluent): Reads primarily word-by-word. Occasional two-word or three-word phrases may occur, but these are infrequent and/or they do not preserve meaningful syntax.

A checklist developed by Hudson, Lane, and Pullen (2005, p. 707) provides a more detailed assessment of a student's prosody:

  1. Student placed vocal emphasis on appropriate words.
  2. Student's voice tone rose and fell at appropriate points in the text.
  3. Student's inflection reflected the punctuation in the text (e.g., voice tone rose near the end of a question).
  4. In narrative text with dialogue, student used appropriate vocal tone to represent characters' mental states, such as excitement, sadness, fear, or confidence.
  5. Student used punctuation to pause appropriately at phrase boundaries.
  6. Student used prepositional phrases to pause appropriately at phrase boundaries.
  7. Student used subject-verb divisions to pause appropriately at phrase boundaries.
  8. Student used conjunctions to pause appropriately at phrase boundaries.

Although most researchers consider prosody important, the subjectivity of judging students' prosody makes it a difficult component of fluency to study. Many researchers have focused on the more easily quantifiable components of fluency (rate and accuracy) and, therefore, some basic questions about prosody — like what should be expected in second grade versus sixth grade — have not been answered. Nevertheless, students' prosody is an extra piece of information for making instructional decisions. When students' speed and accuracy are at appropriate levels, reading with proper phrasing, expression, and intonation should be the next goal.

To measure students' oral reading speed and accuracy, researchers have developed a simple and very brief procedure that uses regular classroom texts to determine the number of words that students can read correctly in one minute. To obtain a words-correct-per-minute (WCPM) score, students are assessed individually as they read aloud for one minute from an unpracticed passage of text.

To calculate the WCPM score, the examiner subtracts the total number of errors from the total number of words read in one minute. An error includes any word that is omitted, mispronounced, or substituted for another word. Words transposed in a phrase count as two errors (e.g., reading "laughed and played" instead of "played and laughed"). Each time a word is read incorrectly it is counted as an error. Words read correctly that are repeated more than once, errors self-corrected by the student, words inserted by the student that do not appear in the text, and words mispronounced due to dialect or speech impairments are not counted as errors. They do, however, impact the final score since they slow the student down and, therefore, reduce the number of words that are read correctly in one minute (Shinn, 1989).
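For readers who want the arithmetic spelled out, here is a minimal sketch in Python of the WCPM calculation just described. The function names, the way the error types are tallied, and the example counts are illustrative assumptions; they are not part of any published assessment tool.

    # A minimal sketch of the WCPM arithmetic described above. Function names
    # and example counts are illustrative, not part of any published tool.

    def count_errors(omissions: int, mispronunciations: int,
                     substitutions: int, transposed_pairs: int) -> int:
        """Score errors per the rules above: omissions, mispronunciations, and
        substitutions each count once; each transposed pair counts as two errors.
        Repetitions, self-corrections, insertions, and mispronunciations due to
        dialect or speech impairments are NOT counted (they only cost the
        student time during the one-minute reading)."""
        return omissions + mispronunciations + substitutions + 2 * transposed_pairs

    def wcpm(words_read_in_one_minute: int, errors: int) -> int:
        """Words correct per minute = total words read in one minute minus errors."""
        return words_read_in_one_minute - errors

    # Example: a student reads 112 words in one minute with 3 omissions,
    # 2 substitutions, and 1 transposed pair (scored as 2 errors).
    errors = count_errors(omissions=3, mispronunciations=0,
                          substitutions=2, transposed_pairs=1)
    print(wcpm(112, errors))  # 112 - 7 = 105 WCPM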

If the passage is randomly selected from a text or trade book, an average score should be taken from readings of two or three different passages to account for any text-based differences. If standardized passages are used (in which the text has been carefully controlled for difficulty), a score from a single passage may be sufficient (Hintze and Christ, 2004). Standardized passages can be found in the Dynamic Indicators of Basic Early Literacy Skills-DIBELS (Good and Kaminski, 2002), the Reading Fluency Benchmark (Read Naturally, 2002), or Edformation's AIMSweb materials.
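Continuing the sketch above, averaging the scores from two or three unstandardized passages is simple arithmetic; the helper below is hypothetical.

    # Average WCPM across two or three passages drawn from unstandardized texts
    # (a single score may suffice for standardized passages). Hypothetical helper.

    def average_wcpm(passage_scores: list[int]) -> float:
        """Mean WCPM across the passages a student read."""
        return sum(passage_scores) / len(passage_scores)

    print(average_wcpm([105, 98, 101]))  # about 101 WCPM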

To determine if the student's score is on target, the examiner compares it to the oral reading fluency norms (see Screening, Diagnosing, and Progress Monitoring: The Details). My colleague Gerald Tindal and I (2006) developed these national norms for grades one to eight by analyzing data that were collected using the procedures just described with over 200,000 students from 23 states. It's critical to understand that a WCPM score can be an alarm bell, a canary in a coal mine. If the WCPM is very low, the student is not sufficiently fluent and an intervention is merited. However, a low WCPM score may be the result of weak fluency skills or of other reading weaknesses (in decoding, vocabulary, or sight words, for example), so administering some diagnostic assessments may be necessary to determine exactly what type of intervention a student needs.
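The screening decision itself can be sketched the same way. The benchmark values below are made-up placeholders for illustration only, not the actual Hasbrouck-Tindal (2006) norms; the function name and the margin parameter are likewise assumptions. Any real decision should be based on the published norms table.

    # Sketch of the screening decision described above. The benchmark values are
    # MADE-UP placeholders, not the Hasbrouck-Tindal (2006) norms; consult the
    # published norms table for real decisions.

    PLACEHOLDER_GRADE_BENCHMARKS = {1: 50, 2: 90, 3: 110, 4: 125, 5: 140}  # grade -> WCPM

    def flag_for_followup(grade: int, wcpm_score: int, margin: int = 10) -> bool:
        """Return True if the score falls well below the grade-level benchmark.
        A flag is only an alarm bell; diagnostic assessment is still needed to
        find the cause (fluency itself, decoding, vocabulary, sight words, etc.)."""
        return wcpm_score < PLACEHOLDER_GRADE_BENCHMARKS[grade] - margin

    print(flag_for_followup(grade=3, wcpm_score=72))   # True: well below the placeholder benchmark
    print(flag_for_followup(grade=3, wcpm_score=108))  # False: within range of the placeholder benchmark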

The canary in the coal mine

With all the assessments schools are required to administer as a result of No Child Left Behind, Reading First, and numerous statewide and district initiatives, some educators are concerned about over-testing students. They ask: "How can we justify spending so much precious instructional time testing our students over and over again?" This concern is certainly legitimate. The purpose of having our students in school is to teach them, not to test them. However, as professional educators, it is imperative that we make decisions about the instruction we provide our students based on the best information available.

The WCPM procedure just described is an extremely time-efficient and reliable way to track students' fluency — and their overall reading ability. While it may be surprising that a one-minute assessment can be so informative, WCPM has been shown, in both theoretical and empirical research, to serve as an accurate and powerful indicator of overall reading competence — especially through its strong correlation with comprehension. Its validity and reliability have been well established in a body of research extending over the past 25 years (Fuchs et al., 2001; Shinn, 1998).

The relationship between WCPM and comprehension has been found to be stronger for students in the elementary and junior high grades than for older students (Fuchs et al., 2001), likely because, as a reader matures, competent reading involves more complex skills, vocabulary, and knowledge (and thus any single measure becomes less predictive of overall reading competence).

Teachers can and should use WCPM as their canary in the coal mine — their first indicator that all may not be well with their students' reading ability.** In first through fifth grade, WCPM should be used to screen all students, to help diagnose a possible cause of struggling students' problems, and to monitor the progress of struggling students who are receiving additional support. To learn how, see Screening, Diagnosing, and Progress Monitoring: The Details.

About the author

Jan Hasbrouck is president of JH Consulting, as well as an affiliate of the Behavioral Research and Teaching Group at the University of Oregon. Her most recent book, which she co-authored with Carolyn Denton, is The Reading Coach: A How-To Manual for Success.

Citations

Hasbrouck, J. (2006). For Students Who Are Not Yet Fluent, Silent Reading Is Not the Best Use of Classroom Time. American Educator, Summer 2006, 30(2).

References

Daane, M.C., Campbell, J.R., Grigg, W.S., Goodman, M.J., and Oranje, A. (2005). Fourth-Grade Students Reading Aloud: NAEP 2002 Special Study of Oral Reading (NCES 2006-469). U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics. Washington, D.C.: Government Printing Office.

Fuchs, L.S., Fuchs, D., Hosp, M. K., and Jenkins, J.R. (2001). Oral reading fluency as an indicator of reading competence: A theoretical, empirical, and historical analysis. Scientific Studies of Reading, 5(3), 239-256.

Good, R.H., III, and Kaminski, R.A. (Eds.) (2002). Dynamic Indicators of Basic Early Literacy Skills (DIBELS), 6th Ed. Institute for the Development of Educational Achievement. Eugene, Ore.: University of Oregon.

Hasbrouck, J. and Tindal, G.A. (2006, April). ORF norms: A valuable assessment tool for reading teachers. The Reading Teacher, 59(7), 636-644.

Hintze, J.M. and Christ, T.J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33(2), 204-217.

Hudson, R.F., Lane, H.B., and Pullen, P.C. (2005, May). Reading fluency assessment and instruction: What, why, and how? The Reading Teacher, 58(8), 702-714.

National Institute of Child Health and Human Development (NICHD) (2000). Report of the National Reading Panel. Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction. NIH Publication No. 00-4769. Washington, D.C.: U.S. Government Printing Office.

Read Naturally (2002). Reading fluency monitor. Minneapolis: Author.

Shinn, M.R. (1989). Identifying and defining academic problems: CBM screening and eligibility procedures. In M.R. Shinn (Ed.), Curriculum-based measurement: Assessing special children, 90-129. N.Y.: Guilford.

Stecker, S.K., Roser, N.L., and Martinez, M.G. (1998). Understanding oral reading fluency. In T. Shanahan and F.V. Rodriguez-Brown (Eds.), 47th yearbook of the National Reading Conference, pp. 295-310. Chicago: National Reading Conference.

Wolf, M. and Katzir-Cohen, T. (2001). Reading fluency and its intervention. Scientific Studies of Reading, 5(3), 211-239.

Endnotes

*Comprehension depends on reading skills (like decoding and fluency), but it also depends on vocabulary and background knowledge. To learn more about comprehension, see "Building Knowledge: The Case for Bringing Content into the Language Arts Block and for a Knowledge-Rich Curriculum Core for All Children" by E.D. Hirsch, Jr. in the Spring 2006 issue of American Educator, www.aft.org/pubs-reports/american_educator/issues/spring06/index.htm.

**There are also screening assessments that should be administered as early as kindergarten, to determine if students are on track for reading achievement. To learn more, see "Preventing Early Reading Failure" in the Fall 2004 issue of American Educator, www.aft.org/pubs-reports/american_educator/issues/fall04/reading.htm.
