Seven Myths about Literacy in the United States
Serious problems exist with reading achievement in many United States schools. However, much of the commonly accepted wisdom about the academic performance of United States students is false. The best evidence we have indicates that, on average, no crisis exists in United States reading. The purpose of this digest is to examine seven of the most prevalent, and most damaging, myths about literacy achievement in the United States.
Myth 1: Reading Achievement in the United States Has Declined in the Past Twenty-Five Years
The best evidence on reading achievement in the United States comes from a national system of examinations established in the late 1960s by the federal government to determine how United States schoolchildren were performing in a variety of school subjects. These exams, known as the National Assessment of Educational Progress (NAEP), are important barometers of educational achievement. They are given nationally to a representative sample of United States children.
When the test was first administered in 1971, the average reading proficiency score was 208 for nine-year-old children, 255 for thirteen-year-old children, and 285 for seventeen-year-old children. The most recent administration of the test (1996) yielded averages of 212 for nine-year-olds, 259 for thirteen-year-olds, and 287 for seventeen-year-olds. These scores indicate that, despite a few minor shifts, reading achievement has either held steady or increased over the past twenty-five years.
Myth 2: Forty Percent of U.S. Children Can't Read at a Basic Level
During the early years of the NAEP tests, the Department of Education released only the raw scores for each age level on its 0 to 500 scale, with no designation of which scores were thought to constitute "basic knowledge" or "proficiency." The designers of the NAEP later decided that reporting raw scores alone was no longer adequate for judging the progress of United States schools. The Department determined how well students were reading by establishing the minimum score constituting "below basic," "basic," "proficient," and "advanced" reading. The "basic" level for fourth-grade reading, for example, was fixed at a score of 208. In 1994, 40% of United States children scored below that cutoff.
The problem with this approach lies in "objectively" determining where these cutoff points should be. Glass (1978), after reviewing the various methods proposed for creating "minimal" criterion scores of performance, concluded that all such efforts are necessarily arbitrary. Of course, arbitrary cutoff points already exist in education and many other fields, but at least they are recognized as arbitrary and are not given the status of absolute or objective levels of competence. In 1991, the General Accounting Office (GAO) examined how the NAEP defined its proficiency levels and found the methods to be questionable (Chelimsky, 1993).
Myth 3: Twenty Percent of Our Children Are Dyslexic
Closely related to the previous misconception that 40% of our students read below the "basic" level is another portentous-sounding figure: that 20% of United States schoolchildren suffer from a "neuro-behavioral disorder" known as "dyslexia" (Shaywitz et al., 1996). The research most often cited to support this claim is drawn from the results of the Connecticut Longitudinal Study (CLS), a large-scale project funded in part by the National Institute of Child Health and Human Development (e.g., Shaywitz, Escobar, Shaywitz, Fletcher & Makuch, 1992; Shaywitz, Fletcher & Shaywitz, 1994). The CLS tracked over 400 students from kindergarten through young adulthood, periodically measuring their Intelligence Quotient (IQ), reading achievement, and mathematical abilities, among other attributes. CLS researchers measured "reading disability" in two ways. The first used "discrepancy scores," which represent the difference between a child's actual reading achievement and what would be predicted from his or her IQ; the assumption is that a child with a high IQ who nonetheless reads poorly must have an underlying disorder. The size of the discrepancy used in the CLS studies was that recommended by the United States Department of Education: 1.5 standard deviations. This figure served as the "cutoff" score for determining who was reading "disabled" and who was not. In any given year, a little less than 8 percent of the children fell into the reading-disabled category using the 1.5 cutoff.
Two important things need to be noticed about these results. First, and most importantly, the 1.5 standard deviation cutoff point is arbitrary. We could just as easily have used 1.25, 1.75, or 0.5, each producing a different percentage of "neuro-behaviorally" afflicted children. Second, even the 8% have not been shown in this research to be "dyslexic," if by "dyslexic" we mean a "neurologically based disorder in which there is unexpected failure to read," the definition used by the CLS team (S. Shaywitz et al., 1992, p. 145; emphasis added). This is because, quite simply, no neurological measures were administered to these particular children. All that can be said from these findings is that around 8 percent of children in any given year will have a discrepancy of 1.5 standard deviations between their IQ and reading achievement, at least if they live in Connecticut.
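The arbitrariness of the cutoff is easy to quantify. As a rough illustration only (assuming, purely for the sake of the sketch, that standardized discrepancy scores follow a normal distribution, which real test data need not), the share of children falling beyond each candidate cutoff can be computed directly:

```python
from math import erf, sqrt

def fraction_flagged(cutoff_sd: float) -> float:
    """One-tailed probability that a standardized discrepancy score
    falls more than `cutoff_sd` standard deviations below the level
    predicted by IQ, under an idealized normal-distribution assumption."""
    # Normal CDF evaluated at -cutoff_sd, via the error function.
    return 0.5 * (1 + erf(-cutoff_sd / sqrt(2)))

for cutoff in (0.5, 1.25, 1.5, 1.75):
    print(f"cutoff {cutoff:4} SD -> {fraction_flagged(cutoff):.1%} flagged")
```

Under this idealized assumption, a 1.5 SD cutoff flags about 6.7% of children, in the neighborhood of the roughly 8% observed in the CLS; moving the cutoff to 1.25 SD would flag about 10.6%, and 0.5 SD about 30.9%, with no principled reason to prefer one figure over another.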
Myth 4: Children from the Baby-Boomer Generation Read Better Than Students Today
Some argue that today's reading levels are dismal compared to those of the 1940s or 1950s. The evidence cited comes from a study of adult literacy, the National Adult Literacy Survey (NALS), which was given to a representative sample of United States adults in 1992 (Kirsch, Jungeblut, Jenkins & Kolstad, 1993). McGuinness (1997) notes that those who learned to read between the mid-1950s and the mid-1960s have higher reading scores than those of later generations.
Can we really measure the effectiveness of schools 40 years ago by how well their graduates read today? What about the intervening 30 years of reading experience and education? We should hardly expect the reading proficiency of these adults to remain static over time. Surely the reading scores of this group of 35- to 44-year-olds from when they were still enrolled in school are better indicators of how well they performed as children, since fewer intervening variables then exist to confound the results. We do, in fact, have reading achievement scores from a representative sample of this age cohort in the form of the high school NAEP scores from 1971 (for those who entered first grade in 1959 and were 38 at the time of the NALS administration). Their scores are not much different from those of more recent graduates.
Myth 5: Students in the United States Are Among the Worst Readers in the World
What will come as most surprising to many people is how the United States compares internationally in reading achievement: Our nine-year-olds ranked second in the world in the most recent round of testing conducted by the International Association for the Evaluation of Educational Achievement (IEA); our fourteen-year-olds ranked a very respectable ninth out of 31. A dissenting opinion on just how well United States schoolchildren perform over time and internationally is held by Walberg (1996), who argues that reading achievement has in fact declined since the early 1970s. Walberg compared the IEA scores from 1990-91 to the first IEA test given to 15 nations in 1970, with the scores from the two tests equated (Lietz, 1995, cited in Walberg). Walberg (1996) concluded that the scores did indeed decline, from 602 in 1970 to 541 in 1991 (using his adjusted scores).
Two problems exist with this analysis, however. First, it is not clear why the two IEA tests given 22 years apart should be preferred in measuring trends in United States reading performance over the United States Department of Education's own NAEP exam, which has not only been given more frequently (9 times since 1970), but was designed to be much more sensitive to a broader range of reading achievement (Binkley & Williams, 1996) than the IEA tests. Second, the IEA test has changed considerably since its first administration in 1970 (Elley, 1994). Unfortunately, the reanalysis of the scores upon which Walberg bases his comparisons is unpublished, making it difficult to know precisely how these "equated" scores were derived from what were markedly different tests.
Reprinted with the permission of the Education Resources Information Center.