The popularity of college ranking surveys published by U.S. News and World Report, Money magazine, Barron's, and many others is indisputable. However, the methodologies these reports use to measure the quality of higher education institutions have come under fire from scholars and college officials. Also contentious is the practice among some college and university officials of altering or manipulating institutional data in response to unfavorable portrayals of their schools in rankings publications.


In college rankings publications, as opposed to college guides, which offer descriptive information, a judgment or value is placed on an institution or academic department based upon a publisher's criteria and methodology (Stuart, 1995, p. 13). In the United States, academic rankings first appeared in the 1870s, and their audience was limited to groups such as scholars, higher education professionals, and government officials (Stuart, 1995, pp. 16-17). College rankings garnered mass appeal in 1983, when U.S. News and World Report's college issue, based on a survey of college presidents, became the first to judge or rank colleges (McDonough, Antonio, Walpole, & Perez, 1998, p. 514). In today's market, the appeal of college ranking publications has increased dramatically: Time magazine estimates that prospective college students and their parents spend about $400 million per year on college-prep products, which include ranking publications (McDonough et al., 1998, p. 514).

Popularity of College Rankings

Hunter (1995) attributes the popularity of rankings publications to several factors: growing public awareness of college admissions policies during the 1970s and 1980s; the public's loss of faith in higher education institutions due to political demonstrations on college campuses; and major changes on campus in the 1960s and 1970s, such as coeducation, integration, and diversification of the student body, which forced the public to reevaluate higher education institutions (p. 8). Parents of college-bound students may also use reputational rankings that measure the quality of colleges as a way to justify their sizable investment in their children's college education (McDonough et al., 1998, pp. 515-516).

College Reliance on Rankings and General Criticisms of the Rankings Publications

College administrators have increasingly relied on rankings publications as marketing tools, since rising college costs and decreasing state and federal funding have forced colleges to compete fiercely with one another for students (see Hossler, 2000; Hunter, 1995; McDonough et al., 1998). According to Machung (1998), colleges use rankings to attract students, to bring in alumni donations, to recruit faculty and administrators, and to attract potential donors (p. 13). Machung observes that a high rank causes college administrators to rejoice, while a drop in the rankings often has to be explained to alumni, trustees, parents, incoming students, and the local press (1998, p. 13).

Criticisms of rankings publications have proliferated as scholars, college administrators, and higher education researchers address what they perceive as methodological flaws in the rankings. After reviewing research on rankings publications, Stuart (1995) identified a number of general methodological problems: 1) Rankings compare institutions or departments without taking into consideration differences in purpose and mission; 2) Reputation is used too often as a measure of academic quality; 3) Survey respondents may be biased or uninformed about all the departments or colleges they are rating; 4) Rankings editors may tend to view colleges with selective admissions policies as prestigious; and 5) One department's reputation may indiscriminately influence the ratings of other departments on the same campus (pp. 17-19).

U.S. News and World Report's "America's Best Colleges"

The most specific criticism has been directed against U.S. News and World Report's "America's Best Colleges," published since 1990 and the most popular rankings guide. Monks and Ehrenberg (1999) investigated how U.S. News determines an institution's rank, basing their study on statistics from U.S. News' 1997 publication. They found that U.S. News takes a weighted average of an institution's scores in seven categories of academic input and outcome measures, as follows: academic reputation (25%); retention rate (20%); faculty resources (20%); student selectivity (15%); financial resources (10%); alumni giving (5%); and graduation rate performance (5%) (Monks and Ehrenberg, 1999, p. 45). These categories were further divided into 16 variables used as measurements. McGuire (1995) asserts that the variables U.S. News uses to measure quality are usually far removed from the educational experiences of students (p. 47). For example, U.S. News measures the average compensation of full professors, a subfactor of the faculty resources variable mentioned above. McGuire argues that this variable implies that well-paid professors are somehow better teachers than lower-paid professors, an implication unsupported by direct evidence. He says that "In the absence of good measures, poor measures will have to suffice because the consumer demand for some type of measurement is strong and the business of supplying that demand is lucrative" (McGuire, 1995, p. 47). Along the same lines, Hossler (2000) believes that better indicators of institutional quality are outcomes and assessment data that focus on what students do after they enroll, their academic and college experiences, and the quality of their effort (p. 23).
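To make the arithmetic behind this composite concrete, the sketch below computes a weighted average using the category weights that Monks and Ehrenberg (1999) report for the 1997 methodology. The function name and the institution's category scores are hypothetical, invented for illustration; the magazine's actual normalization of sub-variables is more involved.

```python
# Illustrative weighted-average computation. The seven weights come from
# Monks and Ehrenberg's (1999) account of the 1997 U.S. News methodology;
# the category scores below are hypothetical.

WEIGHTS = {
    "academic_reputation": 0.25,
    "retention_rate": 0.20,
    "faculty_resources": 0.20,
    "student_selectivity": 0.15,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
    "graduation_rate_performance": 0.05,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of an institution's category scores (0-100 scale)."""
    return sum(WEIGHTS[category] * scores[category] for category in WEIGHTS)

# Hypothetical institution: strong reputation, weak alumni giving.
example = {
    "academic_reputation": 90.0,
    "retention_rate": 85.0,
    "faculty_resources": 80.0,
    "student_selectivity": 75.0,
    "financial_resources": 70.0,
    "alumni_giving": 40.0,
    "graduation_rate_performance": 65.0,
}

print(f"Composite score: {composite_score(example):.1f}")  # 79.0
```

Because reputation alone carries a 25% weight, a modest swing in that one category can move the composite more than a large swing in alumni giving or graduation rate performance.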

Monks and Ehrenberg (1999) found that U.S. News periodically alters its rankings methodology, so that "changes in an institution's rank do not necessarily indicate true changes in the underlying 'quality' of the institution" (p. 45). They note, for example, that the California Institute of Technology jumped from 9th place in 1998 to 1st place in 1999 in U.S. News, largely due to changes in the magazine's methodology (Monks and Ehrenberg, 1999, p. 44). Ehrenberg (2000) details how a seemingly minor change in methodology on the part of U.S. News can have a dramatic effect on an institution's ranking (p. 60). Machung (1998) states that "The U.S. News model itself is predicated upon a certain amount of credible instability" (p. 15). The number one college in "America's Best Colleges" changes from year to year, with the highest ranking fluctuating among 20 of the 25 national universities that continually vie for the top positions in the U.S. News rankings (Machung, 1998, p. 15). Machung asserts that "new" rankings are a marketing ploy by U.S. News to sell its publication (1998, p. 15).

Although eighty percent of American college students enroll in public colleges and universities, these schools are consistently ranked poorly by U.S. News (Machung, 1998, p. 13). Machung (1998) argues that the U.S. News model works against public colleges by valuing continuous undergraduate enrollment, high graduation rates, high spending per student, and high alumni giving rates (p. 13). She also contends that the overall low ranking of public colleges by U.S. News is a disservice to the large concentration of nontraditional students (over 25, employed, and with families to support) enrolled in state schools (Machung, 1998, p. 14).

College and University Responses to Rankings

College and university officials have responded to unfavorable or undesirable rankings of their institutions in a variety of ways. Some ignore the rankings, others refuse to participate in the surveys, and many respond by altering or misrepresenting the institutional data they submit to rankings publications (see Stecklow, 1995; Machung, 1998; Monks and Ehrenberg, 1999). By examining inconsistencies between the information colleges presented to guidebooks and the information they submitted to debt-rating agencies in accordance with federal securities laws, Stecklow (1995) documented how numerous colleges and universities have manipulated SAT scores and graduation rates in order to achieve a higher score in the rankings publications (p. A1). He noted that many colleges have inflated the SAT scores of entering freshmen by deleting the scores of one or more of the following groups: international students, remedial students, the lowest-scoring group, and learning disabled students. Although many college officials admit that this practice raises ethical concerns, they continue these manipulations because there are no legal obstacles preventing such action. Stecklow says that surveyors such as Money magazine, Barron's, and U.S. News do not always check the validity of the data submitted to them by colleges (1995, p. A1).
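As a minimal illustration of why the exclusions Stecklow describes pay off, the sketch below compares a hypothetical freshman class's true mean SAT with the mean after low-scoring groups are dropped. All scores and group labels are invented for the example.

```python
# Hypothetical illustration of the reporting practice Stecklow (1995)
# documents: excluding low-scoring groups raises the SAT average a
# college can report to guidebooks. All figures are invented.

groups = {
    "regular_admits": [1210, 1180, 1250, 1300],
    "international": [1020, 990],
    "remedial": [890, 910],
}

def mean(scores: list[int]) -> float:
    return sum(scores) / len(scores)

all_scores = [s for members in groups.values() for s in members]
after_exclusions = groups["regular_admits"]  # drop the low-scoring groups

print(f"True freshman average:    {mean(all_scores):.0f}")        # 1094
print(f"Average after exclusions: {mean(after_exclusions):.0f}")  # 1235
```

Even in this small example, dropping two groups lifts the reported average by over 140 points, which is why the inconsistencies Stecklow found between guidebook data and debt-rating filings were so revealing.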

Balanced Approach

Since many published rankings have been perceived as biased, uninformative, or flawed, a number of higher education practitioners encourage parents and prospective students to do their own research on colleges, to consult alternative college-prep publications, and to view the rankings publications with a critical eye.


References

Ehrenberg, R.G. (2000). Tuition rising: Why college costs so much. Cambridge, MA: Harvard University Press.

Hossler, D. (2000). The problem with college rankings. About Campus, 5 (1), 20-24. EJ 619 320

Hunter, B. (1995). College guidebooks: Background and development. New Directions for Institutional Research, 88. EJ 518 243

Machung, A. (1998). Playing the rankings game. Change, 30 (4), 12-16. EJ 568 897

McDonough, P.M., Antonio, A.L., Walpole, M., & Perez, L.X. (1998). College rankings: Democratized college knowledge for whom? Research in Higher Education, 39 (5), 513-537. EJ 573 825

McGuire, M.D. (1995). Validity issues for reputational studies. New Directions for Institutional Research, 88. EJ 518 247

Monks, J., & Ehrenberg, R.G. (1999). U.S. News & World Report's college rankings: Why they do matter. Change, 31 (6), 43-51.

Rankings caution and controversy. (2002). Retrieved April 29, 2002, from the Education and Social Science Library, University of Illinois at Urbana-Champaign.

Stecklow, S. (1995, April 5). Cheat sheets: Colleges inflate SATs and graduation rates in popular guidebooks - Schools say they must fib to U.S. News and others to compete effectively - Moody's requires the truth. The Wall Street Journal, p. A1.

Stuart, D. (1995). Reputational rankings: Background and development. New Directions for Institutional Research, 88. EJ 518 244