By David A. Tomar
The college ranking business comes from humble beginnings. A century ago, rankings were largely honorary in nature and hardly carried the fanfare or economic consequence of modern ranking lists.
Among the first recognized rankings was a study called Where We Get Our Best Men, published by Alick Maclean in 1900. Maclean’s text was mostly dedicated to profiling those so-called Best Men. However, its most consequential feature was an index which ranked universities based on “the absolute number of eminent men who had attended them.” Four years later, a fellow Englishman named Havelock Ellis compiled a similar list, this one based on the number of affiliated “geniuses,” as opposed to “eminent men.”
Neither list proposed to measure or rank the quality of universities based on this metric. However, as this methodology made its way across the Atlantic, the American spirit of competition took hold. In his 1910 publication, American Men of Science, author James Cattell made explicit the connection between alumni achievements and comparative university excellence, particularly in a table called “Scientific Strength of the Leading Institutions.”
Cattell even issued his publication with the advice that “students should certainly use every effort to attend institutions having large proportions of men of distinction among their instructors.”
This would be the guiding principle behind the dominant ranking strategy of the next several decades. Beginning in 1930 and lasting through the 1950s, Prentice and Kunkel published an annual report that ranked colleges based on how many of their alumni appeared “in the social bible Who’s Who.”
Though their ranking was considered an outcomes-based way of evaluating colleges, Prentice and Kunkel were candid about the inherent weaknesses in their approach. In a 1951 study, they conceded that their rankings were likely skewed because their primary source, Who’s Who, suffered from an overrepresentation of ministers and college professors and an underrepresentation of engineers.
This concession would serve as prelude to a major transformation in the ranking business. Gradually, the outcomes-based approach to ranking would fall out of fashion as reputation-based measures became de rigueur. The Center for College Affordability traces this reputation-based approach to a 1959 ranking in which the University of Pennsylvania sought to measure itself against other American research universities. The study’s author identified chairpersons at 25 top universities, all members of the Association of American Universities. These chairs were consulted as raters, a model which would ultimately lay the groundwork for the emergence of survey-driven, reputation-based rankings.
Over the next several decades, scholars from all around the globe offered their own methodological refinements to the process of reputational ranking. But for the better part of the 1960s and 1970s, these rankings were of greatest interest to academics. Rankings had not yet penetrated the mainstream consciousness. That was all about to change dramatically.
The Modern Ranking Sector
1983 was an inflection point. That was when a magazine called U.S. News & World Report published its first ranking list of “America’s Best Colleges.” Driven entirely by survey responses, these reputational rankings would have an immediate and profound impact on the higher education marketplace. They would also set into motion the development of the broader ranking industry. The magazine began publishing the report annually in 1987 and has since become the most frequently quoted of American college ranking outlets. Today, these annual rankings hold powerful sway over how colleges are perceived by students, parents, alumni, and employers. Each year, their ranking lists are met with equal parts excitement and critique.
At its inception, the U.S. News & World Report ranking was entirely subjective, collecting its survey responses from university and college presidents. Starting in 1988, U.S. News undertook an effort to incorporate more meaningful quantitative data in its rankings. Since that time, chief data strategist Robert Morse has presided over an ever-evolving methodology.
By 2010, its lists had become so popular that U.S. News moved largely away from its news magazine format, making franchise ranking the dominant part of its business strategy.
For most of its history, U.S. News has focused solely on American colleges. Only in 2014 did the ranker unveil its Best Global Universities rankings. By this time, a number of leaders had already emerged in the international ranking game. The Academic Ranking of World Universities (ARWU) was first among them.
The ARWU was originally compiled and issued by Shanghai Jiao Tong University and is thus commonly referred to as the Shanghai Ranking. The inaugural Shanghai Ranking was issued in 2003 and, as an empirical ranking of universities on a global scale, was the first of its kind. It remains highly influential in the international ranking sector. Not only was Shanghai distinct from U.S. News in its global scope, but it also forged a ranking methodology that eschewed reputational metrics in favor of a return to strictly outcomes-based metrics. Shanghai arrives at its rankings using criteria that are quite distinct from those used by U.S. News. Its outcomes are, likewise, quite different.
So too are the ranking outcomes produced by Quacquarelli Symonds (QS) World University Rankings, the next prominent player to emerge in the global ranking sector. QS began producing its rankings in conjunction with the publication Times Higher Education (THE) in 2004. Based in Britain, QS is something of a hybrid, combining Shanghai’s global focus with U.S. News & World Report’s mix of reputational and quantitative data. As critics are often quick to point out, QS relies more heavily on reputational data than any of its competitors.
Each year between 2004 and 2009, THE and QS collaborated to release an annual table of international university rankings. However, the two parties separated in 2009, with THE citing empirical weakness in the QS ranking methodology. Beginning in 2010, Times Higher Education began publishing its own THE World University Rankings.
THE would quickly emerge as another major player in the ranking sector, applying its own methodological refinements while employing a similar mix of reputational and quantitative indicators.
Today, the field of college rankings abounds with competitors. Other notable ranking outlets include Forbes, The Princeton Review, Bloomberg Business, and the Washington Post, which only recently began producing rankings based on a composite of other rankings. There’s also a ranker called PayScale, whose College ROI Report simply measures the cost of your college education relative to your likely earnings upon graduation.
The Department of Education has also gotten into the business, with its College Scorecard distilling certain practical indicators meant to evaluate the economic value of each college or university in its ranking.
Also, as noted from the outset, TBS Magazine is affiliated directly with TheBestSchools.org and Influence Networks. TheBestSchools.org is unique among ranking services for the sheer (and ever-expanding) variety of rankings it offers, across a broad range of categories relating to discipline, academic quality, affordability, and model of education.
Influence Networks, a new entrant in this sector, is also unique among existing services. Using an algorithm that assigns influence scores to notable academic figures, and ranks university programs by their affiliation with these influencers, Influence Networks produces a ranking strictly quantified according to apparent influence across the internet.
This mix of major players and new entrants suggests a competitive and expanding college ranking sector.