A New Philosophy of Academic Ranking: Rethinking College and University Rankings from the Ground Up

The staff of AcademicInfluence.com has long experience in doing higher-education rankings. We have worked on some of the most trafficked and profitable ranking websites in the “education space.” To our credit (and the same cannot be said for the industry as a whole), in our ranking of colleges and universities, we never raised the rank of a school for gain or benefit, nor did we ever lower one for lack thereof.

Yet, throughout our work on college rankings, we always felt uneasy. This is not to say that we thought the rankings we created were valueless or misleading. We worked hard on our rankings and always believed they provided some useful insights to our readers. Yet we also felt that they left much to be desired.

At their best, our rankings gave schools deserved recognition for significant academic achievements, and thereby also gave prospective students good reason to attend such schools. At their worst, however, the rankings we produced displayed an arbitrariness, with no compelling rationale for why one school should rightly be ranked above another.

The rankings from our former life could always be defended, and we would provide long methodology statements about the rules governing their formation and about the numerical scores assigned to schools or degree programs to determine rank order. But at the end of the day, even though our rankings were in some broad sense defensible (what, after all, cannot be defended with a good song and dance?), they fell short of being unimpeachable.

In our past life working in the academic ranking industry, the full logic, rigor, and trustworthiness of our rankings were always in question, and rightly so because we could never fully realize these desired goals. So, after years of working in this industry, we grew weary of never having a product that we could fully get behind.

We wanted rankings that were invulnerable, unassailable, convincing, dependable, authoritative. We wanted rankings that avoided the fallacy of misplaced exactness, in which numerical scores and weights gave an appearance of rigor, but in fact lacked it. And on a personal level, we wanted rankings that the schools ranked by us could fully get behind — where the schools would recognize that we had uncovered something true and valuable about them in our rankings.

In the end, we concluded that the academic ranking industry in which we had been leading participants was beyond repair and had to be rethought from the ground up. It’s for this reason that we started AcademicInfluence.com. We believe that through our InfluenceRanking engine and our personalized filtering tools, we are able to redress many of the problems and deficiencies that beset the academic ranking industry.

But before we make the case that AcademicInfluence.com is finally doing academic rankings right, we need to get more specific about how the academic ranking industry, in its practices to date, is getting things wrong. This charge is not meant as a blanket condemnation. We’ve been there, and most people in this industry (leaving aside a few bad actors, which any industry will have) are trying to provide value. But it will help to see where things went wrong.

Multi-criteria Optimization: The Point of Failure in Academic Rankings

“Multi-criteria Optimization” is a mouthful, and might seem an unlikely villain in our story, but the abuse of this concept lies at the heart of why existing college and university rankings are problematic.

What is multi-criteria optimization? Let’s start with optimization. Optimization is about looking at items with numerical scores and sorting them to find the items with the highest scores. Ranking thus becomes a form of optimization, with things ranking higher to the degree that they score higher. Where things get tricky is when the optimization needs to be done across multiple criteria, hence the name multi-criteria optimization.

In contrast to multi-criteria optimization, a single-criterion (or uni-criterion) optimization is straightforward: identify one criterion and simply optimize across it. Take the ACT or SAT. Schools report to the U.S. Department of Education the average ACT and/or SAT scores of their enrolled students. These scores are readily available in the Department of Education’s Integrated Postsecondary Education Data System (IPEDS for short). From there, it’s a simple matter to rank schools by, say, their students’ average ACT scores, starting with the highest and working down.
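
For readers who like to see the mechanics, here is a minimal sketch of such a uni-criterion ranking. The school names and average ACT scores below are hypothetical placeholders rather than IPEDS data; the point is only that a single-criterion ranking reduces to a sort.

```python
# A minimal sketch of a uni-criterion ranking: sort schools by one number.
# The data below are hypothetical; real figures would come from an IPEDS download.

schools = [
    {"name": "School A", "avg_act": 27.5},
    {"name": "School B", "avg_act": 31.2},
    {"name": "School C", "avg_act": 24.8},
]

# Optimizing a single criterion is just a descending sort on that criterion.
ranking = sorted(schools, key=lambda s: s["avg_act"], reverse=True)

for rank, school in enumerate(ranking, start=1):
    print(f"{rank}. {school['name']} (average ACT: {school['avg_act']})")
```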

While a uni-criterion approach to ranking schools is conceptually straightforward, its rankings tend to be bland and generate little excitement. That’s understandable. Imagine a big academic ranking company touting its latest list of “best colleges” by focusing entirely on average performance of incoming students on the ACT or SAT. How interesting is that?

That’s not to say such a uni-criterion ranking is without any interest at all. But the interest will be quite limited, and no academic ranking company can stay in business by touting such a simple ranking method. With the ACT or SAT, for instance, anyone with spreadsheet software can download the relevant data from the Department of Education and reconstruct such a ranking directly.

For ranking companies to stay in business and earn their keep, they therefore need to do more than simply optimize one and only one criterion. That is, of course, unless they find a master criterion that captures what is truly best about higher education. (More on this in a later section, where we make the case that at AcademicInfluence.com we have indeed found such a master criterion.) But the point for now is this: each single criterion considered by existing academic ranking organizations to date has come up short, which has led, universally in this business, to the use of multiple criteria and their joint optimization.

So, in order to be seen as doing something valuable and interesting, all existing academic ranking companies engage in multi-criteria optimization. This holds without exception among organizations whose primary business is academic rankings. It is true that some popular rankings focus exclusively on a single criterion. For instance, Forbes, the money and business magazine, has traditionally ranked schools in terms of return on investment (ROI). ROI is their master criterion, and they are within their rights to use it, since it is probably the most relevant criterion for their readership. Even so, Forbes is not primarily in the business of ranking schools and degree programs.

Organizations for whom academic rankings are their lifeblood, however, have turned multi-criteria optimization into an art form. The most prominent of these ranking organizations are:

  • U.S. News & World Report
  • Times Higher Education (THE)
  • Quacquarelli Symonds (QS)
  • The Academic Ranking of World Universities (the Shanghai Ranking)

For each of these four ranking organizations, we give the Wikipedia link because, in each case, the corresponding Wikipedia page includes a section on methodology that makes clear the different criteria being used by the ranking organization and the precise weights being applied to each criterion. So, for instance, with the U.S. News ranking, 20 percent of the ranking depends on reputation surveys, 10 percent on per-student academic spending, and 5 percent on alumni giving (there are, of course, other criteria, so that the total percentage weights add up, as they must, to 100).
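
To make the mechanics concrete, here is a minimal sketch of how such a weighted composite is computed. The reputation, per-student spending, and alumni-giving weights mirror the 2020 U.S. News figures cited above; the catch-all “other criteria” bucket and all score values are illustrative assumptions, and real rankings normalize each criterion in their own ways.

```python
# A sketch of how a multi-criteria ranking combines normalized criterion scores
# into a single composite using percentage weights that sum to 100.
# The reputation, spending, and alumni-giving weights echo the 2020 U.S. News
# figures cited above; the "other_criteria" bucket and all score values are
# illustrative, not real data.

WEIGHTS = {
    "reputation": 0.20,
    "per_student_spending": 0.10,
    "alumni_giving": 0.05,
    "other_criteria": 0.65,  # remaining criteria so the weights total 100%
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def composite_score(criterion_scores):
    """Weighted sum of criterion scores, each assumed pre-normalized to 0-100."""
    return sum(WEIGHTS[c] * criterion_scores[c] for c in WEIGHTS)

school_x = {"reputation": 85, "per_student_spending": 70,
            "alumni_giving": 40, "other_criteria": 75}

# A single composite number per school, which then determines rank order.
print("School X composite:", composite_score(school_x))
```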

But the weights just listed are for 2020. Different years can change the weights and even the criteria being weighted. Case in point: in 1983, when the first U.S. News rankings were released, they focused 100 percent on “academic reputation.” So, originally, the U.S. News ranking was in fact a uni-criterion ranking! U.S. News even touts how it has changed criteria and weights over time in an infographic on its website, which illustrates how many additional criteria U.S. News has added since the inception of its rankings in 1983.


Problem 1: Change for Change’s Sake

All the major academic ranking organizations use this multi-criteria optimization approach. So what’s wrong with it? Three things. First off, it introduces a perverse incentive on the part of the ranking organizations to fiddle with the weights and criteria in order to constantly generate new rankings.

It’s like cars. Every year, automobile manufacturers release the newest model. Now, because there is healthy competition among car makers and because technology keeps improving, there can at least be good reason for car companies to keep releasing the latest annual models. A hundred years ago, when technology wasn’t moving as fast, it was sufficient for Henry Ford to build the same Model T year-in and year-out. But by the late 1920s, with affluence in the U.S. increasing, we developed the habit of expecting a new car model every year.

Unlike cars, however, where technological advance justifies new annual models, the justification for annually updated college and university rankings is less clear. To be sure, if criteria and weights are fixed, then the ranking can change because the schools being ranked have themselves changed the inputs they provide to the ranking methodology. Thus, for the criterion of alumni giving with a 5 percent weight attached, as in the 2020 U.S. News ranking, a school may rise in the ranking if alumni giving suddenly increases (or fall if giving suddenly decreases).

But short of such ranking differences resulting from the colleges and universities themselves changing and supplying different inputs to the ranking methodology, it’s not clear what the latest ranking is doing except mixing and matching criteria and the percentage weights attached to them. Such changes are often arbitrary, offering no clear rationale, and giving no sense that they are in any way optimal.

Does a 5 percent weight on alumni giving really capture the proportionate contribution of alumni giving to academic excellence? Should it be 6 percent instead? Or 4 percent? Often it seems these academic rankings are arbitrarily applying different weights to different criteria, but doing no actual optimization. Yet if these rankings are multi-criteria non-optimizations, in what sense can they rightly be said to identify “the best colleges, universities, and degree programs”? Certainly, the major ranking organizations do not want to leave the impression that their criteria and weights were simply pulled out of a hat. But they have little defense when pressed on this point.
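
A toy example makes the arbitrariness concrete. In the sketch below, both schools and all scores are hypothetical, and every criterion other than alumni giving is lumped into a single “everything else” bucket; even so, nudging the alumni-giving weight from 4 percent to 6 percent flips which school ranks first.

```python
# A toy illustration of how sensitive rank order can be to an arbitrary weight.
# Both schools and all scores are hypothetical; "everything_else" stands in for
# the rest of the criteria, lumped together so the weights always sum to 100%.

schools = {
    "School A": {"everything_else": 80.0, "alumni_giving": 50.0},
    "School B": {"everything_else": 79.0, "alumni_giving": 70.0},
}

def composite(scores, alumni_weight):
    """Weighted sum with the remaining weight assigned to everything else."""
    return ((1 - alumni_weight) * scores["everything_else"]
            + alumni_weight * scores["alumni_giving"])

for w in (0.04, 0.05, 0.06):
    ranked = sorted(schools, key=lambda name: composite(schools[name], w), reverse=True)
    print(f"alumni-giving weight {w:.0%}: {' > '.join(ranked)}")

# With a 4% weight School A ranks first; at 5% or 6% the order flips to School B,
# even though nothing about either school has changed.
```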

Often one gets the sense that all the fiddling with criteria and weights surrounding these rankings is simply there to obtain novelty for the sake of novelty. In fact, the major ranking organizations might be seen as offering not so much a ranking methodology as a ranking modification technique for changing past rankings. The point of such a modification technique then becomes to ensure that the current annual rankings are sufficiently different from those of the past so that people pay attention when the new rankings are released. After all, where would the interest be if year-in and year-out the same schools ranked in the same order?


Problem 2: The Temptation to Game

Schools are not static entities. They invariably change over time, sometimes getting better, sometimes getting worse, and sometimes substantially so. The ability of schools to change, and especially to change themselves, leads to the second big problem with the multi-criteria optimization approach to academic ranking. Because this approach depends on explicitly identifying both the criteria and the percentage weights assigned to them, schools have the perverse incentive to look at these and game them.

One might think that by making the criteria and weights explicit, the major ranking organizations are in fact giving the people who study their rankings full transparency. And transparency is always a good thing, isn’t it? Not so fast. The transparency here turns out to be a pitfall, inviting schools to game the rankings.

Is more alumni giving going to help our school rank better? In that case, let’s get our development people to raise more money from alumni. Does throwing more money at students raise our school’s ranking? In that case, let’s spend more money on them (and, of course, raise tuitions to make up the difference).

The incentives here can be perverse. We’ve seen it happen where a school raises tuition, spends more money on students, doesn’t actually improve their education, and yet rises in a major ranking. Conversely, in the very same ranking, we’ve seen another school spend less on students in an effort to make its education more affordable to low-income students, without compromising quality, and be rewarded for its efforts with a lower rank.

Because so much rides on the major rankings for schools (application rates, alumni support, perceived standing), schools respond to the latest rankings with deadly earnestness. Something must be done to improve in the rankings, or at least not fall behind. But what actually gets done often has little to do with genuinely improving the education offered.

It’s like what has happened with standardized testing in primary and secondary schools. Because states put a premium on students doing well on these tests, rewarding teachers and school districts whose students score high, students are taught how to take the test rather than to actually learn the subject being tested. So instead of being taught math or English for genuine comprehension, students are taught to score well on the standardized test about math and English.

Colleges and universities trying to improve their ranking face the same temptation as school districts intent on raising test scores. With colleges and universities, the temptation is to change themselves to suit the ranking rather than to genuinely improve. In other words, the temptation is to put the cart before the horse. But, as always, the horse (doing what’s needed to actually improve a school) needs to precede the cart (rising in the ranking). An improvement in college rankings should be a byproduct of an improvement in the college itself.

In pointing out the perverse incentives that drive colleges and universities to game the big rankings, we don’t mean to moralize or make ourselves sound superior. Colleges and universities are in a tough spot. The big rankings have a material impact on schools. These rankings cannot simply be ignored. Nor is doing what worked yesterday sufficient for today. College and university rankings constitute a zero-sum game: if one school goes up in a ranking, another must come down. And with all schools striving to move up, what essentially emerges is an arms race for ranking superiority.


Problem 3: Reputation Surveys and Other Subjective Criteria

The criteria used by the major ranking companies to assess academic excellence come in two forms: objective and subjective. As we’ve seen in the last two sections, even objective criteria can be gamed and therefore abused. Alumni giving and student spending are both objective, defined by clear dollar amounts, and yet schools have figured out ways around them.

However, three of the four big ranking companies also use subjective criteria in the form of reputation surveys. Shanghai is the exception, but its criteria, which, for instance, put a premium on Nobel Prizes and other awards, make it even more vulnerable than the others to the two problems raised earlier. Leaving aside Shanghai, the other three ranking companies assign weights of at least 20 percent to reputation surveys, and in the case of Quacquarelli Symonds much more. Such subjective criteria lead to the third big problem facing the multi-criteria optimization approach to academic ranking: they depend on opinions, and opinions can be biased, deceptive, and even ludicrous.

Does it really help to learn that the president of Yale thinks that Harvard has a stronger academic reputation than Stanford? The reputation survey of the first U.S. News & World Report ranking back in 1983 consisted entirely of such gossip among college and university presidents. Sure, the questions in today’s reputation surveys have been refined, but the underlying faults with them have not gone away. Indeed, the faults with subjectivity in academic rankings remain many. As we have explained elsewhere on this website:

  • Most of the leading ranking organizations employ ranking approaches with subjective elements. Many depend heavily on self-reports, in which survey responders detail their own personal assessment or perception of a school’s characteristics, such as reputation. The problems with self-reports include lying (making a school seem better or worse than it may actually be), misperception (simply being mistaken in one’s perception of the school), and moral hazard (perverse incentives to portray a school one way rather than another).
  • Rankings of schools by alumni on a five-star scale or by their salary data after graduation face these same problems, as well as a selection effect: What sort of alumni tend to give a five-star rating to a school from which they graduated? What if people like to think well of things into which they invested time and money? What if they want to warn people about mistakes they’ve committed? And which impulse is stronger? In any case, such rankings are not only subjective but can be skewed. Ditto for salary reports: Do people working minimum wage upon graduation really want the world to know this fact, even if they share this knowledge anonymously?
  • We’re not saying that subjective ranking approaches like this have no value. At their best they provide social validation, which can be valuable. It’s just that at AcademicInfluence.com, we have become convinced, through our long experience with academic rankings, that such ranking approaches, especially without extensive caveats, do more to mislead than enlighten. In place of rankings that employ subjective elements, AcademicInfluence.com therefore focuses on ranking metrics whose numbers are based on precise mathematical calculations and which are drawn from unambiguous, publicly available data. No hand-waving, no charades, no voodoo.

Influence: The Master Criterion in Academic Rankings

In hitting the reset button for college and university rankings, our team at AcademicInfluence.com began by going back to first principles. Thus we asked, What makes a school a great place for learning in the first place? What is it for a school to be academically excellent? How should a school’s academic excellence be gauged? What does it really mean to say that a school is “best”?

As we reflected on these questions, it became clear that answering them centered on the people associated with a school and their success in advancing knowledge. Thus a school is academically excellent if it has academically excellent faculty and alumni, which is to say faculty and alumni who have significantly influenced their fields of study. Schools that are academically excellent therefore derive their academic excellence from the academic influence of their faculty and alumni.

Granted, most schools are satisfied if their alumni make lots of money and donate it to the school. They would even be delighted and shocked, given what they pay faculty, if faculty donated lots of money to the school. But if we are trying to gauge a school’s academic excellence, then we need to be looking at faculty and alumni in terms of how deeply influential they are in their academic fields of study, and not on other forms of influence, such as monetary influence.

As evidence that faculty and alumni play a crucial role in academic influence, consider that when a school lists the number of Nobel Prizes associated with it, the list includes both faculty and alumni who have received the award. It speaks to the academic excellence of a school that it has an influential faculty. Likewise, it speaks to the academic excellence of a school that it has raised up influential alumni. And best of all is to have both, which many top schools do.

In short, what makes a school a great place of learning is its record of academic accomplishment, which consists of the scholarly output of its faculty and of the people these faculty have trained, namely, the alumni. This is not to say that great teaching is of secondary importance at a great school. Some will argue that the great academic influencers, by being so focused on innovation, are often substandard teachers. We disagree. History shows that, other things being equal, influential students and alumni are more likely to have learned at the feet of influential faculty.

So that’s it: we gauge academic excellence, and thus what it means for a school to be “best,” by the academic influence of its faculty and alumni. To give such pride of place to academic influence is not the end of the story, however. In fact, it means that our work has just begun. Academic influence, as a term and concept, sounds right as the master criterion for judging academic excellence. But what’s needed next is a precise operational definition of this concept so that it can be used to rank schools in a compelling way.

“Academic influence” can’t just be a weasel term, a mere label that we put on things to give the illusion that we’ve solved a problem when in fact we haven’t. Academic influence, as we cash it out, does indeed have a precise sense that allows us to rank schools and disciplinary programs in a way that captures what is truly best about higher education. But the devil is in the details. So it’s at this point that we need to direct readers to three articles that lay out the details. These articles, which are best read in order, are the following:

  • “Methodology,” which examines how and why we rank persons and institutions by influence,
  • “The InfluenceRanking™ Engine,” which explores the nuts and bolts of our influence-ranking technology, and
  • “Concentrated Influence™,” which describes a correction factor to our influence-based ranking metric that’s ideally suited, in certain instances, for drilling down on academic excellence.

To give the barest overview, however, our influence-based approach to academic rankings starts by noting that influence, at its most fundamental, is a binary relation between persons and the fields in which they are influential. With academic influence, those fields comprise disciplines and subdisciplines. Using a machine-learning approach that performs a statistical document analysis for cross-mentions of people and disciplines on ever-increasing swaths of the Web, we are able to get reliable influence scores for people vis-à-vis the disciplines in which they are influential.
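
To convey the basic idea (and only the basic idea), here is a deliberately simplified sketch of cross-mention counting. The three documents are made up, the matching is naive string search, and the scoring is a raw count; the actual InfluenceRanking engine is far more sophisticated, as the articles above explain.

```python
# A deliberately simplified sketch of the core idea behind cross-mention analysis:
# count how often a person and a discipline are mentioned in the same document.
# The real InfluenceRanking engine uses machine learning over a far larger corpus;
# the documents, names, and scoring rule here are purely illustrative.

from collections import defaultdict

documents = [
    "Einstein reshaped physics with the theory of relativity.",
    "In physics, Einstein's 1905 papers remain landmarks.",
    "Einstein also wrote on philosophy of science.",
]

people = ["Einstein"]
disciplines = ["physics", "philosophy"]

co_mentions = defaultdict(int)
for doc in documents:
    text = doc.lower()
    for person in people:
        if person.lower() in text:
            for discipline in disciplines:
                if discipline in text:
                    co_mentions[(person, discipline)] += 1

# Raw co-mention counts act as a crude influence score for each person-discipline pair.
for (person, discipline), count in sorted(co_mentions.items(), key=lambda kv: -kv[1]):
    print(f"{person} / {discipline}: {count}")
```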

Because we are examining such a wide online footprint of these academic influencers, it’s impossible to game our system. Even conventional academic influence measures, such as the h-index, can to some degree be gamed by authors colluding to reference each other’s work or by journals insisting on references to and from certain other journals. Just as the mean of a large sample is barely moved by a few outliers, so our influence scores capture so much converging evidence that even concerted efforts to change these scores run up against the overwhelming amount of data that would have to be changed.

Once we have reliable influence scores for person-discipline pairs (such as for Einstein in the discipline of physics), it’s straightforward to cumulate these scores to form overall influence scores for people. From there it’s also straightforward to cumulate influence scores for people in a given discipline to form influence scores for disciplinary programs. And by cumulating influence scores across people and disciplines, it’s possible to form an overall influence score for schools. All these scores can then be used to induce rankings of persons, schools, and disciplinary programs.
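
Here is a minimal sketch of that cumulation step. The people, affiliations, and scores are hypothetical, and the roll-up shown is a plain sum; the precise aggregation used by the InfluenceRanking engine is described in the articles cited above.

```python
# A sketch of how person-discipline influence scores can be rolled up.
# All names, affiliations, and score values are hypothetical.

person_discipline_scores = {
    # (person, discipline): influence score
    ("Alice", "physics"): 9.1,
    ("Alice", "mathematics"): 4.2,
    ("Bob", "physics"): 6.7,
    ("Carol", "history"): 8.3,
}

affiliations = {  # person -> school (faculty or alumni)
    "Alice": "School X",
    "Bob": "School X",
    "Carol": "School Y",
}

# Overall influence per person: sum across that person's disciplines.
person_totals = {}
for (person, _), score in person_discipline_scores.items():
    person_totals[person] = person_totals.get(person, 0.0) + score

# Influence per disciplinary program at a school: sum the relevant people's scores.
program_totals = {}
for (person, discipline), score in person_discipline_scores.items():
    key = (affiliations[person], discipline)
    program_totals[key] = program_totals.get(key, 0.0) + score

# Overall influence per school: sum across all affiliated people.
school_totals = {}
for person, total in person_totals.items():
    school = affiliations[person]
    school_totals[school] = school_totals.get(school, 0.0) + total

print(person_totals)   # e.g. Alice's physics and mathematics scores combined
print(program_totals)  # e.g. ("School X", "physics") combines Alice and Bob
print(school_totals)   # ranking schools is then a sort on these totals
```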

This brief description leaves out many details, and readers who desire the details are urged to consult the three articles cited above. One important point worth mentioning is that even though influence is, within our approach, the crucial element for gauging academic excellence, it invites a slight correction factor, and so for some purposes we use not influence as such but rather concentrated influence to characterize the best colleges and universities. The idea behind concentrated influence is that two schools may be equivalent in influential faculty and alumni, but that influence will be more diluted in the bigger school. The smaller school that concentrates the influence more effectively will therefore be the better school.
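
As a purely hypothetical illustration of the intuition, the sketch below normalizes a school’s total influence by enrollment. That division is only a stand-in; the actual correction factor is defined in the “Concentrated Influence” article, and the figures are invented.

```python
# A hypothetical sketch of the intuition behind concentrated influence:
# normalize a school's total influence by some measure of its size, so that
# a smaller school with the same total influence scores higher. The actual
# correction factor used by AcademicInfluence.com is described in the
# "Concentrated Influence" article; the division by enrollment below is only
# an illustrative stand-in, and the figures are made up.

schools = {
    "Big School":   {"total_influence": 1000.0, "enrollment": 40000},
    "Small School": {"total_influence": 1000.0, "enrollment": 5000},
}

for name, data in schools.items():
    concentrated = data["total_influence"] / data["enrollment"]
    print(f"{name}: total {data['total_influence']:.0f}, concentrated {concentrated:.3f}")

# Both schools have the same total influence, but the smaller school's
# influence is less diluted, so it comes out ahead on the concentrated measure.
```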

As final confirmation that our approach to academic excellence through influence is on the right track, ask yourself what it would take for a school to improve in our influence-based rankings. Leaving aside the InfluenceRanking engine, which is continually being fine-tuned and improved to gauge influence across larger and larger portions of the Web, there are only three practical ways to rise in the AcademicInfluence.com rankings:

  1. Hire more influential faculty (i.e., raid other schools and bring their high-influence talent on board).
  2. Develop more influential home-grown faculty (i.e., enlist promising junior faculty and give them a conducive work environment in which to thrive and make groundbreaking advances).
  3. Train more influential alumni (i.e., admit promising students and inspire them to do groundbreaking work after graduation).

To sum up, our approach to academic excellence through influence (and concentrated influence) involves no amalgam of multiple criteria. Instead, we take a single-criterion approach, free of the arbitrary weights applied to multiple criteria that infect all the other rankings. Our approach is algorithmic, objective, and non-gameable. In influence we have found the master criterion for doing academic rankings. Best of all, it works! The rankings we get make sense. And even when they are counterintuitive, they are picking up on a real signal of academic influence that has hitherto escaped notice.


Custom College Rankings: “Best” versus “Best for You”

In wrapping up this new philosophy of academic rankings, we want to yield some ground to other ways of ranking colleges and universities. Not surprisingly, we remain convinced that academic excellence is best captured by academic influence as gauged by an objective non-gameable algorithm, namely, our InfluenceRanking engine. On this point we give no ground.

Moreover, we remain convinced that ranking organizations that take a multi-criteria optimization approach to ranking schools are providing a deeply imperfect product. That’s not to say such rankings provide no insight or value. But because they are billed as capturing what is best in higher education, they mislead and invite the three big problems described above, namely, novelty for novelty’s sake, gameability, and subjectivism. The net effect, in our view, is deleterious to education, causing schools to artificially adapt themselves to these rankings.

But once such high stakes are removed from academic rankings, it’s possible to take a more generous attitude toward them. People trying to make sense of higher education have many different questions and many different interests in searching out and sorting through schools, and thereby coming up with rankings. So long as it’s not a ranking organization putting its brand and reputation behind a ranking but people themselves setting the criteria and evaluating schools by them, such a ranking process seems healthy and normal.

So, despite our claim to have discovered in influence a master criterion for college and university rankings, we don’t want to leave the impression that we support one and only one ranking approach or criterion. Consider the following rankings that arise naturally for certain classes of prospective students:

  • Students wanting a second career to improve their earning potential in difficult economic times may focus on return on investment (ROI).
  • Students valuing safety will want a school in a low crime area.
  • Students valuing a small, intimate setting may want a small liberal arts college.
  • Students knowing that employers value degrees within their state may want to stay within a certain radius of their home.
  • Students liking the fan excitement of a school with a vibrant athletic program will want to restrict their attention to such schools.
  • Students who put a premium on social validation will want to know what schools other students prefer (compare our Desirability Index).

So, even though we believe that influence provides the master criterion for unlocking academic excellence, we also believe that students are within their rights to ask lots of questions and consider many different types of rankings and ways of filtering schools and degree programs. To that end we have developed a do-it-yourself ranking tool, namely, our Custom College Rankings (CCR). With the CCR, the issue is not what we at AcademicInfluence.com are touting as the best schools and disciplinary programs but rather what schools and disciplinary programs are best for the person forming the ranking.

At AcademicInfluence.com, we therefore take a two-pronged approach. On the one hand, by evaluating academic influence through our InfluenceRanking engine, we lay out what we regard as the objectively best colleges and universities. On the other hand, by providing our readers with the Custom College Rankings tool, we give them the opportunity to drill down on the aspects of education that are subjectively best for them. Best in some global, objective sense is not necessarily best for you. Academic rankings need to respect that distinction, and at AcademicInfluence.com we do.