The Problematic Influence of Reputation Surveys in College Ranking

College rankings play a huge part in how students make decisions. Annual “best colleges” lists have the power to confer elite status upon universities with top spots, just as they have the power to damage recruitment goals with poor ratings. Here, we examine that influence over higher education, with a particular focus on the dominant and problematic role played by reputation surveys. We then consider an alternative method of college ranking, InfluenceRanking™, which approaches data collection, analysis, and ranking with greater objectivity, transparency, and scientific grounding.

The College Ranking Space

College rankings have a major impact on how universities are perceived, which in turn affects how effectively these colleges recruit applicants. According to The Atlantic, inclusion in US News & World Report’s Top 25 will generally cause a six to ten percent surge in applications.

But what do these rankings measure, and does this measure indicate the excellence of a university? Does it confer elite status or merely confirm it? And how useful is elite status as a predictor of student contentment? What metrics are used to determine elite status? What are the predominant data collection methods used to meet these metrics?

In short, what is the purpose of college ranking today, and how do the leading ranking systems attend to this purpose?

Before investigating that question in greater detail, let’s take just a moment to reflect on the influence that rankings have over the higher education ecosystem, and consider how this influence naturally leads to gaming, manipulation, and outright dishonesty:

  • According to a 2018 study published by Research in Higher Education, there is a significant relationship between rank and presidential salary at public universities.
  • The National Bureau of Economic Research (NBER) reports that:
    [A] less favorable rank leads an institution to accept a greater percentage of its applicants, a smaller percentage of its admitted applicants matriculate, and the resulting entering class is of lower quality, as measured by its average SAT scores.
  • According to the Institute for Higher Education Policy [PDF] (IHEP):
    63 percent of responding institutions were taking strategic, organizational, managerial, or academic actions in response to rankings; only 8 percent indicated that they had taken no action.
  • An article in the Boston Herald from 2014 describes how gaming ranking systems can undermine educational access for students with financial need, explaining that:
    From the mid-1990s to the mid-2000s … to lure students with high GPAs and SAT scores, private four-year schools increased spending on merit-based aid from $1.6 billion to $4.6 billion. Studies show that for every 10 merit-based scholarships, there are four fewer need-based scholarships.
  • Gaming the ranking system is commonplace, says the New York Times, which reports that, for instance… 
    In 2008, Baylor University offered financial rewards to admitted students to retake the SAT in hopes of increasing its average score. Admissions directors say that some colleges delay admission of low-scoring students until January, excluding them from averages for the class admitted in September, while other colleges seek more applications to report a lower percentage of students accepted.
  • According to the Chicago Tribune:
    US News gives its greatest weight to reputational rankings, but how are such rankings determined? The usual way is to ask institutions about each other. A school’s president, chief academic officer and undergraduate admissions dean are supposed to evaluate the undergraduate program at other schools in their institution’s category. It is tempting to coordinate your three votes, and perhaps collude with some of your buddies elsewhere, both to elevate your ranking and downgrade your rivals. One college president proudly proclaimed he does exactly that, using rankings to reward his friends and punish his enemies.
  • In 2009, Clemson University researcher Catherine Watt, while presenting findings from her extensive research on how Clemson could improve its US News & World Report ranking:
    [L]iterally drew gasps from her audience when she revealed that when Clemson administrators fill out US News reputational rankings survey, they rate other universities lower than Clemson across the board. Why not? Reputation accounts for fully 25 percent of a school’s ranking score. Watt’s statement that she was confident that other colleges do the same is perfectly plausible.

What About the Students?

Let’s presume that college rankings were, in their purest form, invented to help students make that critical decision about where to pursue a college degree. How effective are the leading rankers at meeting this mission for the broadest possible cross-section of students? We know that a high rank is great for a university, its reputation, and its bottom line. But what about the aspiring student? Do rankings among the most competitive colleges actually help match a majority of college-bound students with the right university for their needs?

Before making the argument that the leading college rankings are likely not that helpful to the majority of students, we would make the case that all rankings are only useful if you understand what they measure. Moreover, a ranking of colleges is only helpful to you if it measures something you actually care about. This is the premise that helped inform the methodology for InfluenceRanking™ from AcademicInfluence.com. Our unique, machine-learning approach is designed to measure the real-world influence of every person, discipline, and school in the higher education ecosystem, and to provide transparent ways of using this metric to help students match up with excellent schools.

We’ll dig into our methodology and the philosophy behind it in just a bit.

Or you can jump to our Methodology now to see how we produce our InfluenceRanking and why we think this is truly the best way to measure academic excellence!

For now, we’ll direct our focus to the rocky college ranking landscape, shaped by Ivy League peaks, poorly financed private school crevasses, and every kind of collegiate hill, mountain, and valley in between. College rankings are, in general, highly influential. According to The Atlantic, far too many students and families suffer from a shortage of real or useful information on the subject — an indication that too many lack access to, or fail to take advantage of, high school counseling. As a result, students share a collective over-reliance on prominent college ranking services.

Prospective students base very real and consequential decisions on the appearance of schools on such prestigious lists.

Why Is This a Problem?

To their credit, college ranking leaders like US News & World Report incorporate a wide selection of data points into their methodology — 17 factors as of 2020. Moreover, USNWR and their top competitors do take steps to update and refine their calculations to accommodate changing priorities and shifting cultural realities. Likewise, they at least attempt transparency by providing comprehensive information on their methodology and giving users numerous ways to navigate their content.

To the extent that US News & World Report identifies highly selective schools and measures them against one another to determine which universities enjoy the greatest academic prestige, its ranking system is almost certainly effective. Less certain is the extent to which it provides a helpful way forward for the majority of prospective college students, especially in proportion with its enormous influence over the ranking landscape.

According to a study about college choices by Stanford [PDF]:

Many students and families rely on college rankings published by well-known organizations to define quality. The higher the ranking, the logic goes, the better the college must be and vice versa. We find that many of the metrics used in these rankings are weighted arbitrarily and are not accurate indicators of a college’s quality or positive outcomes for students.

There are a number of reasons why this might be the case:

  • Input factors like SAT scores and class rank, as well as output factors like graduation rates and student indebtedness, may reflect selection bias among elite institutions, rather than indicating the excellence of the educational experience.
  • Student-faculty ratios may not effectively measure the number of students who actually have access to small classes.
  • The breakdown of weighted factors appears detailed and precise but is in fact arbitrary and given without empirical basis. Each ranking system weights factors differently, but it’s never clear that this distribution of weighting enhances the accuracy of a given ranking.
  • Many rankers rely on self-reported data from colleges who have a clear motive to provide information that enhances their rankings. This creates a high risk of gaming — a concerted attempt by a college to manipulate and report data in a way that is most likely to enhance its positional ranking.
  • Reputation survey, which is subjective and difficult to verify, plays a major (arguably dominant) role in how rankings are derived.

The problematic dimensions of ranking cited above are common points of critique raised by researchers, observers, and academics. Our goal is neither to discredit the leading college ranking services nor to compare their effectiveness at providing rankings. But the concerns listed above demand scrutiny because rankings play such a major role in shaping student perception.

This requires us to ask how effectively these influential ranking systems serve the goal of helping students choose the best college for them, which in turn requires us to acknowledge that some aspects of college ranking may be problematic. To determine why, let’s first get a sense of how the most influential college rankings derive their findings.

College Ranking Traffic Leaders

Below is a look at traffic analytics for the 10 most widely visited college ranking sites. The list is topped by the usual suspects, those who publish annual national and world university rankings to great acclaim and critique.

Ranking the College Ranking Sites with Google
Ranking | Keywords | Keywords Ranking in Top 3 | Keywords Ranking in Top 10 | Estimated Visitors/mo
US News & World Report | 841,296 | 113,565 | 301,038 | 2,301,411
Times Higher Education | 572,442 | 24,580 | 125,011 | 766,028
Top Universities | 72,602 | 13,649 | 24,486 | 511,223
Princeton Review | 24,924 | 1,402 | 4,302 | 27,747
The Best Colleges | 7,029 | 1,070 | 2,476 | 27,263
The Best Schools | 32,731 | 800 | 3,263 | 23,751
Forbes | 8,950 | 388 | 2,331 | 20,532
Money | 66,079 | 143 | 3,588 | 16,198
Niche | 3,253 | 443 | 1,209 | 15,203
The Guardian | 416 | 69 | 181 | 11,326

Among the top traffic recipients in the college ranking sector, the top three enjoy relative dominance. According to our findings, the ten websites listed above account for an estimated 3,720,682 visits per month. Of those, 3,578,662 are shared between only US News & World Report, Times Higher Education, and QS World University Rankings (Top Universities). This accounts for slightly more than 96 percent of the monthly traffic collectively experienced by the ranking sector’s top ten entities.
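
The concentration is easy to verify. Here is a minimal sketch in Python (the visitor figures come straight from the traffic table above; the script itself is just illustrative arithmetic):

```python
# Estimated monthly visitors for the ten most-visited college ranking
# sites, taken from the traffic table above.
visitors = {
    "US News & World Report": 2_301_411,
    "Times Higher Education": 766_028,
    "Top Universities (QS)": 511_223,
    "Princeton Review": 27_747,
    "The Best Colleges": 27_263,
    "The Best Schools": 23_751,
    "Forbes": 20_532,
    "Money": 16_198,
    "Niche": 15_203,
    "The Guardian": 11_326,
}

total = sum(visitors.values())                   # 3,720,682 visits/month
top_three = sum(sorted(visitors.values())[-3:])  # 3,578,662 visits/month

print(f"Total:     {total:,}")
print(f"Top three: {top_three:,} ({top_three / total:.1%} of the total)")
# Total:     3,720,682
# Top three: 3,578,662 (96.2% of the total)
```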

“When it comes to college rankings, methodology is everything.” – @AcademicInflux

These three leading rankers receive an enormous amount of traffic and therefore exert an outsized influence on the college ranking landscape, and consequently, on the way students choose colleges. Each of these sites publishes a methodology, though we can presume that the methodology doesn’t see nearly as much traffic as each year’s newest ranking of the best colleges.

Without overstating our case, methodology is everything. On the surface, the rankings produced by these three websites have profound influence over the college industry writ large. But it would be more precise to say that this influence actually resides in the factors these sites rank for. These are the factors that colleges gravitate to in their quest to break into, climb, or remain atop the annual rankings, and the factors that inform student choices, whether students know it or not.

So to the end of illuminating this influence, let’s take a closer look at the leading rankings and their ranking methodologies:

US News & World Report (USNWR)

According to its website, US News & World Report provides more than 100 different types of numerical rankings and lists to help students narrow their college search. Most notable is the annual publication of its influential Best Colleges list, which marks its 36th year in 2021. The leading ranking group in the college space assesses 1,452 bachelor’s degree-granting institutions in the US using 17 measures of academic quality. These measures are broken down as follows:

USNWR Weighted Factors
Student Outcomes (40%)
  • 17.6% | Graduation Rate
  • 4.4% | Retention Rate
  • 8% | Graduation Rate Performance
  • 5% | Social Mobility
  • 5% | Graduate Indebtedness
Faculty Resources (20%)
  • 8% | Class Size
  • 7% | Faculty Salary
  • 3% | Faculty with the Highest Degree in Their Fields
  • 1% | Student-Faculty Ratio
  • 1% | Proportion of Full-Time Faculty
Other Dimensions (40%)
  • 20% | Expert Opinion
  • 10% | Financial Resources
  • 7% | Student Excellence
  • 3% | Alumni Giving

Times Higher Education (THE)

THE originally published in partnership with Quacquarelli Symonds (QS), producing the THE-QS World University Rankings from 2004 to 2009. It then partnered briefly with Thomson Reuters from 2010 to 2013 before settling with data provider Elsevier to create its annual rankings.

Times Higher Education relies on self-reporting, noting that:

[I]nstitutions across the globe provide us with information that we scrutinize rigorously to construct the World University Rankings.

Times Higher Education ranks more than 1,500 schools in 93 countries using 13 factors, which are broken down as follows:

THE Weighted Factors
Teaching / Learning Environment (30%)
  • 15% | Reputation Survey
  • 4.5% | Staff/Student Ratio
  • 2.25% | Doctorate/Bachelor’s ratio
  • 6% | Doctorates Awarded/Academic Staff Ratio
  • 2.25% | Institutional Income
Research — Volume, Income, & Reputation (30%)
  • 18% | Reputation Survey
  • 6% | Research Income
  • 6% | Research Productivity
Citations / Research Influence (30%) 
International Outlook — Staff, Students, Research (7.5%)
  • 2.5% | Proportion of International Students
  • 2.5% | Proportion of International Staff
  • 2.5% | International Collaboration
Industry Income — Knowledge Transfer (2.5%) 

QS World University Rankings (QS)

Originally partnered with Times Higher Education, and beginning publication in 2004 as the Times Higher Education-QS World University Rankings, Quacquarelli Symonds (QS) has continued to publish its World University Rankings using the methodology originally developed through this collaboration. Today, like THE, QS draws its data from a partnership with Elsevier. Unlike USNWR, which focuses on National Universities in the US, QS is the only comprehensive international ranking with International Ranking Expert Group (IREG) approval. On a global scale, QS is the most widely read college ranker (though it ranks third in traffic in the US, where our analysis is currently focused). QS uses the following six metrics to rank colleges:

QSWUR Weighted Factors
  • 40% | Academic Reputation
  • 10% | Employer Reputation
  • 20% | Faculty/Student Ratio
  • 20% | Citations per Faculty
  • 5% | International Faculty Ratio
  • 5% | International Student Ratio

Influential Factors Across the Ranking Industry

A closer look at the top three rankers reveals a fascinating finding. Each of the three leading rankers brings its own unique formula to the ranking process. The intended result is a confirmation of certain expected and universal findings — for instance, the relatively unchallenged elite status of the Ivy League schools — as well as the presentation of some findings specific to each ranking system. This latter result speaks to the distinctive value added by each unique formula.

And to be sure, each formula places special emphasis on certain factors distinctive to its approach. For instance, THE places a 30% weight on citations and research influence; QS gives citations per faculty a 20% weight; and USNWR doesn’t include this factor in its rankings.

Faculty/Student Ratio is another factor with highly variable weighting: USNWR gives it a 1% weight (though it also gives class size an 8% weight), and THE gives it a 4.5% weight. QS, by contrast, indicates:

[W]e have determined that measuring teacher/student ratios is the most effective proxy metric for teaching quality. It assesses the extent to which institutions are able to provide students with meaningful access to lecturers and tutors, and recognizes that a high number of faculty members per student will reduce the teaching burden on each individual academic.

This accounts for the 20% weighting given to this factor.

These weightings suggest slightly different priorities where each ranking site is concerned. Consequently, the factors highlighted above have widely distributed but not necessarily outsized influence over rankings. Variations in approach suggest that we can learn something slightly different about the college landscape from each site (which is different than suggesting that one ranking is right and the others are therefore wrong).

“Elite schools limit admissions to elite students, who tend to come from well-financed families, are more likely to remain and graduate, and develop well-connected personal and professional networks. Selection bias plays a significant role in the ranking data.” – @AcademicInflux

These methodologies do confirm some of the problems cited by critics of college rankings. Specifically, we can see that all three major rankers rely on self-reporting from the colleges themselves to catalogue input and output data. While we’d like to say that, at face value, we trust colleges to report this data correctly and accurately, it cannot be overlooked that those providing the reporting have strong incentives to spin the data in ways that benefit ranking position.

Moreover, that input and output data is, itself, of questionable merit when determining the excellence of an educational experience. Elite schools limit admissions to elite students, who tend to come from well-financed families, are more likely to remain and graduate, and develop well-connected personal and professional networks. Selection bias plays a significant role in the ranking data, and may therefore tell us little about what these elite students actually experience while attending elite institutions. This is to say nothing of how small the population of elite students actually is. More on that in the next section.

The next section focuses on the factor that emerges as the most heavily weighted common ground among the top three rankers: the reputation survey.

The Dominant Influence of Reputation Survey

“Reputation survey data is easily the most subjective and unverifiable measure for what constitutes academic excellence, compatibility, desirability, affordability, influence, and a host of other factors that students really care about.” – @AcademicInflux

While the factors identified above may be problematic, they are used with variable influence to produce various rankings. There is one surprising factor, however, that overshadows the influence of all others — Reputation Surveys.

Why is this surprising? Because reputation survey data is easily the most subjective and unverifiable measure for what constitutes academic excellence, compatibility, desirability, affordability, influence, and a host of other factors that students really care about.

Reputation survey data is drawn from surveys distributed to academics and leaders in the sphere of higher education. A study from Stanford [PDF] calls reputation survey results “black box” items, noting that:

[T]here is little transparency to the way they are calculated. The inner workings of these metrics are mysterious and poorly understood except by a few in the know. Whereas figures like graduation rates, SAT scores, and acceptance rates are public knowledge, the survey participants and results that lead to these black box metrics are not identified or reported.

Bear this opacity in mind as we take a closer look at the role played by reputation surveys in college rankings.

US News & World Report, the uncontested traffic leader in the college ranking space, gives a 20% weighting to Expert Opinion. USNWR conducts an annual peer assessment in which:

[T]op academics — presidents, provosts and deans of admissions — rate the academic quality of peer institutions with which they are familiar on a scale of 1 (marginal) to 5 (distinguished). We take a two-year weighted average of the ratings. The 2021 Best Colleges ranking factors in scores from both 2020 and 2019.

On the surface, other major factors seem to be weighted at equal or greater value. But a closer look at these variables shows that they are broken out into subordinate factors. While Outcomes are weighted at 40%, this figure is composed of graduation rate (17.6%) and retention (4.4%), which together total 22%; graduation rate performance (8%); social mobility (5%); and, new this year, graduate indebtedness (5%).

Likewise, while USNWR gives a 20% weighting to Faculty Resources, this is broken out into class size (8%), faculty salary (7%), faculty with the highest degree in their fields (3%), student-faculty ratio (1%) and proportion of faculty who are full time (1%).

There is no such breakdown in the weighting of Expert Opinion. This makes the USNWR peer assessment method the most influential factor in its ranking methodology.
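
To make this concrete, here is a minimal sketch in Python that flattens the USNWR weights listed above into individual sub-factors and confirms that Expert Opinion is the largest single undecomposed factor (the weights come from the breakdown above; the script is purely illustrative):

```python
# USNWR's published 2021 weights, flattened into individual sub-factors
# (composite categories like Outcomes and Faculty Resources are broken
# out; Expert Opinion is the one heavyweight item with no breakdown).
usnwr_weights = {
    "Graduation Rate": 17.6,
    "Retention Rate": 4.4,
    "Graduation Rate Performance": 8.0,
    "Social Mobility": 5.0,
    "Graduate Indebtedness": 5.0,
    "Class Size": 8.0,
    "Faculty Salary": 7.0,
    "Faculty with Highest Degree": 3.0,
    "Student-Faculty Ratio": 1.0,
    "Proportion of Full-Time Faculty": 1.0,
    "Expert Opinion": 20.0,
    "Financial Resources": 10.0,
    "Student Excellence": 7.0,
    "Alumni Giving": 3.0,
}

# Sanity check: the weights should cover the full ranking.
assert round(sum(usnwr_weights.values()), 6) == 100.0

factor, weight = max(usnwr_weights.items(), key=lambda kv: kv[1])
print(f"Largest single factor: {factor} ({weight}%)")
# Largest single factor: Expert Opinion (20.0%)
```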

A similar pattern and approach emerges when we pull back the curtain on Times Higher Education.

Times Higher Education gives a 30% weighting to Teaching (the learning environment). This factor is broken out into five areas: Reputation Survey (15%), staff-to-student ratio (4.5%), doctorate-to-bachelor’s ratio (2.25%), doctorates-awarded-to-academic-staff-ratio (6%), and institutional income (2.25%).

Times Higher Education also gives a 30% weighting to Research (volume, income, and reputation). This is broken out into three areas: Reputation Survey (18%), Research Income (6%), and Research Productivity (6%).

This means the Reputation Survey is weighted at 33% in the THE ranking of colleges. According to THE, its reputation survey:

examined the perceived prestige of institutions in teaching and research. The responses were statistically representative of the geographical and subject mix of academics globally. The 2020 data are combined with the results of the 2019 survey, giving more than 22,000 responses.

QS World University Rankings explicitly and intentionally places the greatest emphasis on survey data. QS notes:

The highest weighting of any metric is allotted to an institution’s Academic Reputation score. Based on our Academic Survey, it collates the expert opinions of over 100,000 individuals in the higher education space regarding teaching and research quality at the world’s universities. In doing so, it has grown to become the world’s largest survey of academic opinion, and, in terms of size and scope, is an unparalleled means of measuring sentiment in the academic community.

“Survey-based data is the single most influential factor in college ranking.” – @AcademicInflux

In addition to its dominant 40% weighting of Academic Reputation, QS also places a 10% weight on Employer Reputation, bringing its total reliance on survey data to 50%.

If we simply take the average weighting from all three rankers, we can see that, at 34%, survey-based data is the single most influential factor in college ranking.
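
The arithmetic behind that figure is simple. A minimal sketch, using the survey weightings derived above (20% for USNWR, 33% for THE, and 50% for QS):

```python
# Total weight each of the three leading rankers gives to survey-based
# reputation data, as derived in the discussion above.
survey_weight = {
    "USNWR": 20.0,  # Expert Opinion (peer assessment)
    "THE": 33.0,    # Teaching survey (15%) + Research survey (18%)
    "QS": 50.0,     # Academic Reputation (40%) + Employer Reputation (10%)
}

average = sum(survey_weight.values()) / len(survey_weight)
print(f"Average survey weighting: {average:.1f}%")
# Average survey weighting: 34.3%
```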

What Does This Mean for Prospective College Students?

Now that we’ve highlighted the enormous influence that reputation surveys exert over college ranking, we will reiterate that this discussion is not itself a critique of any specific methodology. It is instead driven by the question of just how effectively the most influential factors in college ranking actually help students. More specifically, how effective is the singular influence of the reputation survey in aiding students as they search for a college?

“The best college searches are those that help applicants discover themselves on their way to becoming adults.” – @JSelingo

We would argue that it is not very effective in doing so.

This observation is made without negative implication. The value of a specific ranking is in the eye of the beholder. If the reputation of your college among academics is the most important factor in your decision, you will absolutely learn something of value from the rankings offered by USNWR, THE, and QS. There is a justifiable reason that these sites generate more than 3.5 million visits a month. Institutional prestige does matter for many students.

But what is less immediately clear is how reliable reputation surveys are at helping most students find the best fit. We recently had the honor of interviewing Jeffrey Selingo, admissions expert and author of the recently published Who Gets In and Why. Among the many insights that Mr. Selingo shared — which you can investigate more deeply through our interview and our review of his new book — one compelling observation rises to the top.

Mr. Selingo observes:

The best college searches are those that help applicants discover themselves on their way to becoming adults.

This is an important point because, with more than 10.2 million college applications submitted annually (at a rate of roughly 6.8 applications per student), a vast majority of students will discover themselves on their way to somewhere that isn’t an Ivy League school. The Pew Research Center finds that extremely competitive schools “amounted to 3.4% of all the colleges and universities in this analysis, and they accounted for just 4.1% of total student enrollment.”

Quick and dirty math says that if 1.5 million students apply to 1,364 national universities in 2021, roughly 61,500 of them will get seats at the 46 extremely competitive universities. That leaves 1,438,500 applicants and 1,318 schools in search of one another (see the sketch below). Without exaggeration, the vast majority of the higher education ecosystem exists outside of the ultra-competitive HigherEd stratosphere.
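
Here is that back-of-the-envelope calculation spelled out, applying Pew’s percentages to the figures cited above (the inputs come from the sources quoted in this section; the rounding is ours):

```python
# Pew's shares applied to the application figures cited above.
students = 1_500_000       # ~10.2M applications / ~6.8 applications per student
universities = 1_364       # national universities ranked in 2021

elite_school_share = 0.034      # "extremely competitive" schools: 3.4% of institutions
elite_enrollment_share = 0.041  # ...accounting for 4.1% of total enrollment

elite_schools = round(universities * elite_school_share)  # 46
elite_seats = round(students * elite_enrollment_share)    # 61,500

print(f"{elite_seats:,} students land at {elite_schools} extremely competitive schools")
print(f"{students - elite_seats:,} applicants and {universities - elite_schools:,} "
      "schools remain in search of one another")
# 61,500 students land at 46 extremely competitive schools
# 1,438,500 applicants and 1,318 schools remain in search of one another
```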

“Without exaggeration, the vast majority of the higher education ecosystem exists outside of the ultra-competitive HigherEd stratosphere.” – @AcademicInflux

So as elite colleges, top students, and leading academics eagerly anticipate the publication of a new ranking each year, a vast majority of students must necessarily look elsewhere in their search, even if the USNWR’s annual ranking of best schools is their first stop.

This is not to suggest that the majority of students lack opportunities to access higher education. Quite the contrary. Reputation simply isn’t the best indicator of what most college students need, nor of where they derive the greatest satisfaction in their educational experience. To wit, Pew Research Center notes that “more than half of the schools in our sample (53.3%) admitted two-thirds or more of their applicants in 2017,” including such well-known names as St. John’s University in New York (67.7%), Virginia Tech (70.1%), Quinnipiac University (73.9%), the University of Missouri at Columbia (78.1%) and George Mason University (81.3%).

Of the schools above, only Virginia Tech cracks USNWR’s Top 100. But these schools — which are among the few to boast admission rates capable of staying on pace with the 70% growth in annual applications submitted over just the last 15 years — are also respected by prospective employers, beloved by alumni, and distinguished by influential professors, noteworthy degree programs, and, in some cases, excellent athletic programs. In other words, opportunity abounds for students well outside of the elite and highly-selective confines of the USNWR Top 50.

So returning to Selingo’s insight, how effective are the dominant ranking systems at helping applicants discover themselves on the way to becoming adults? The simple answer is, they aren’t really designed to do that. The outsized influence of reputation survey means that the most consequential data informing these rankings merely illustrate how universities are perceived by those from within the insular and self-perpetuating world of academic prestige.

For students and schools outside of this elite world — and to be sure, this is the majority of us — reputation amongst academic elites has precious little to do with real-world outcomes. And yet, in these rankings, reputation overshadows questions of personal compatibility, actual cost, post-graduate opportunities, and any meaningful measure of educational value.

“For students and schools outside of this elite world — and to be sure, this is the majority of us — reputation amongst academic elites has precious little to do with real-world outcomes.” – @AcademicInflux

There was a time, from which we are now decades removed, when college was designed for a select percentage of the population. That population was perceived as being of the necessary academic pedigree for higher education (though of course, this perception hinged on vast racial, gendered, and socioeconomic inequalities).

This framework meant that the population of students applying to colleges was much smaller and, in fact, the percentage of applicants admitted to each elite institution was dramatically higher than it is presently. Today, the college population has ballooned to the extent that an extremely wide range of occupations and careers hold the undergraduate degree as a basic threshold. To simplify the long history of college and its descent into commoditization: higher education is a transactional arrangement for many students (i.e., pay your tuition, get a degree, find a job).

While the reputation survey may well have informed a broad proportion of applicants at a time when most prospective college students were seen as academically elite (and well financed), it is likely substantially less useful to the middle-income student from an average-performing high school who views a basic college degree from an accredited institution as a natural path to a well-paying job. This student may not have access to a college most remarkable for its sterling reputation amongst other academics.

This raises a question: as influential as reputation surveys are in the college ranking ecosystem, is this information helpful to most students? Is it even possible that this information is harmful — that it creates a false impression of failure and shortcoming for the millions of students who simply won’t be able to access the few thousand seats that open annually at the nation’s top schools?

As we noted early in our discussion, college rankings have a major impact on how universities are perceived. Ironically, these rankings may well be most directly informed by … how these universities are already perceived.

Finding a Ranking Metric That Actually Measures Value

If you seek a measure of a school’s reputation within academia, the three leaders in the US college ranking space provide this. And this measure includes consideration of what the Stanford study calls traditional inputs (SAT scores, admissions selectivity, student-to-faculty ratio) and outputs (graduation rate, post-graduate earnings, post-graduate debt), which means that these ranking systems also provide useful statistical information on individual schools.

But is this sufficiently valuable to the nearly 1.5 million new students seeking full-time admission in a given semester?

According to a study by the Forum on Public Policy [PDF], “[w]hat has been lacking thus far are measures of the value added by these schools.” In other words, we need a way of determining how much additional knowledge students gain in a given educational program in comparison with other programs.

Statistical data is informative but impersonal. Reputation survey is personal but unreliable.

This concern speaks precisely to the reason we created the InfluenceRanking engine that powers Academic Influence. We were moved by a question largely unanswered in college ranking — how do you add personal value while adhering to reliable, empirical measures of excellence? The answer to this question is influence.

Why Influence?

Influence is the ability to impact how others think, feel, and act. Influence is central to leadership, expertise, and authority. It is a key trait of successful individuals and institutions. We believe that influence is a key to understanding educational excellence — not simply what makes a school excellent, but who makes a school excellent, and how they do it.

Of course, all ranking systems set out with the goal of measuring excellence. What’s more, each student has a very personal way of understanding institutional excellence — access to great professors and accomplished faculty; singular excellence in a specific discipline; a strong international reputation, etc. Students are also highly conscious of factors like educational outcomes, career prospects, and earning potential.

Indeed, there are many dimensions of institutional excellence that demand consideration. Our findings suggest, however, that influence, measured properly, captures all of these dimensions and more. Influence is a wider and more nuanced window into excellence. Perhaps even more importantly, influence corrects the most problematic aspects of college ranking, most particularly the outsized influence of reputation survey.

There are a few reasons why we think influence is a better way to measure excellence:

Influence is Objective
This school ranking method does not rely on self-reporting, internal data, or peer surveys. In other words, measurement of influence cannot be manipulated, nor can this system be easily gamed.
Influence is Dynamic
The measurable influence of individuals and institutions can change over time. Sometimes, it can change quickly and dramatically. As influencers and institutions innovate, achieve, write, publish, and research, they have the capacity to organically improve their own InfluenceRanking. Influence responds directly to real events and achievements, which means InfluenceRanking is subject to meaningful and measurable change, rather than mere confirmation of institutional reputation.
Influence is Quality Controlled
While the influence metric is engineered through machine learning, a team of data scientists and academics remains focused on better understanding and leveraging it. The algorithm behind InfluenceRanking is thus in a state of perpetual refinement, producing data that grows ever more reliable, accurate, and meaningful over time.
Influence is Nuanced
This metric offers useful ways of exploring a complex subject. Influence captures an intersection of practical, social, and internal experiences. This layered metric presents a more personal way of looking at prospective colleges, in contrast to impersonal quantitative metrics, such as graduation and retention rates, which are highly vulnerable to selection bias.
Influence Has No Agenda
The measurement of influence is informed simply and summarily by the catalogue of human achievements. While hierarchies persist in higher education — especially through insular and self-fulfilling methods of data collection like reputation survey — influence offers a unique window into excellence that exposes, rather than reinforces, ingrained gendered, racial, political, and socioeconomic inequalities.

Each InfluenceRanking is a dynamic and objective ranking produced by a combination of machine learning and human quality control.

Check out our methodology to see exactly how we measure influence!

InfluenceRanking Adds More Value

“More than any other college ranking metric — selectivity, graduation rate, endowment, even basic academic citation — influence conveys profound, human truths about a given institution and the people who comprise it.” – @AcademicInflux

Arguably, Reputation Survey is each college ranking leader’s way of adding value to what is otherwise largely a calculation of raw statistics. But as we have argued above, this approach would seem not to add sufficient value to the ranking process where the majority of prospective students are concerned. In fact, by relying so heavily on this data collection method, as opposed to a factor such as measurable influence, many leading college ranking services:

  • Overlook the profound impact that excellent professors and superstar students have had in shaping institutions;
  • Miss the opportunity to measure real-world achievements by current and past faculty members;
  • Deprive prospective students of a more personalized college selection experience;
  • Rely overwhelmingly on data which is reported directly by colleges, and is therefore vulnerable to bias, error, and manipulation; and
  • Feed into ingrained ranking hierarchies which often overlap with racial, gendered, and socioeconomic inequalities.

More than any other college ranking metric — selectivity, graduation rate, endowment, even basic academic citation — influence conveys profound, human truths about a given institution and the people who comprise it.

The influence metric gives us direct insight into:

  • Research
  • Publication
  • Innovation
  • Visibility
  • Diversity
  • Impact
  • Achievement
  • And much more…

These topics also describe what we hope to encounter as college students, what we aspire to advance as educators, and what we plan to achieve as graduates. Where you go to college is a huge decision. How you make this decision is a natural extension of influence — in particular, how you wish to receive influence, and how you wish to create it by gaining the skills, credentials, and qualifications to be willingly received as an influencer by others. On an institutional level, the ability of colleges and universities to host, facilitate, and stimulate these outcomes is both a meaningful and profoundly human way of conveying their excellence.

In fact, this speaks to a unique distinction between InfluenceRanking and traditional ranking metrics. We have clearly established that most of those metrics are highly vulnerable to gaming and manipulation. The same is not true of InfluenceRanking, but that doesn’t mean these rankings can’t be altered or improved over time. Based on the way we measure influence, there are a few ways that a school can consciously and purposefully move up the ranking ladder.

While our InfluenceRanking engine is continually being fine-tuned and improved in order to better gauge influence across larger and larger portions of the Web, we are aware of only three practical ways to rise in the AcademicInfluence.com rankings:

  • Hire more influential faculty (i.e., raid other schools and bring their high-influence talent on board).
  • Develop more influential home-grown faculty (i.e., enlist promising junior faculty and give them a conducive work environment in which to thrive and make groundbreaking advances).
  • Train more influential alumni (i.e., admit promising students and inspire them to do groundbreaking work after graduation).

In other words, the only way to game the system, as it were, is to actually add real value to the educational experience and the opportunities available to students. Improving in InfluenceRanking requires actual real-world improvement.

Conclusion

We will conclude by offering credit where credit is due. Leading rankers like USNWR regularly update their methodologies in the interests of improving outcomes (or at least producing slightly different outcomes) each year. Perhaps over time, these rankers will choose to provide greater transparency in their survey methods, or to reduce their reliance on potentially unreliable data.

Our InfluenceRanking, too, remains in a similar state of perpetual improvement.

It isn’t merely our goal to rank according to influence, but to gain a better, fuller, and more nuanced understanding of influence, what it is, how it works, and why it matters. Because we are data scientists, we will allow our findings to illuminate these topics over time. Indeed, the most exciting dimension of the influence metric is that it exists in a state of constant evolution, that major events, catalyzing developments, and cultural shifts will be reflected in this metric and in our findings. There is still so much we can learn from the data, and so much that influence can teach us about our schools and ourselves.

Check out these features on our site and let us know how AcademicInfluence.com is proving influential in your quest for knowledge.

Learn more about the college admissions process with a look at our complete guide.

Or get valuable study tips, advice on adjusting to campus life, and much more at our student resource homepage.

Do you have a question about this topic? Ask it here