Distortions of Business School Rankings | Interview with Anjani Jain

We met with Dr. Anjani Jain to discuss discrepancies in business school rankings. Enjoy!

Professor Anjani Jain of Yale University realized that popular business school rankings are grossly oversimplified, and he has worked to refine them. He also discovered that the Bloomberg Businessweek business school ranking had discrepancies between the data the magazine published and the ranking it printed. Professor Jain determined that this discrepancy was either a result of their failure to normalize the crowdsourced data or of some post-calculation manipulation. The second possibility is more troubling, because the discrepancy would then have no clear explanation. Either way, he found that the ranking system Bloomberg Businessweek uses is much more complicated than advertised, given the data they release. Jain feels that school rankings often distort the public’s view of what a quality institution is, frequently disregarding schools that are not ranked in the top 30. Follow along as Deputy Dean for Academic Programs and Professor in the Practice of Management at Yale School of Management, Dr. Anjani Jain, talks with Dr. Jed Macosko, academic director of AcademicInfluence.com and professor of physics at Wake Forest University.

“The number of academic institutions of extremely high caliber, of great excellence, is a lot larger than what any of these rankings tend to suggest.” – Anjani Jain

Considering a degree in business? Visit Our Business Page, where you’ll find the best business colleges and universities, career information, interviews with top businesspeople, influential scholars in the field of business, great books, a history of the discipline, online MBA programs, and more.

Interview with Dr. Anjani Jain


Interview Transcript

(Editor’s Note: The following transcript has been lightly edited to improve clarity.)

0:00:18.2 Bloomberg Businessweek ranking of business schools

Jed Macosko: Hi, I’m Dr. Jed Macosko at Wake Forest University and Academic Influence. And today we have Professor Anjani Jain who is in the Business School at Yale University. And so we wanted to talk to you a little bit about how you have come across different rankings of business schools and what you found out about them, because I read about this in the news, and I wanted to find out a little more from you.

Anjani Jain: Great, good to meet you in this video format, Jed. I read a little bit of your blog and it seems very interesting, and I’m honored to be invited to this conversation. I should mention that I have long felt that the whole rankings enterprise is fraught with the possibility of distortions. Now, I understand that people like linear orders: one, two, three, four. And so it’s perhaps inevitable that these rankings will continue to exist. But just on the surface of it, to take the multi-dimensional complexity of a university experience, let’s say from the student’s perspective, and reduce it to a linear order is a great oversimplification. And even supposing one could collapse all these multi-dimensional elements into one cardinal measure of excellence, to then further reduce it to an ordinal rank is to ignore the fact that there are so many great universities in the country, let alone the rest of the world. So these rankings transmit a greatly oversimplified signal to their readers, and I have long worried about that. But I should acknowledge that these linear orders will continue to exist, because people like to think in those terms.

Jed: So you’ve seen the problem for a long time, and you know the rankings are gonna still exist. Of course, what I really wanna ask is how we could make them as good as possible, but first I just wanna hear the backstory on a particular ranking you dug into. Tell us more about that.

Anjani: Yes, this was the Bloomberg Businessweek ranking of business schools, which I should note, Jed, was the very first ranking of business schools to become public in the media. This goes back, I think, to 1988, and it was celebrated by its authors as the voice of the customer. What they did was to constitute the rank order based on surveys of two groups of stakeholders: one was the current students going through MBA programs, and the second was the employers of these graduates. And there was a certain logic to it. They purported to weight the two factors, the student survey and the employer survey, in equal measure, 50-50, to compute the overall rank. Over time, this ranking changed its methodology several times. So fast-forward to the current ranking, which came out about a month and a half ago. They have five factors that comprise the computation. And one interesting feature of this ranking is that instead of coming up with their own weights to assign to these five factors, they crowdsource the weights. In other words, they ask their stakeholders (their students, the alumni, the employers, etcetera) what the weights should be, and through some process of averaging, they come up with these crowdsourced weights for the five factors, which I thought was certainly better than people at the magazine coming up with their own weights.
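To make the averaging step concrete, here is a minimal sketch in Python of how stakeholder-proposed weight vectors might be combined into one published weight vector. Bloomberg Businessweek has not disclosed its exact averaging rule, so the survey responses and the plain mean used here are illustrative assumptions only.

```python
import numpy as np

# Hypothetical weight vectors over the five ranking factors, one row per
# surveyed stakeholder (students, alumni, employers). These numbers are
# made up for illustration; the real survey responses are not public.
proposals = np.array([
    [0.40, 0.20, 0.15, 0.15, 0.10],
    [0.30, 0.30, 0.20, 0.10, 0.10],
    [0.35, 0.25, 0.15, 0.15, 0.10],
])

# One plausible averaging rule: a plain mean, renormalized to sum to 1.
crowdsourced = proposals.mean(axis=0)
crowdsourced /= crowdsourced.sum()
print(crowdsourced)  # roughly [0.35, 0.25, 0.167, 0.133, 0.10]
```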

Now, a slight digression. One computation I have done with these rankings, for my own curiosity, is the following. It turns out that if you look at the entire list of schools they rank, in this case 84 of them, the pattern of variances on these five factors actually changes quite a bit across the list. To make it concrete, if you compute the standard deviation on these scores for the top 20 schools, the next 20 schools, and so on, you will get different variances. What that means is that the ranking of, say, the top 20 schools is determined by effective weights that are different from the weights they apply to the whole list. So it’s an interesting statistical conundrum, and this is where my background in operations research perhaps comes in handy: for any subset, let’s say the top 20 schools in their published ranking, I can compute the effective weights that determine that order, if I have access to the scores that they publish on these five factors. What Bloomberg Businessweek does is publish the data in what they call normalized form. On each of these factors, they normalize the scale from 0 to 100 with just a linear stretching or compression of the raw data. And based on those scores, one can actually determine the effective weights.
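One way to formalize this point, sketched below with stand-in data: when a fixed weight vector is applied to 0-to-100 normalized scores, a factor’s actual pull on the ordering within any subset of schools scales with that factor’s dispersion within the subset. Rewriting the composite score in terms of within-subset z-scores makes the effective weight of factor i proportional to w_i times its standard deviation there. The scores and weights below are invented for illustration; the real ones come from the magazine’s published tables.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in raw scores for 84 schools on 5 factors with deliberately
# unequal spreads; the real normalized scores are what the magazine publishes.
raw = rng.normal(loc=50, scale=[4, 15, 2, 9, 6], size=(84, 5))

def normalize(x):
    """Linearly stretch each factor so its minimum maps to 0 and its
    maximum to 100, as the magazine describes."""
    lo, hi = x.min(axis=0), x.max(axis=0)
    return 100 * (x - lo) / (hi - lo)

scores = normalize(raw)
weights = np.array([0.35, 0.25, 0.17, 0.13, 0.10])  # illustrative weights

composite = scores @ weights
order = np.argsort(-composite)          # index 0 = top-ranked school

def effective_weights(rows):
    """Within a subset, composite = sum_i (w_i * sd_i) * z_i + const,
    so the ordering there is driven by weights proportional to w_i * sd_i."""
    sd = scores[rows].std(axis=0)
    ew = weights * sd
    return ew / ew.sum()

print(effective_weights(order[:20]))    # effective weights among the top 20
print(effective_weights(order))         # effective weights over all 84 schools
```

Because the per-factor standard deviations differ between the top 20 and the full list, the two printed vectors generally differ, which is exactly the conundrum Jain describes.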

So as I was doing this exercise for the top 20, it wasn’t quite adding up. I did it for the top 50, and it still wasn’t adding up. So then I included the entire list of 84 schools to determine what the effective weights had to have been to replicate the ranking, and it turned out that those weights were dramatically different from the weights they claim to have used through this crowdsourcing process. That got me puzzled, and I wrote to the editors to ask, “What am I missing?” What I got back was what I thought was a very unresponsive answer, and that led to my digging deeper, and it boiled down to the following... I am sorry to be long-winded.

It boiled down to the following: there’s a simple yes-or-no question, and depending on the answer to that question, two possibilities exist. One possibility is that they did not normalize their raw data before they applied their crowdsourced weights. Well, if that’s what happened, then of course the weights that I computed, which are greatly different from the crowdsourced weights, are the weights that actually determined the order. In other words, by failing to normalize the data before applying the weights, they have greatly distorted the weights that they tout as crowdsourced. And if that’s what happened, they can still fix it: they can go back, normalize the data, and publish an amended set of rankings.

Which I actually gave them: here is what the resulting ranking would be if you used your own normalized data. And that’s a computation anyone can do, because the normalized data are published. All you do is apply their vector of crowdsourced weights, and you have a composite score on which to base the ranking. That’s one possibility. The second possibility is more troubling: that they did normalize the data before applying the weights. In that case, the discrepancy between the order of schools in the ranking they published and what you would get by applying their weights to their normalized data, which is public, is unexplainable, and it can only be attributed to some mysterious post-calculation manipulation...

[laughter]

...in order to get the list they publish. And I noted that if you compute the ranking in the way one would to remain true to the crowdsourced weights, by using the normalized data, then the order that results is a little topsy-turvy compared to what people might assume the order to be. For instance, The Wharton School ends up ranked 28, MIT ends up ranked 21, UT Dallas goes up to rank 9, and Emory goes up to rank 10. And I can’t know what actually happened, because they refused to answer this simple yes-or-no question. The question was, “Did you normalize the raw data before you applied the weights to it?” And their answer was, in effect, “This is a deliberately private part of the methodology, to thwart people like you from reverse engineering the ranking and gaining an unfair advantage.”
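The replication Jain calls “a computation anyone can do” amounts to a few lines once the published tables are loaded. Here is a minimal sketch; the stand-in data below only demonstrate the mechanics, and substituting the magazine’s real normalized scores, crowdsourced weights, and printed ranks is where Jain reports the nonzero discrepancy.

```python
import numpy as np

def implied_ranks(scores, weights):
    """Rank schools (1 = best) by applying a weight vector to normalized
    factor scores, exactly as the advertised methodology describes."""
    composite = scores @ weights
    order = np.argsort(-composite)
    ranks = np.empty(len(composite), dtype=int)
    ranks[order] = np.arange(1, len(composite) + 1)
    return ranks

# Self-contained demo with stand-in data. With the real published tables,
# `scores` would hold the 84x5 normalized scores, `weights` the crowdsourced
# weight vector, and `published` the magazine's printed ranks.
rng = np.random.default_rng(1)
scores = rng.uniform(0, 100, size=(84, 5))
weights = np.array([0.35, 0.25, 0.17, 0.13, 0.10])
published = implied_ranks(scores, weights)   # pretend these are the printed ranks

discrepancy = implied_ranks(scores, weights) - published
print(np.abs(discrepancy).max())  # 0 here by construction; Jain found otherwise
```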

Jed: Okay. Well, they have spoken. [laughter] So it doesn’t look like they’re going to give you anything more.

Anjani: No.

Jed: You’re not... Yeah, okay.

Anjani: Yeah, so I think they have now closed ranks behind this stance, and they have told me that they are not going to answer any further questions. They also did not respond to some of the journalists who reached out.

Jed: Yeah, I saw that. Well, it’s their ranking, but I think you’ve raised a good point that it’s not as simple as they want to portray it. It’s not just crowdsourced weightings applied to the five published categories.

Anjani: No.

Jed: And then they should not advertise it as that, because that’s not what it is. So whatever it is, they can do whatever they want with it, but they should not advertise it. At some level, this is selling their brand and they are using false advertising to sell their brand, which is probably not fair. Good, but nothing will happen. Right.

In your opinion, there’s gonna be no fallout from this, it’s just you and a few journalists?

Anjani: It’s hard for me to tell, Jed, because I think a number of deans of business schools are troubled by this. And it’s an annual ranking, so it won’t be too much time before they come back, request reams of data from schools as they always do, and go through these survey exercises. One other thing to add to what I said previously: I also went back to two of their previous rankings. They did not publish a ranking in 2020 due to the pandemic, but I looked at the 2019 and 2018 rankings, and I found exactly the same discrepancies between what they published and what you would get by applying their methodology to their data. But here is perhaps something interesting: the extent to which the so-called top tier of this ranking gets churned by applying their weights to their data has grown over time. In other words, the churn you would see in, let’s say, the top 20 or 30 has become much larger as we get to 2021. And it reminded me of the Chinese saying that, “When you ride a tiger, it becomes very difficult to dismount.”

Jed: Very true. Well, I’d actually like to end on that, except that I still want to hear what you would do if you were in charge of all rankings everywhere, in order to make it so that people can still have the rankings they want...

Anjani: Yeah.

Jed: But then it will do the least amount of damage.

Anjani: Yeah.

Jed: Even if they’re correct. Let’s say Bloomberg Businessweek fixes this problem and everybody has clear, transparent rankings. Is there anything else that needs to be done besides that?

Anjani: So, at least a partial answer to your question, Jed. The best answer I have seen was actually given by a colleague of mine right here at Yale, Professor Andrew Metrick, who’s a professor of finance. He, with a couple of co-authors, published a paper quite some time ago looking at undergraduate college rankings. And they made the following observation: when students who have multiple offers choose a college, economists call it revealed preference. If a student has the choice between A, B, and C, the school they actually go to carries a lot of information, because one could argue that all of the past reputation of these schools, plus all of the future prospects of graduating from A, B, or C, are collapsed into this choice; students are voting with their feet. So what they did was to get data on the revealed preferences of students who had these multiple choices and construct a ranking based on those choices.
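For a sense of the mechanics, here is a minimal sketch of one way to turn such choices into a ranking. The paper Jain mentions fits a matriculation “tournament” model; as a stand-in, the sketch below fits a simple Bradley-Terry model by the standard MM iteration, with invented head-to-head matriculation counts, so both the model choice and the data are assumptions.

```python
import numpy as np

# Hypothetical head-to-head counts: wins[(a, b)] = number of admitted
# students who chose school a over school b. Real matriculation data of
# this kind is what a revealed-preference ranking would be built from.
wins = {("A", "B"): 30, ("B", "A"): 10,
        ("A", "C"): 25, ("C", "A"): 15,
        ("B", "C"): 22, ("C", "B"): 18}

schools = sorted({s for pair in wins for s in pair})
idx = {s: i for i, s in enumerate(schools)}
n = len(schools)

W = np.zeros((n, n))                      # W[i, j] = times i was chosen over j
for (a, b), count in wins.items():
    W[idx[a], idx[b]] = count

# Bradley-Terry strengths via the standard MM update:
# p_i <- (total wins of i) / sum_j n_ij / (p_i + p_j)
p = np.ones(n)
for _ in range(200):
    total_wins = W.sum(axis=1)
    games = W + W.T                       # n_ij: comparisons between i and j
    denom = (games / (p[:, None] + p[None, :])).sum(axis=1)
    p = total_wins / denom
    p /= p.sum()                          # normalize; overall scale is arbitrary

for s in sorted(schools, key=lambda t: -p[idx[t]]):
    print(f"{s}: strength {p[idx[s]]:.3f}")
```

Sorting schools by the fitted strengths yields the ranking; the appeal of this design is that the “scores” come from what students actually did, not from survey weights.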

And that methodology, to me, seems the most appropriate. If you’re going to reduce this multi-dimensional complexity to a linear order, that’s perhaps the way I would recommend, and it surprises me that no media organization has picked up on this idea. The added point I’ll hasten to interject here is the following perception that I have: just in the US alone, the number of academic institutions of extremely high caliber, of great excellence, is a lot larger than what any of these rankings would tend to suggest. And what I worry about is that these rankings direct people’s attention to this pecking order, and they tend to do this, I think, in a way that is very distorting. I did my graduate work at a public institution, UCLA, and I know from my own personal experience that I worked with an extraordinary group of faculty and scholars there.

I had the good fortune to interact with them, and that fact gets obscured. I could say this about a large number of institutions, public and private, that don’t figure in the top 20 or 30 of any of these rankings. These rankings divert our attention, and eventually, I think, that affects the ability of these institutions to attract faculty, to attract resources, funding, etcetera. Donors get influenced by these rankings, not just prospective students; employers get influenced by them. So, on the whole, I continue to have misgivings about the exercise of reducing all of this to a linear order.

0:18:13.2 Sign off

Jed: Well, thank you so much. It has been truly wonderful to get to know you, Professor Jain, and your perspective. And I look forward to interacting with you more in the future.

Anjani: Thank you for giving me the chance.

Jed: Great.

Watch other fascinating interviews with top influencers in every field, such as Nancy Scheper-Hughes and Jeffrey Stake.

See our Resources Guide for additional tips on studying, surviving college, starting your job search, and more.
