What the Future Holds for Artificial Intelligence | Interview with Erik Larson
We met with Erik Larson to discuss his new book, what’s required for artificial intelligence to become a reality, and much more. Enjoy!
Author Erik Larson discusses his book The Myth of Artificial Intelligence, and not only clears the air surrounding superintelligence, but also sheds light on the empty claims made about general intelligence. After working in the field alongside influential computer scientists such as Raymond Kurzweil, Larson concluded that without a major conceptual breakthrough, artificial general intelligence is likely unachievable. Erik Larson’s book walks the reader through the advanced AI systems we use today, as well as what the future holds for developing general intelligence. Follow along as tech entrepreneur and author Erik Larson talks with Dr. Jed Macosko, academic director of AcademicInfluence.com and professor of physics at Wake Forest University.
Check out our article on the Most Influential Computer Scientists Today to find out who’s leading the field.
And if you’re interested in pursuing an academic career in computer science, take a look at the following:
- How to Major in Computer Science
- What Can I Do With a Master’s Degree in Computer Science?
- The Most Influential Schools in Computer Science
If you want to take a deeper dive into the fascinating topic of artificial intelligence, check out our article Controversial Topic: Artificial Intelligence.
Interview with author Erik Larson
0:00:01.0 Erik Larson: How are we drawing these conclusions? I’m right here doing this work, and we have no clue how to build systems that solve the problems that they say are imminent, that are right around the corner.
0:00:18.6 Jed Macosko: Hi, this is Dr. Jed Macosko at Academic Influence and Wake Forest University, and today we have Erik Larson, who’s just written an incredibly powerful, insightful book, The Myth of Artificial Intelligence. And I wanna hear more about this book. So Erik, tell me how you got started writing this book, and what it’s all about.
0:00:39.8 EL: Well, I... In terms of getting started, I had been thinking about this going all the way back to college days, and then I think what prompted it was when I was actually working in the field, I kept hearing the futurists talk, from Kurzweil and other people, who should know better. Kurzweil himself is very technical, and is... He won the National Medal of Technology for helping develop voice recognition systems and so on. And so I took... And there was this mismatch... So I started in college with these philosophical questions about AI, and then when I was actually working as a computer scientist, or, broadly speaking, an AI scientist, I saw this huge mismatch between the stuff that we were doing... And my field is natural language processing, so that’s directly relevant to the Turing test. So understanding language is what I work on, natural language like English, French, and so on. And what the futurists were talking about in the media and everywhere else, and I was thinking like, "How are we drawing these conclusions? I’m right here doing this work and we have no clue how to build systems that solve the problems that they say are imminent, that are right around the corner.
0:02:03.2 EL: Not only do they seem not around the corner, there’s a reasonable skepticism that we’re ever going to find a solution short of a major conceptual invention that we can’t foresee." And if it’s something that... A conceptual invention almost by nature you can’t foresee, and if that’s what we’re looking at, then all of the predictive vocabulary and speech that these guys use... Bostrom and so on. There’s a whole constellation of people that are perpetually, decade by decade, declaring that AI is right around the corner. If what we need is a conceptual revolution or an invention of some radical nature, then it, in fact, might not even be feasible at the limit. We may just find out that Turing machines don’t produce certain kinds of intelligent behavior that human minds do. But I couldn’t understand what they were talking about, and from where are they drawing their data? [chuckle] From what... [chuckle] So I wanted to write a book that cleared the air, because I thought there was just a ton of confusion, and I think some of it is potentially destabilizing, right?
0:03:12.2 EL: So if you have people declaring that humans are soon to be replaced by artificial intelligence that are far superior, there’s a... It creates a real disincentive for us to do much to fix things in our own society. So I think the message, if it was forced upon us because it was a scientific truth, that would be one thing, but if people are just speculating about futurism for whatever motives you wanna attribute, or they could be true believers, or they maybe have a market incentive to do that because they own big shares in Apple, or Google, or something and they want, you know... But if people are just speculating I think we needed a better discussion, so I wrote the book. Yeah.
0:03:56.9 JM: Wow, that is really cool. So basically what you’re saying is, unless a robot comes back in time from the future, like in Terminator, and we get a piece of the arm and we look inside and see the chips, we’re not gonna do this anytime soon. I mean, that’s what we need, is some breakthrough like that, and that’s just science fiction. Is that what you’re saying?
0:04:17.9 EL: Yeah. So a lot of the air gets sucked out of the room when you talk about AI on the issue of consciousness. So people will say, "Can we build a conscious machine?" And so a lot... Just so much ink is spilled over this issue of whether computational systems can have minds in the sense of consciousness, but the real issue for AI scientists like myself is, can they do intelligent things that are not narrow, like play chess, or play Go, or even play Jeopardy!? Can they actually exhibit a general intelligence, what people like to describe now as an artificial general intelligence, or AGI? That’s the question. Whether they’re conscious or not is kind of a sideshow issue that we can leave in philosophy class. What we can’t leave in philosophy class, as an AI scientist, is what sort of intelligent behaviors can be programmed, right? So I wrote the book focusing specifically on something called inference, which is: given what I know and what I see, what can I conclude? And if you don’t have the inference, you don’t have an intelligent system, whether it’s a person or a robot. No inference, no intelligence. So the limitations on inference in computational systems directly translate to the question about the future of AI. So yeah.
0:05:46.5 JM: Yeah, so you don’t really get into that whole Terminator thing in your book, you just say... You leave it kinda like, "The only way that we could ever experience true artificial intelligence that has this inference ability is if we got some breakthrough, be it a Terminator arm, be it something else, that’s what we really need, right?" Is that what you’re saying?
0:06:09.8 EL: Yeah, I mean essentially, if you... Any foreseeable extension of the capabilities that we currently have does not result in general intelligence. Just point blank, it doesn’t. So...
0:06:25.7 JM: So we’d need a huge leap forward.
0:06:27.3 EL: Yeah, there has to be something that happens, and so there are two possibilities. One is that we have our kind of Einsteinian moment where somebody realizes, "Oh, the reason that we have bad measurements at 90% of the speed of light is because we had to curve space-time." I mean, you know, you’re a physicist, right? And nobody thought about that. We were dealing with a Euclidean sort of space and then somebody said, "No, we have to use this Lobachevskian space." It’s like, "Oh." Well, nobody... That made certain measurements possible, gave us a richer picture. In the same sense, AI doesn’t need people preaching about smart machines; it needs somebody to figure out the problem in this fundamental, conceptual, innovative sense. Or we need to start admitting that we overshot the goal and there might be fundamental differences between minds and machines. It’s just a fact of nature, a fact of life, that we have differences, and we can join the two together in future development efforts, so...
0:07:29.9 JM: And of those two choices, you think it’s more likely that there is a fundamental difference between mind and machine, rather than you think that there’s a high likelihood that we’re gonna have this Einsteinian moment where we get the arm of a Terminator robot and we see the chip inside and we’re like, "Oh, that’s what we need to do." That first option of having the Einstein moment is not really likely in your book, is that what you’re thinking?
0:07:57.0 EL: Well, you don’t... You can’t... You don’t have a prior probability to put on it, so you can’t assess the probability of a conceptual innovation... Like, we don’t know what is possible, nor do we have an impossibility proof. So if somebody could formalize the problem and show that it’s impossible, like in mathematical logic or something, then that’d be one thing, but as far as I know, we don’t have a proof. So we have to leave the door open. And it’s difficult to assess probabilities when you don’t have a prior... On the basis of what? And so... But I think just intuitively, if you want sort of... So I give a disjunctive conclusion: either you have to wait for a miracle, effectively, or it’s impossible. That’s what we’re looking at.
0:08:43.3 JM: And that’s... That’s what your book says? That’s where you come to...
0:08:49.2 EL: Yeah, yeah. It’s either one of those. Yeah and...
0:08:50.1 JM: Okay.
0:08:50.5 EL: That’s as far as I could stretch the argument without myself introducing opinions, which is what I was complaining the futurists are doing... So I wanted to make my argument very, very supported by exactly what we know about the state of the art and foreseeable extensions of it, so... But if you want my own opinion, which falls a little bit outside the scope of the book... Yeah, if you ask me, is my laptop going to sprout a mind or something? Or is some server farm with super computational capacities and new algorithms and so on somehow going to exhibit the characteristics of a human person? I think that’s just an incredibly bizarre thought that I don’t think we should give much credence to, frankly. So my own opinion is, yeah, it’s just not the same thing. It’s apples and oranges. It’s like worrying that if I stop putting bagels in my toaster, it’s going to start griping and feeling things. I’m not particularly worried about that. A computer, after all, is an artifact, is the point, and artifacts don’t exhibit personhood, whether it’s a computer or a toaster or something. This is a fairly radical view that I have, and certainly people could take issue and try to mark territory where a computer is significantly different than other artifacts, but at the end of the day, something we design, I don’t think, is in the proper category of being something that exhibits personhood, ever.
0:10:25.7 JM: Right. How do you walk people through that argument? ’Cause obviously maybe some people pick up your book, being true believers in AI being right around the corner and becoming sentient and stuff like that, how do you kinda walk them through where... Even if they started off at that point, they might come around to seeing it your way?
0:10:46.4 EL: You mean, how do I do it in the book?
0:10:48.8 JM: Yeah like what is the structure of the book? I haven’t read the book and people watching this interview might be interested in buying the book, but give them a little taste for what you do to walk them through this argument.
0:11:02.0 EL: Yeah. I mean, so the primary argument, the kind of meat of the book that I think a broad base of readers will be able to grab onto, is where I explain exactly what advanced AI systems are doing... from Google, from Facebook, from Twitter, from Amazon’s recommendation systems, or movie recommendations on Netflix. Like, real examples of cutting-edge, bleeding-edge AI that we are interacting with currently, that represent the state of the art. I explain what they’re actually doing in terms of types of inference, and we actually know a lot about inference. It started all the way back with Aristotle, who gave us the syllogism, which is a kind of deductive inference using two premises and a conclusion, and it’s been expanded through George Boole. You’re a scientist, you know all this stuff, I’m sure. And we’ve formalized huge, huge pieces of inference in logical languages, so we know a lot about inference from the entire history of intellectual thought, and we can say a lot about what computers are doing with respect to inference. So we actually have a really good framework to ask, "What is this recommendation system at Netflix actually doing?" It turns out it’s doing induction from prior example, and if it is doing induction from prior example, it’s fairly straightforward to explain how it inherits all the problems of induction for purposes of general intelligence, and induction won’t give you general intelligence. It just won’t, and likewise, deduction.
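[Editor’s note: Larson’s point that recommendation systems do "induction from prior example" can be made concrete with a toy sketch. Everything below — the data, the similarity measure, the function names — is invented for illustration; real systems are vastly more elaborate, but the inferential pattern is the same: project past regularities forward.]

```python
# Toy sketch of "induction from prior example": recommend an unseen item
# by generalizing from users whose past preferences resemble the target's.
# All names and data are hypothetical.

def recommend(ratings, target_user, k=2):
    """Suggest the unseen item most liked by the k users most similar
    to target_user. Similarity = number of shared liked items."""
    target = ratings[target_user]

    # Inductive step: assume users who agreed in the past will agree again.
    neighbors = sorted(
        (u for u in ratings if u != target_user),
        key=lambda u: len(target & ratings[u]),
        reverse=True)[:k]

    # Tally items the neighbors liked that the target hasn't seen.
    scores = {}
    for u in neighbors:
        for item in ratings[u] - target:
            scores[item] = scores.get(item, 0) + 1
    return max(scores, key=scores.get) if scores else None

# Hypothetical viewing data: each user maps to a set of liked movies.
ratings = {
    "ann": {"Alien", "Blade Runner", "Arrival"},
    "bob": {"Alien", "Blade Runner", "Solaris"},
    "cat": {"Amélie", "Chocolat"},
}
print(recommend(ratings, "ann", k=1))  # → Solaris
```

The limitation Larson describes falls out directly: the system can only ever re-project patterns already present in the data, which is why inheriting "the problems of induction" blocks the road to general intelligence.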
0:12:44.4 EL: And these systems can be hybrids and use combinations of deduction and induction. For instance, IBM’s Watson system that played Jeopardy! now does all kinds of things. They had a kind of aborted attempt in the healthcare industry, but I think it’s still spreading its tentacles out as a business model for IBM under their cognitive computing labs. But it originally started as a hybrid system that a guy named Dave Ferrucci, who now works on Wall Street, a very, very smart guy, designed with a team, a very big team of people who brought a lot of human intelligence into dissecting the game of Jeopardy! and getting that computer to actually be able to beat Ken Jennings and the grand masters. In 2011 it actually beat all of the humans, and that system uses a combination of deduction and induction, so we call it a hybrid system.
0:13:38.9 EL: But even those hybrid systems inherit all the limitations of deduction and all the known limitations of induction, and we know we can... We very straightforwardly know that they can’t reach human general intelligence. We have something that’s called abduction, which is kind of an unfortunate word because it brings up like abducting... It brings this... It has this other connotation...
0:14:00.0 JM: Yeah.
0:14:01.0 EL: But abduction, or retroduction, is kind of reasoning from observation to likely hypotheses: it gives us plausible explanations of what we see. And it turns out that as we go through our lives, most of our inferences are abductions. Most of the inferences that we make just walking down the street, going to the supermarket, making sense of a conversation with your neighbor, those are all actually not inductive or deductive, but abductive inferences. So given that we need this to be general and flexible in our own human intelligence, and computers absolutely can’t reproduce that type of inference, they have to make use of induction and deduction and make these hybrid systems. And we know that the one cannot be subsumed into the other. There’s a logic... There are theorems that actually prove that you can’t reduce the one to the other. Well, then we know that we don’t have AGI by any foreseeable extension of Google or of these companies that are saying that they’re on the brink of it... We know that they’re not.
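[Editor’s note: abduction can be caricatured in code, and the caricature itself illustrates Larson’s limitation. The hypotheses, observations, and plausibility scores below are all invented; the crucial point is that the program must enumerate its hypotheses in advance, whereas human abduction generates novel explanations on the fly.]

```python
# Toy sketch of abduction: from observations, pick the most plausible
# hypothesis that would explain them. All data here is hypothetical.

# Each hypothesis maps to (observations it would explain, prior plausibility).
HYPOTHESES = {
    "it rained":             ({"wet grass", "wet street"}, 0.6),
    "sprinkler ran":         ({"wet grass"},               0.3),
    "street cleaner passed": ({"wet street"},              0.1),
}

def abduce(observations):
    """Return the most plausible hypothesis covering all observations,
    or None if nothing in the fixed list explains them."""
    candidates = [
        (prior, h)
        for h, (explains, prior) in HYPOTHESES.items()
        if observations <= explains  # hypothesis must explain everything seen
    ]
    return max(candidates)[1] if candidates else None

print(abduce({"wet grass", "wet street"}))  # → it rained
print(abduce({"snow"}))                     # → None
```

Note the built-in brittleness: faced with an observation outside its hand-coded hypothesis space, the program simply fails, which is exactly the open-endedness Larson argues induction-and-deduction hybrids cannot recover.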
0:15:07.3 EL: We just know that they’re not. So if somebody reads the book, I make that very, very clear. I try to... So yeah.
0:15:12.9 JM: I cannot wait. So it comes out in April, right?
0:15:17.5 EL: Yeah, April 6, yeah.
0:15:19.2 JM: Oh, awesome. Thank you so much for giving us a sneak peek at your book. I cannot wait for people to start reading it, myself included, my daughter, everybody who I know who’s interested in this topic. So thank you for spending the time with us today.
0:15:33.1 EL: Well, thank you, Jed, I appreciated the opportunity.