Artificial intelligence (AI), in the simplest terms, refers to computing that aims to mimic human cognitive functions like learning, problem solving, and adaptation to environmental conditions. With the evolution of computer science, computing machines have accelerated in their capacity to demonstrate “intelligence” in areas such as reasoning, planning, natural language processing, perception, and much more. Because of the complexity of this controversial topic, it is also a popular subject of persuasive essays.
The artificial intelligence debate concerns the potential dangers posed by unethical or uncontrolled artificial intelligence versus the technological benefits to humanity.
Advocates for the continued exploration of AI view this technology as a viable approach to confronting an array of challenges in computer science, data analysis, research, production, and much more. Its critics, however, warn of the dangers of unchecked artificial intelligence, both in terms of the threat that computers may evolve to achieve “superintelligence” and ultimately declare independence from humanity, and in terms of the degree to which technological automation and reliance on algorithmic behavior may threaten the human labor economy and reinforce existing sociological biases.
Though we often think of AI as an inherently controversial topic, artificial intelligence is actually an umbrella term for various areas of computing including robotics, machine learning, and artificial neural networking. Though these areas all originate from a common starting point—using computing technology to simulate the behavior of the human mind—each represents a distinct area of technological exploration. In other words, AI is not just one specific innovation, but a catch-all for a wide range of innovative pursuits. Notable innovations which employ some measure of AI include self-driving cars, strategic gaming systems, and digital voice assistants. Each of these applications demonstrates the ways in which an “intelligent agent” can be taught to simulate human thinking, and consequently, to refine its own performance of tasks as conditions dictate.
In spite of its potential for innovation, many have warned, since the inception of the AI concept in the 1950s, of its hidden perils. In fact, as the concept emerged in the public consciousness, it also became a touchstone issue for ethicists, philosophers, and writers concerned over the existential threat of AI run amok. This informs the two points of view in the AI debate. These are not diametrically opposed views but instead reflect the duality of artificial intelligence: both its promise and its perils. It is entirely possible for an individual to support the progress suggested by artificial intelligence while also expressing reservations about the threat of unrestrained AI. With that said, a debate over AI is framed by these dual viewpoints of promise and peril.
Though there are some technophiles who might identify themselves as advocates for unrestrained experimentation with AI, and some detractors who might argue that all forms of AI represent a danger to humanity, the vast majority of relevant observers are more likely to take a balanced perspective. Legendary physicist Stephen Hawking captured this dichotomy well, both recognizing that artificial intelligence has created benefits for humanity in its current form and simultaneously warning that “the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
The goal of this discussion is to examine the various perspectives shaping the public discussion over artificial intelligence, and to provide you with a look at some of the figures past and present who have influenced this discussion. The figures selected may not always be household names, but are instead selected to provide a nuanced look at the public discourse on this subject, and in some cases, even to provide you with a list of individuals to contact as part of your research.
In addition to prompting philosophical debate, artificial intelligence has long been a topic of fascination for science fiction authors. In fact, a great many histories of artificial intelligence begin with a reference to Mary Shelley’s 1818 novel, Frankenstein; or, The Modern Prometheus. This well-known narrative captures the duality of artificial intelligence: the unbridled will of humankind to invent, innovate, and breathe intelligent life into its own creations, and the monstrous danger these creations can represent once gifted with this intelligence. Though this famous allegorical novel predates the emergence of computer technology by well over a century, its warning remains front and center in debates over artificial intelligence.
British mathematician and philosopher Alan Turing is widely considered the father of both theoretical computer science and artificial intelligence. His study of logic led him to conceive of a machine that could simulate mathematical deduction using only ones and zeroes. This Turing machine is often considered the first incarnation of what would become the general-purpose computer, and Turing’s insight about binary coding would become the basis for the computer algorithm. Ultimately, Turing showed that a computational machine could simulate formal reasoning.
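To give a flavor of the concept, here is a minimal, hypothetical Turing machine simulator in Python. The rule format, state names, and the small example program are illustrative assumptions for this sketch, not Turing’s original formalism.

```python
def run_turing_machine(rules, tape, state="start", steps=100):
    """rules maps (state, symbol) -> (write_symbol, move, next_state)."""
    cells = dict(enumerate(tape))   # sparse tape; unwritten cells read "0"
    head = 0
    for _ in range(steps):          # step cap guards against non-halting rules
        if state == "halt":
            break
        symbol = cells.get(head, "0")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Example program: increment a binary number stored least-significant
# bit first ("1101" encodes 11; the output "0011" encodes 12).
rules = {
    ("start", "1"): ("0", "R", "start"),  # carry: flip 1 -> 0, move right
    ("start", "0"): ("1", "R", "halt"),   # absorb carry: flip 0 -> 1, halt
}
print(run_turing_machine(rules, "1101"))  # -> "0011"
```

Even this toy version shows the essential idea: a small, fixed table of rules over ones and zeroes is enough to carry out mathematical procedures.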
Turing’s findings prompted him to explore the possibility that machines could be taught to “show intelligent behavior.” This question is at the root of the first generally recognized demonstration of artificial intelligence: the formal design, by American researchers Warren McCulloch and Walter Pitts, of Turing-complete “artificial neurons”—machines said to mimic the way a biological neuron functions.
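A rough sketch of that idea, under the simplifying assumption that inputs are binary and the neuron merely compares their sum against a fixed threshold (the function name and example values are invented for illustration):

```python
def mp_neuron(inputs, threshold):
    """Fire (return 1) when the sum of binary inputs meets the threshold."""
    return 1 if sum(inputs) >= threshold else 0

# With two inputs, a threshold of 2 computes logical AND,
# and a threshold of 1 computes logical OR.
print(mp_neuron([1, 1], threshold=2))  # 1: AND fires
print(mp_neuron([1, 0], threshold=2))  # 0: AND does not fire
print(mp_neuron([1, 0], threshold=1))  # 1: OR fires
```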
While Turing’s findings sparked innovation in scientific communities around the world, the first real concentration of activity around artificial intelligence started with a 1956 workshop at Dartmouth College. There, computer scientist and professor John McCarthy coined the term artificial intelligence. McCarthy, alongside fellow attendees Allen Newell, Herbert Simon, Marvin Minsky, and Arthur Samuel, became the founders of AI research.
Their research led to a wave of new computer developments, with machines demonstrating the capacity to play checkers, solve algebraic word problems, prove logical theorems, and speak English. These developments were of especially keen interest to the Department of Defense, which began funding AI laboratories around the U.S. in the 1960s.
The Department of Defense became the leading entity in the exploration of artificial intelligence in the midst of a global Cold War and dual weapons and space races between the United States and the Soviet Union. As the U.S. explored the prospects of AI-driven weapons systems, concern over artificial intelligence dovetailed with nuclear anxiety and space exploration to produce a rich literary tradition.
As with Mary Shelley’s Frankenstein more than a century prior, numerous works of science fiction best capture the duality of artificial intelligence. Among the most prominent were two released in 1968: Arthur C. Clarke’s 2001: A Space Odyssey, in which the HAL 9000 computer achieves sentience and consequently murders its human masters in the vast abyss of outer space; and Philip K. Dick’s Do Androids Dream of Electric Sheep?, in which the hunt for rogue androids calls into question the very nature of what it means to be human.
These are just a few of the landmark achievements in literature spawned by earnest concerns over what artificial intelligence means, and what dangers it may represent both to the singularity of the human mind and to the perpetuation of human civilization. More recently, a number of films have followed in this tradition, with prominent examples including The Terminator (1984) and The Matrix (1999), both of which present a dystopian future in which machines have turned on their human masters and consequently pursued either the enslavement or extermination of humanity.
Though the most apocalyptic concerns played out through science fiction, the 1970s also brought a wave of more immediate and practical objections. The most notable of these was a 1973 report by British mathematician James Lighthill, published in the volume Artificial Intelligence: a paper symposium. Commissioned by the British Science Research Council as a way to resolve discord at one of the U.K.’s most important sites for AI research, the University of Edinburgh’s Department of Artificial Intelligence, the report expressed a high level of skepticism about the real value of research in AI.
After nearly two decades of excitement around this area of science, Lighthill was unimpressed by the apparent progress, arguing that research in robotics and language processing had shown little practical value outside of controlled laboratory settings. His findings initiated a backlash against AI in the form of diminished investment and experimentation. The decade that followed is often characterized as an “AI winter,” a period in which funding and the pursuit of innovation cooled to a near standstill.
The 1980s saw a surge in computing capabilities, and with it, a renewed interest in the potential of artificial intelligence. Invented in the 1970s, the expert system proliferated on a commercial level in the early 1980s. Often described as the first commercially successful form of AI software, the expert system employed “if-then” programming rules to simulate the decision making of a human expert.
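As a rough illustration of that “if-then” structure, here is a minimal, hypothetical rule engine in Python; the medical rules and facts are invented for demonstration and do not reflect any real expert system.

```python
# Each rule pairs an "if" condition over known facts with a conclusion,
# mimicking the codified judgment of a human expert.
RULES = [
    (lambda f: f["fever"] and f["rash"], "consider measles"),
    (lambda f: f["fever"] and not f["rash"], "consider influenza"),
    (lambda f: not f["fever"], "no acute illness indicated"),
]

def infer(facts):
    """Fire the first rule whose condition matches the known facts."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no rule applies"

print(infer({"fever": True, "rash": False}))  # -> "consider influenza"
```

Real expert systems chained hundreds or thousands of such rules, but the underlying mechanism was this same conditional matching rather than any open-ended reasoning.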
High-profile pursuits like Japan’s Fifth Generation Computer Systems project helped touch off a new wave of investment in both the U.S. and the U.K. In 1981, the first IBM PC was introduced, sparking an explosion in the commercial use of computers.
By the middle of the decade, AI had attracted more than $1 billion in investment, even as some warned that this new wave of interest betrayed excessive optimism about the possibilities of AI computing. The optimism and skepticism collided around the Lisp programming language, which was used to pioneer technologies including laser printing, windowing systems, computer graphics rendering, and more. The specialized Lisp machines built to run this software were supplanted by the onset of the microcomputer revolution. As more affordable desktop computers hit the market in the mid-’80s, the previously lucrative field of Lisp machine construction collapsed, and with it collapsed the then-current level of enthusiasm for AI. Once again, the technology community lapsed into an AI winter.
This would change dramatically with the late-’80s advent of Artificial Neural Networks (ANN). In the simplest terms, ANN is a computing system designed to mimic the biological neural network in which synaptic connections send information between nodes. This concept was the subject of a landmark 1989 book called Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.
The use of the biological neural network as a model proved a jumping-off point for a new generation of developments in artificial intelligence, advances centered on confronting challenges that had largely been beyond the reach of conventional algorithms. Through the course of the 1990s and early 21st century, these neural networks proved increasingly capable of “learning,” using sample observations to adapt and improve their efficiency at specific tasks. The result was a dramatic step forward in the ability of computing systems to confront challenges in logistics, data mining, and practical applications like medical diagnosis.
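To illustrate what this kind of learning from sample observations can look like, here is a minimal sketch of the classic perceptron update rule, one simple form of neural learning; the training data (the logical OR function), learning rate, and epoch count are illustrative assumptions, not any specific historical system.

```python
def train_perceptron(samples, epochs=25, lr=0.1):
    """Learn weights for a single neuron from (inputs, label) samples."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            prediction = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            error = label - prediction   # 0 when the guess is correct
            w1 += lr * error * x1        # nudge each weight toward
            w2 += lr * error * x2        # the correct answer
            b += lr * error
    return w1, w2, b

# Learn the logical OR function from four sample observations.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_perceptron(samples)
for (x1, x2), label in samples:
    print((x1, x2), 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0)
```

The key point is that nothing in the code spells out the OR function itself; the weights are adjusted from examples until the behavior emerges, which is the essence of what these networks began to do at scale.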
The starkest demonstrations of AI’s ascendance came not in laboratories, but in high-profile entertainment. In 1996, world chess champion Garry Kasparov defeated the IBM supercomputer Deep Blue under tournament conditions. The following year, Deep Blue and Kasparov met for a rematch. This time, the computer won.
This victory was echoed in 2011, when IBM’s AI-powered Watson appeared as a contestant on the trivia game show Jeopardy! Facing off against the show’s two greatest champions, Ken Jennings and Brad Rutter, Watson emerged the victor. Underlying Watson were various layers of artificial intelligence capability, including natural language comprehension, evidence-based learning, and hypothesis generation. Following the demonstration of its abilities on the venerable game show, Watson became the root technology in countless developments, including demographic marketing, medical diagnosis, recipe generation, customer service, and much more.
This uptick in commercial applications would touch off the current era in which the applications of artificial intelligence are relatively inextricable from the practical exploration of computer technology, and have permeated countless aspects of our everyday lives.
Using our own backstage Ranking Analytics tools, we’ve compiled a list of the most influential figures concerning the issue of artificial intelligence in the U.S. between 1900 and 2020. This list has been vetted to ensure all of those included have been influential in the philosophical exploration, ethical examination, or technological advancement of artificial intelligence and related computing challenges. This includes computer scientists, engineers, mathematicians, ethicists, philosophers, and writers—all influencers who have directly impacted the public debate over artificial intelligence:
Rank | Person |
---|---|
1 | Marvin Minsky |
2 | Alan Turing |
3 | Herbert A. Simon |
4 | Nick Bostrom |
5 | Ray Kurzweil |
6 | Allen Newell |
7 | Rodney Brooks |
8 | John McCarthy |
9 | Stuart J. Russell |
10 | Stephen Hawking |
Check out the Top Influential Computer Scientists Today!
Using our own backstage Ranking Analytics tools, we’ve compiled a list of the most influential books on the topic of artificial intelligence in the U.S. between 1900 and 2020. This list is vetted to exclude popular fiction which has merely used artificial intelligence as a narrative device, in favor of prominent works in philosophy, science, and science fiction which have made a direct impact on the public discussion through their examination of the practical and ethical dimensions of artificial intelligence.
Rank | Book |
---|---|
1 | Superintelligence: Paths, Dangers, Strategies |
2 | Neuromancer |
3 | The Mind’s I |
4 | The Age of Intelligent Machines |
5 | The Age of Spiritual Machines |
6 | The Singularity Is Near |
7 | Frankenstein |
8 | Our Final Invention |
9 | Do Androids Dream of Electric Sheep? |
10 | The Ghost in the Machine |
Check out the Top Influential Physicists Today!
Today, we are faced less with the question of whether artificial intelligence has practical value; AI science has permeated our technology. Instead, we face questions about its potential uses and the ethical and practical concerns surrounding them. In the face of these developments, Swedish philosopher Nick Bostrom published Superintelligence in 2014, a prominent text on the existential risks of artificial intelligence and the ways humans might confront these threats.
Bostrom’s text would seem to anticipate 2015, which is frequently recognized as a landmark year for AI. With the full proliferation of cloud computing infrastructure, it had become considerably more affordable to explore neural network technology. This development invited a whole new wave of entrants into the field and prompted far more ambitious exploration from major tech companies. Google alone made this the basis for more than 2,700 new projects.
Leading technologists recognized both the potential and the pitfalls of this proliferation. Late in 2015, prominent technologists including Peter Thiel and Elon Musk teamed up to form OpenAI, a group dedicated to ensuring the safe and ethical pursuit of AI-based innovation. In addition to the large, existential dangers highlighted by cautious technologists, philosophers, and novelists, the increasing practical applications of AI have invited concern from sociologists.
For instance, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an algorithmic program used by some U.S. courts to inform case management decisions based on the assessed likelihood that a defendant will become a repeat offender. A 2016 study revealed that the system overwhelmingly, and inaccurately, predicted recidivism among black offenders while failing to predict recidivism among white offenders. This is considered evidence that a machine learning algorithm can be built to reflect the racial biases of its creators, or of the institution in which it is deployed.
In spite of these concerns, and in spite of the skepticism voiced by detractors in the 1970s and 1980s, artificial intelligence is simply an infrastructural reality today. According to a 2017 survey, one in five companies indicated that they had “incorporated AI in some offerings or processes.”
Today, AI is a part of our everyday lives, powering supply chain management for the big-box stores where we shop, driving daily movements on the stock market, and answering our questions under names like “Siri” and “Alexa.” The current controversy over AI is held against a backdrop of exploration and innovation. Indeed, our reliance on artificial intelligence is largely inextricable from our general reliance as a civilization on computer technology.
Check out our Interview with Dr. Burkhard Rost and find out “How machine learning boosts protein and genetic studies”
Our goal in presenting subjects that generate controversy is to provide you with a sense of some of the figures, both past and present, who have driven debate, produced widely recognized works of research, literature, or art, proliferated their ideas widely, or who are identified directly and publicly with some aspect of this debate. By identifying the researchers, activists, journalists, educators, academics, and other individuals connected with this debate—and by taking a closer look at their work and contributions—we can get a clear but nuanced look at the subject matter. Rather than framing the issue as one side versus the other, we bring various dimensions of the issue into discussion with one another. This will likely include dimensions of the debate that resonate with you, some dimensions that you find repulsive, and some dimensions that might simply reveal a perspective you hadn’t previously considered.
On the subject of artificial intelligence, the debate requires us to consider both advocates of unbridled innovation and those who have publicly pondered the practical, philosophical, ethical, and existential dangers that may lurk within such innovation. Our InfluenceRanking engine gives us the power to scan the academic and public landscape surrounding the artificial intelligence issue using key terminology to identify consequential influencers. In order to present the range of viewpoints on this topic, we have explored the influence surrounding key terms in the field of technology, including “artificial intelligence,” “machine learning,” and “Artificial Neural Networks.” On the other side of this debate are those who have expressed reservations about various aspects of AI, including those who raise concerns about “AI ethics,” those who warn about the risks of “superintelligence,” and those who point to the risk of “algorithmic bias.” Also included in our look at key terminology are those from prominent organizations confronting the essential implications of AI, including the “Future of Life Institute,” and philosophical movements in advocacy of AI, such as “transhumanism.”
As with any topic that generates public debate and disagreement, this is a subject of great depth and breadth. We do not claim to probe either to the bottom of this depth or the borders of this breadth. Instead, we offer you one way to enter into this debate, to identify key players, and through their contributions to the debate, to develop a fuller understanding of the issue and perhaps even a better sense of where you stand.
For a closer look at how our InfluenceRankings work, check out our methodology.
Otherwise get started with a look at the key words we used to explore this subject:
The key term in our discussion, “artificial intelligence,” refers to the broad field of computing technologies that have emerged from the aim of using computers to simulate the functional elements of the human mind. Influencers in this field have tended to be technologists, computer scientists, and entrepreneurs.
Falling under the umbrella term of artificial intelligence, “machine learning” refers to the ability of computers to integrate evidence of outcomes into improved processes, greater accuracy, and heightened efficiency. Like its umbrella term, machine learning tends to invoke a list of technologists, computer scientists, and mathematicians among its top influencers.
“Intelligent agent” is the technical term for any computing machine given the ability to simulate human intelligence. Engineers and entrepreneurs have been particularly influential in the development of practical intelligent agents.
As AI technology has emerged to greater prominence and practical application, so too have concerns emerged about the ethical deployment of this technology. This has prompted an array of philosophers, activists, and public servants to write or organize around the need for measured exploration of AI’s potential. Influencers in this area have advocated for caution, or “machine ethics,” even in the midst of our drive to innovate.
The advent of “Artificial Neural Networks” (ANN) in the late ’80s spurred the wave of AI innovations that permeate commerce and technology today. In simplified terms, an ANN is a simulation of the biological neural network, and it is at the root of countless innovations by companies like Google, Facebook, and Twitter. Leading influencers in this area are typically computer scientists and mathematicians.
Closely tied to the concept of machine ethics, “AI control” suggests that humanity must take steps to control any innovations in AI so as to preempt the risks of misguided application and superintelligence. This area has largely been the province of influential robotics engineers.
“Superintelligence” refers to the perceived danger that artificial intelligence can ultimately lead to machines which are self-aware and superior to humanity in their reasoning and learning capacities. This danger is at the root of many philosophical concerns over the existential threat posed by intelligent agents, and also informs the premise at the heart of many prominent works of dystopian science fiction.
Rooted in the anxieties related to artificial intelligence, the “Future of Life Institute” was formed to question and confront the potential existential dangers to human perpetuation posed by superintelligent machinery. Influencers in this community include leaders from a wide range of academic backgrounds including computer science, cosmology, economics, and more.
In addition to far-reaching concerns about the existential threat posed by intelligent machines, there are immediate and practical concerns about the impact of AI-based technology already in use today. To this end, the concept of algorithmic bias suggests that machine learning can be informed by the root biases of those programming or using such algorithms. Influencers in this area may be found at the practical intersection of computer science and sociology.
Transhumanism refers to a particularly optimistic outlook on artificial intelligence, one that emphasizes the perceived benefits of combining computer technology with human biology for practical purposes. This philosophical school of thought points to the opportunities for the enhancement of human intellect and physiology through integration with sophisticated technology. While this philosophy addresses the dangers and ethical quandaries around artificial intelligence, its adherents—largely philosophers, futurists, and technologists—believe technological enhancement is contributing to the evolution of the human species.
If you would like to study this topic in more depth, check out these key organizations...
AI has quickly integrated into our daily lives. According to Statista, the AI market is expected to grow by up to 54% every year. But will it serve humankind well in the future? And what are the pros and cons of artificial intelligence?
Reducing human error is the biggest advantage of artificial intelligence: AI can reduce mistakes while improving precision and accuracy.
Decisions made by AI are based on a set of algorithms—algorithms that have been written, tested, and refined. Over time, errors are reduced and the program approaches a high level of accuracy.
AI robots can do things and go places that humans cannot, such as venturing into outer space, defusing a bomb, or exploring the deepest parts of the ocean. Robots can also perform tasks with greater precision, and they don’t tire the way a human performing the same task would.
Studies suggest that humans can only be truly productive for about three to four hours a day. Robots, by contrast, are able to work without a break.
They can also make decisions and perform calculations more quickly and accurately than humans, and they can complete dull, repetitive jobs without fatigue.
It is very common today for websites to use digital assistants to support their customers. Chatbots are a perfect example of this type of digital assistant, and they have improved so much over the years that it is sometimes difficult to tell whether you are talking to a human or to a chatbot.
AI is the powerhouse behind various inventions and innovations in practically every field imaginable. For example, the latest improvements in AI-based technologies now allow doctors to spot breast cancer at an earlier stage.
Whether we admit it or not, humans make most decisions based on emotions. Unlike humans, AI does not bring emotional bias to a decision, which can translate into more accurate and rational decision-making.
Creating a machine that can replicate human intelligence is no small feat. Its development requires significant resources, time, and money.
AI machines work according to how they are programmed. They cannot “think outside the box.”
While these machines can learn over time from past experiences and pre-fed data, they cannot be creative in their approach. One example is the bot Quill, which is capable of writing Forbes earnings reports.
The reports are factually correct, which sounds impressive on the surface. However, they lack the human touch usually evident in other Forbes articles.
Robots can potentially displace workers in certain occupations and increase unemployment. Some even claim that robots and chatbots may replace human workers altogether.
Interested in building toward a career on the front lines of the exploration or regulation of artificial intelligence? As you can see, there are many different avenues into this far-reaching issue. Use our Custom College Ranking to find the right program for you.
Interested in diving into another one of our controversial topics? Check out The 30 Most Controversial Topics Today!
Visit our Study Guide Headquarters for tips, tools, and much more.
See our Resources Guide for much more on studying, starting your job search, and more.