Controversial Topic: Artificial Intelligence

Artificial intelligence (AI), in the simplest terms, refers to computing that aims to mimic human cognitive functions like learning, problem solving, and adaptation to environmental conditions. With the evolution of computer science, computing machines have accelerated in their capacity to demonstrate “intelligence” in areas such as reasoning, planning, natural language processing, and perception. Because of its complexity, this controversial topic is also a popular subject for persuasive essays.

Key Takeaways

  • Artificial intelligence has transformed how people think, work, and learn in healthcare, banking, and smartphone applications.
  • People often fail to realize how thoroughly AI surrounds us every day, from Siri and Alexa to social media applications and countless video games. However, like any other controversial subject, artificial intelligence has its pros and cons.
  • In the world of artificial intelligence, opportunities are endless. There are many compelling reasons why this field is something you should not take lightly. Studying AI prepares you for careers in software engineering, research, quantum artificial intelligence, and many other fields.

The artificial intelligence debate concerns the potential dangers posed by unethical or uncontrolled artificial intelligence, weighed against the technological benefits to humanity.

Advocates for the continued exploration of AI view this technology as a viable approach to confronting an array of challenges in computer science, data analysis, research, production, and much more. Its critics, however, warn of the dangers represented by unchecked artificial intelligence, both in terms of the threat that computers may evolve to achieve “superintelligence” and ultimately declare independence from humanity, and in terms of the degree to which technological automation and reliance on algorithmic behavior may threaten the human labor economy and reinforce existing sociological biases.

What is Artificial Intelligence?

Though we often think of AI as an inherently controversial topic, artificial intelligence is actually an umbrella term for various areas of computing including robotics, machine learning, and artificial neural networking. Though these areas all proceed from a common starting point—using computing technology to simulate the behavior of the human mind—each represents a distinct area of technological exploration. In other words, AI is not just one specific innovation, but a catch-all for a wide range of innovative pursuits. Notable innovations which employ some measure of AI include self-driving cars, strategic gaming systems, and digital voice assistants. Each of these applications demonstrates the ways in which an “intelligent agent” can be taught to simulate human thinking, and consequently, to refine its own performance of tasks as conditions dictate.


Framing the AI Debate

In spite of its potential for innovation, there are many who have warned, since the inception of the AI concept in the 1950s, of its hidden perils. In fact, as the concept emerged in the public consciousness, it also became a touchstone issue for ethicists, philosophers, and writers concerned over the existential threat of AI run amok. This informs the two points of view in the AI debate. These are not diametrically opposed views, but instead, reflect the duality of artificial intelligence—both its promise and its perils. It is entirely possible for an individual to support the progress suggested by artificial intelligence but also to express reservation about the threat of unrestrained AI. With that said, a debate over AI would be framed by the following viewpoints:

  • To its advocates, artificial intelligence represents a way of leveraging computing technology to confront practical challenges, improve lifestyle convenience, enhance public safety, heighten security capabilities, advance military operations, produce innovative recreational opportunities, and much more. The promise of what AI-powered computers can do for humanity, believe its advocates, justifies the bold experimentation around this technology.
  • To its detractors, artificial intelligence is an existential threat to humanity on several levels. From a philosophical and ethical perspective, critics have expressed concern that attempts at simulating human thinking may threaten the singularity of human sentience with potentially catastrophic results. Many of these catastrophic results have become the fodder of dystopian science fiction, where authors have explored post-apocalyptic realities prompted by artificial intelligence which has become “superintelligent,” and thus, autonomous. In its autonomy, machinery may consequently pose a threat to human continuity. More immediate concerns cited by detractors include the danger that increased automation in various industries might ultimately replace human workers and lead to mass unemployment, as well as the risk of inherent racial and socioeconomic bias in machine-learning algorithms.

Though there are some technophiles who might identify themselves as advocates for unrestrained experimentation with AI, and some detractors who might argue that all forms of AI represent a danger to humanity, the vast majority of relevant observers are more likely to take a balanced perspective. Legendary physicist Stephen Hawking captured this dichotomy well, both recognizing that artificial intelligence has created benefits for humanity in its current form and simultaneously warning that “the development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

The goal of this discussion is to examine the various perspectives shaping the public discussion over artificial intelligence, and to provide you with a look at some of the figures past and present who have influenced this discussion. The figures selected may not always be household names, but are instead selected to provide a nuanced look at the public discourse on this subject, and in some cases, even to provide you with a list of individuals to contact as part of your research.


A Brief History of the Issue

In addition to prompting philosophical debate, Artificial intelligence has long been a topic of fascination for science fiction authors. In fact, a great many histories of artificial intelligence begin with a reference to Mary Shelley’s 1818 novel, Frankenstein; or, The Modern Prometheus. This well-known narrative captures the duality of artificial intelligence; the unbridled will of humankind to invent, innovate, and breathe intelligent life into its own creations; and the monstrous danger these creations can represent once gifted with this intelligence. Though this famous allegorical novel would long predate the emergence of computer technology, its warning remains front and center in debates over artificial intelligence.

The Turing Machine (1936)

British mathematician and philosopher Alan Turing is widely considered the father of both theoretical computer science and artificial intelligence. His study of logic led him to invent a machine that could simulate mathematical deduction using only ones and zeroes. This Turing Machine is often considered the first incarnation of what would become the general-purpose computer, and Turing’s revelation about binary coding would become the basis for the computer algorithm. Ultimately, Turing revealed that a computational machine could simulate formal reasoning.
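
To make Turing’s idea concrete, here is a minimal sketch of a Turing machine in Python: a table of state-and-symbol rules driving a read/write head along a tape. The binary-incrementer rules below are our own toy illustration, not anything drawn from Turing’s 1936 paper.

```python
# A toy Turing machine: a transition table maps (state, symbol) to
# (new state, symbol to write, head movement). These rules increment a
# binary number written least-significant-bit first; purely illustrative.

def run_turing_machine(tape, rules, state="start", halt="halt"):
    cells = dict(enumerate(tape))  # sparse tape; "_" marks a blank cell
    head = 0
    while state != halt:
        symbol = cells.get(head, "_")
        state, cells[head], move = rules[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

rules = {
    ("start", "1"): ("start", "0", 1),  # carry: 1 becomes 0, move right
    ("start", "0"): ("halt", "1", 1),   # absorb the carry and stop
    ("start", "_"): ("halt", "1", 1),   # past the end: write the carry
}

print(run_turing_machine("11", rules))  # "001", i.e. 3 + 1 = 4 (LSB first)
```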

Turing’s findings prompted him to explore the possibility that machines could be taught to “show intelligent behavior.” This question is at the root of the first generally recognized demonstration of artificial intelligence: the formal design, by American researchers Warren McCulloch and Walter Pitts, for Turing-complete “artificial neurons”—machines said to mimic the way a biological neuron functions.
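
In modern terms, a McCulloch-Pitts neuron sums weighted binary inputs and “fires” when that sum reaches a threshold. The sketch below is a simplified illustration of the idea; the weights and threshold are our own example values, not figures from the 1943 design.

```python
# A McCulloch-Pitts style neuron: binary inputs, fixed weights, and a
# threshold. The example values are illustrative.

def mp_neuron(inputs, weights, threshold):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0  # 1 means the neuron "fires"

# With unit weights and a threshold of 2, the neuron computes logical AND.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", mp_neuron([a, b], [1, 1], threshold=2))
```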

Dartmouth College AI Workshop (1956)

While Turing’s findings sparked innovation in scientific communities around the world, the first real concentration of activity around artificial intelligence started with a 1956 workshop at Dartmouth College. Here, computer scientist and professor John McCarthy coined the term artificial intelligence. McCarthy, alongside fellow attendees Allen Newell, Herbert Simon, Marvin Minsky, and Arthur Samuel, became the founders of AI research.

Their research led to a wave of new computer developments, with machines demonstrating the capacity to play checkers, solve algebraic word problems, prove logical theorems, and speak English. These developments were of especially keen interest to the Department of Defense, which began funding AI laboratories around the U.S. in the 1960s.

Artificial Intelligence in Science Fiction (1968)

The Department of Defense became the leading entity in the exploration of artificial intelligence in the midst of a global Cold War and dual weapons and space races between the United States and the Soviet Union. As the U.S. explored the prospects of AI-driven weapons systems, concern over artificial intelligence dovetailed with nuclear anxiety and space exploration to produce a rich literary tradition.

As with Mary Shelley’s Frankenstein more than a century prior, numerous works of science fiction best capture the duality of artificial intelligence. Among the most prominent of such works were two released in 1968—Arthur C. Clarke’s 2001: A Space Odyssey, in which the HAL 9000 computer achieves sentience and consequently murders its human masters in the vast abyss of outer space; and Philip K. Dick’s Do Androids Dream of Electric Sheep?, in which the hunt for rogue androids calls into question the very nature of what it means to be human.

These are just a few of the landmark achievements in literature spawned by concerns over what artificial intelligence means, and what dangers it may represent to both the singularity of the human mind and the perpetuation of human civilization. More recently, a number of films have followed in this tradition, with prominent examples including The Terminator (1984) and The Matrix (1999), both of which present a dystopian future in which machines have turned on their human masters and consequently pursued either the enslavement or extermination of humanity.

Sir James Lighthill and the AI Winter (1973)

Though the most apocalyptic concerns were played out through science fiction, the 1970s also brought about a wave of more immediate and practical objections. The most notable of these was a report by British mathematician James Lighthill entitled “Artificial Intelligence: a paper symposium”. Commissioned by the British Science Research Council as a way to resolve discord at one of the U.K.’s most important sites for AI research, the University of Edinburgh’s Department of Artificial Intelligence, Lighthill published his report in 1973. In it, he expressed a high level of skepticism about the real value of research in AI.

After nearly two decades of relative excitement in this area of science, Lighthill was rather unimpressed by the apparent progress, arguing that research in the areas of robotics and language processing had shown little practical value outside of controlled laboratory settings. The findings initiated a backlash against AI in the form of diminished investment and experimentation. The following decade is often characterized as an AI Winter, a period in which the funding and pursuit of innovation cooled to a near standstill.

Expert Systems and the Lisp Machine Market Collapse (1980s)

The 1980s saw a surge in computing capabilities and, with it, a renewed interest in the potential of artificial intelligence. Invented in the 1970s, the expert system proliferated on a commercial level in the early 1980s. Expert systems were the first successful software programs based on AI research, employing “if-then” programming rules to simulate expert human decision making.
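
As a rough illustration of how such systems worked, the sketch below applies hypothetical “if-then” rules to a set of known facts, firing rules until no new conclusions emerge. The rules and facts are invented for the example; real expert systems encoded thousands of rules elicited from human specialists.

```python
# A toy forward-chaining "expert system": each rule pairs a set of
# conditions with a conclusion. A rule fires whenever its conditions are
# all known facts, adding the conclusion as a new fact. Rules here are
# hypothetical.

RULES = [
    ({"fever", "cough"}, "flu suspected"),
    ({"flu suspected", "shortness of breath"}, "refer to doctor"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # the rule "fires"
                changed = True
    return facts

print(forward_chain({"fever", "cough", "shortness of breath"}, RULES))
```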

High-profile pursuits like the Fifth Generation Computer Systems project in Japan helped touch off a new wave of investment in the U.S. and U.K. In 1981, the first IBM PC was introduced, sparking an explosion in the commercial use of computers.

By the middle of the decade, investment in AI had surpassed $1 billion, even as some warned that this new wave of interest betrayed excessive optimism about the possibilities of AI computing. The optimism and skepticism collided around the Lisp programming language, which was used to pioneer various technologies including laser printing, windowing systems, computer graphics rendering, and more. The specialized workstations used to run Lisp software were supplanted with the onset of the microcomputer revolution. As more affordable desktop computers hit the market in the mid-’80s, the previously lucrative field of Lisp machine construction collapsed, and with it collapsed the then-current level of enthusiasm for AI. Once again, the technology community lapsed into an AI winter.

Artificial Neural Networks (1989)

This would change dramatically with the late-’80s advent of Artificial Neural Networks (ANN). In the simplest terms, ANN is a computing system designed to mimic the biological neural network in which synaptic connections send information between nodes. This concept was the subject of a landmark 1989 book called Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.

The use of the biological neural network as a model proved a jumping-off point for a new generation of developments in artificial intelligence, with advances centered on challenges that had largely been beyond the reach of conventional algorithms to that point. Through the course of the 1990s and early 21st century, these neural networks proved increasingly capable of “learning,” using sample observations to adapt to and improve their efficiency at specific tasks. The result was a dramatic step forward in the ability of computing systems to confront challenges in logistics, data mining, and practical applications like medical diagnosis.
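
A single trainable neuron (the classic perceptron) gives a minimal picture of what “using sample observations” means in practice: each misclassified example nudges the weights toward a better answer. The data and learning rate below are our own illustrative choices.

```python
# A minimal perceptron: weights are adjusted whenever the neuron
# misclassifies a training example. Data and learning rate are
# illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
            error = target - output      # zero when the prediction is right
            w1 += lr * error * x1        # nudge each weight toward the target
            w2 += lr * error * x2
            bias += lr * error
    return w1, w2, bias

# Learn logical OR from four sample observations.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train_perceptron(samples))
```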

The starkest demonstrations of AI’s ascendance came not in laboratories, but in the form of high-profile entertainment. In 1996, world chess champion Garry Kasparov defeated the IBM supercomputer Deep Blue under tournament conditions. The following year, Deep Blue and Kasparov met for a rematch. This time, the computer won.

This victory was echoed in 2011 when IBM’s AI-powered Watson appeared as a contestant on the trivia game show Jeopardy! Facing off against the show’s two greatest champions, Ken Jennings and Brad Rutter, Watson emerged the victor. Underlying Watson were various layers of artificial intelligence capability, including the ability to use and comprehend natural language, evidence-based learning capabilities, and the ability to generate hypotheses. Following the demonstration of its abilities on the venerable game show, Watson became the root technology in countless developments, including demographic marketing, medical diagnosis, recipe generation, customer service, and much more.

This uptick in commercial applications would touch off the current era in which the applications of artificial intelligence are relatively inextricable from the practical exploration of computer technology, and have permeated countless aspects of our everyday lives.

Top Ten Historical Influencers in the Artificial Intelligence Controversy

Using our own backstage Ranking Analytics tools, we’ve compiled a list of the most influential figures concerning the issue of artificial intelligence in the U.S. between 1900 and 2020. This list has been vetted to ensure all of those included have been influential in the philosophical exploration, ethical examination, or technological advancement of artificial intelligence and related computing challenges. It includes computer scientists, engineers, mathematicians, ethicists, philosophers, and writers—all influencers who have directly impacted the public debate over artificial intelligence:

  1. Marvin Minsky
  2. Alan Turing
  3. Herbert A. Simon
  4. Nick Bostrom
  5. Ray Kurzweil
  6. Allen Newell
  7. Rodney Brooks
  8. John McCarthy
  9. Stuart J. Russell
  10. Stephen Hawking

Check out the Top Influential Computer Scientists Today!

Top Ten Most Influential Books About Artificial Intelligence

Using our own backstage Ranking Analytics tools, we’ve compiled a list of the most influential books on the topic of artificial intelligence in the U.S. between 1900 and 2020. This list is vetted to exclude popular fiction which has merely used artificial intelligence as a narrative device, in favor of prominent works in philosophy, science, and science fiction which have made a direct impact on the public discussion through their examination of the practical and ethical dimensions of artificial intelligence.

  1. Superintelligence: Paths, Dangers, Strategies
  2. Neuromancer
  3. The Mind’s I
  4. The Age of Intelligent Machines
  5. The Age of Spiritual Machines
  6. The Singularity Is Near
  7. Frankenstein
  8. Our Final Invention
  9. Do Androids Dream of Electric Sheep?
  10. The Ghost in the Machine

Check out the Top Influential Physicists Today!


The Current Controversy

Today, we are faced less with the question of whether artificial intelligence has practical value; AI science has permeated our technology. Instead, questions now center on its potential uses and the ethical and practical concerns surrounding them. In the face of these developments, Swedish philosopher Nick Bostrom published Superintelligence in 2014, a prominent text on the existential risks of artificial intelligence and the ways humans might confront these threats.

Bostrom’s text would seem to anticipate 2015, which is frequently recognized as a landmark year for AI. With the full proliferation of cloud computing infrastructure, it had become considerably more affordable to explore neural network technology. This development invited a whole new wave of entrants into the field of innovation, and also prompted far more ambitious exploration from major tech companies. Google alone made this the basis for more than 2,700 new projects.

Leading technologists recognized both the potential and the pitfalls of this proliferation. Late in 2015, prominent technologists like Peter Thiel and Elon Musk teamed up to form OpenAI, a group dedicated to ensuring the safe and ethical pursuit of AI-based innovations. In addition to the large and existential dangers highlighted by cautious technologists, philosophers, and novelists, the increased practical applications of AI have invited concern from sociologists.

For instance, Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) is an algorithmic program used by some U.S. courts to inform case management decisions based on the assessed likelihood that a defendant will become a repeat offender. A 2016 study revealed that the system overwhelmingly, and inaccurately, predicted recidivism among black offenders, while failing to predict recidivism among white offenders. This is considered evidence that a machine learning algorithm can be built to reflect the racial biases of its creators, or the biases of the institution in which it is deployed.
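
Audits of this kind typically quantify bias by comparing error rates across groups, for example the false-positive rate: how often people who did not reoffend were nonetheless flagged as high risk. The records in the sketch below are fabricated for illustration and are not COMPAS data.

```python
# Comparing false-positive rates across two groups. Each record is
# (group, predicted to reoffend, actually reoffended); the data is
# made up for illustration.

records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", False, False), ("B", True, True),
]

def false_positive_rate(records, group):
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in negatives if r[1]]
    return len(false_positives) / len(negatives)

for group in ("A", "B"):
    print(group, false_positive_rate(records, group))
# A gap between groups (here 0.5 vs 0.0) is the kind of disparity the
# 2016 study reported.
```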

In spite of these concerns, and in spite of the skepticism voiced by detractors in the 1970s and 1980s, artificial intelligence is simply an infrastructural reality today. According to a 2017 survey, one in five companies indicated that they had “incorporated AI in some offerings or processes.”

Today, AI is a part of everyday lives, driving supply chain management for the big-box stores where we shop, driving daily movements on the stock market, and answering our questions under names like “Siri” and “Alexa.” The current controversy over AI is held against a backdrop of exploration and innovation. Indeed, our reliance on artificial intelligence is largely inextricable from our general reliance as a civilization on computer technology.

Check out our Interview with Dr. Burkhard Rost and find out “How machine learning boosts protein and genetic studies”


A Quick Overview of Our Method

Our goal in presenting subjects that generate controversy is to provide you with a sense of some of the figures both past and present who have driven debate, produced widely-recognized works of research, literature or art, proliferated their ideas widely, or who are identified directly and publicly with some aspect of this debate. By identifying the researchers, activists, journalists, educators, academics, and other individuals connected with this debate—and by taking a closer look at their work and contributions—we can get a clear but nuanced look at the subject matter. Rather than framing the issue as one side versus the other, we bring various dimensions of the issue into discussion with one another. This will likely include dimensions of the debate that resonate with you, some dimensions that you find repulsive, and some dimensions that might simply reveal a perspective you hadn’t previously considered.

On the subject of artificial intelligence, the debate requires us to consider both advocates of unbridled innovation and those who have publicly pondered the practical, philosophical, ethical, and existential dangers that may lurk within such innovation. Our InfluenceRanking engine gives us the power to scan the academic and public landscape surrounding the artificial intelligence issue using key terminology to identify consequential influencers. In order to present the range of viewpoints on this topic, we have explored the influence surrounding key terms in the field of technology, including “artificial intelligence,” “machine learning” and “Artificial Neural Networks.” On the opposite side of this debate are those who have expressed reservations about various aspects of AI, including those who raise concerns about “AI ethics,” who warn about the risks of “superintelligence,” and those who point to the risk of “algorithmic bias.” Also included in our look at key terminology are those from prominent organizations confronting the essential implications of AI, including the “Future of Life Institute,” and philosophical movements in advocacy of AI, such as “transhumanism.”

As with any topic that generates public debate and disagreement, this is a subject of great depth and breadth. We do not claim to probe either to the bottom of this depth or the borders of this breadth. Instead, we offer you one way to enter into this debate, to identify key players, and through their contributions to the debate, to develop a fuller understanding of the issue and perhaps even a better sense of where you stand.

For a closer look at how our InfluenceRankings work, check out our methodology.

Otherwise get started with a look at the key words we used to explore this subject:


Key Terms

Artificial Intelligence (AI)

The key term in our discussion, “artificial intelligence” refers to the broad field of computing technologies that have emerged from the aim of using computers to simulate the functional elements of the human mind. Influencers in this field have tended to be technologists, computer scientists, and entrepreneurs.

Influencers:

  • Kristian Kersting is head of the Artificial Intelligence and Machine Learning Lab and professor of Artificial Intelligence and machine learning at the Technische Universität Darmstadt’s department of computer science. He earned a Ph.D. in computer science from the University of Freiburg before completing post-doctoral studies at Katholieke Universiteit Leuven and the Massachusetts Institute of Technology. His research has focused on deep probabilistic learning, artificial intelligence, statistical relational artificial intelligence, and probabilistic programming. He is a researcher at ATHENE, which is the largest national research facility devoted to IT security in all of Europe. He formerly led a research team at the Fraunhofer Institute for Intelligent Analysis and Information Systems.
  • Christian Guttmann is a German-Australian scientist and entrepreneur in artificial intelligence, machine learning, and data science. He is currently the vice president and global head of Artificial Intelligence and Data at TietoEVRY, where he is responsible for the strategy and execution of artificial intelligence innovation and business. He is an adjunct associate professor at the University of New South Wales, Australia, and an adjunct researcher at the Karolinska Institute, Sweden. Guttmann is an entrepreneur who has co-founded startups where he led business and product development using artificial intelligence technology. He has edited and authored 7 books, over 50 publications, and 4 patents in the field of artificial intelligence. He is a keynote speaker at international events, including the International Council for Information Technology in Government Administration and CeBIT, and is frequently cited by international media such as the MIT Sloan Management Review and Bloomberg.
  • Stuart J. Russell is a computer scientist, the founder of the Center for Human-Compatible Artificial Intelligence, professor of computer science at the University of California, Berkeley, and adjunct professor of neurological surgery at the University of California, San Francisco. He earned a B.A. in physics from Wadham College at Oxford and a Ph.D. in computer science from Stanford University. He is best known as the co-author of Artificial Intelligence: A Modern Approach, the most popular textbook on the subject. He is an active researcher in the field, exploring the history and future of artificial intelligence, machine learning, knowledge representation, probabilistic reasoning, inverse reinforcement learning, and multitarget tracking.
  • Fei-Fei Li is a Chinese-born American computer scientist, non-profit executive, and writer. She is the Sequoia Capital Professor of Computer Science at Stanford University. Li is a Co-Director of the Stanford Institute for Human-Centered Artificial Intelligence, and a Co-Director of the Stanford Vision and Learning Lab. She served as the director of the Stanford Artificial Intelligence Laboratory from 2013 to 2018. In 2017, she co-founded AI4ALL, a nonprofit organization working to increase diversity and inclusion in the field of artificial intelligence. Her research expertise includes artificial intelligence, machine learning, deep learning, computer vision, and cognitive neuroscience. She was the leading scientist and principal investigator of ImageNet.

Machine Learning

Falling under the umbrella term of artificial intelligence, “machine learning” refers to the ability of computers to integrate evidence of outcomes into improved processes, greater accuracy, and heightened efficiency. Like its umbrella term, machine learning tends to include technologists, computer scientists, and mathematicians among its top influencers.

Influencers:

  • John Stewart Shawe-Taylor is Director of the Centre for Computational Statistics and Machine Learning at University College London. His main research area is statistical learning theory. He has contributed to a number of fields ranging from graph theory through cryptography to statistical learning theory and its applications. However, his main contributions have been in the development of the analysis and subsequent algorithmic definition of principled machine learning algorithms founded in statistical learning theory. This work has helped to drive a fundamental rebirth in the field of machine learning with the introduction of kernel methods and support vector machines, including the mapping of these approaches onto novel domains including work in computer vision, document classification, and brain scan analysis. More recently he has worked on interactive learning and reinforcement learning. He has also been instrumental in assembling a series of influential European Networks of Excellence, whose scientific coordination has influenced a generation of researchers and promoted the widespread uptake of machine learning in both science and industry that we are currently witnessing. He has published over 300 papers with over 42,000 citations. Two books co-authored with Nello Cristianini have become standard monographs for the study of kernel methods and support vector machines and together have attracted 21,000 citations. He is Head of the Computer Science Department at University College London, where he has overseen a significant expansion and witnessed its emergence as the highest-ranked Computer Science Department in the UK in the 2014 UK Research Evaluation Framework.
  • Thomas G. Dietterich is emeritus professor of computer science at Oregon State University. He is one of the founders of the field of machine learning. He served as executive editor of Machine Learning and helped co-found the Journal of Machine Learning Research. In response to the media’s attention on the dangers of artificial intelligence, Dietterich has been quoted for an academic perspective to a broad range of media outlets including National Public Radio, Business Insider, Microsoft Research, CNET, and The Wall Street Journal.
  • Tom Michael Mitchell is an American computer scientist and E. Fredkin University Professor at Carnegie Mellon University. He is a former Chair of the Machine Learning Department at CMU. Mitchell is known for his contributions to the advancement of machine learning, artificial intelligence, and cognitive neuroscience, and is the author of the textbook Machine Learning. He has been a member of the United States National Academy of Engineering since 2010. He is also a Fellow of the American Association for the Advancement of Science and a Fellow of the Association for the Advancement of Artificial Intelligence. In October 2018, Mitchell was appointed as the Interim Dean of the School of Computer Science at Carnegie Mellon.
  • Douglas Bruce Lenat is the CEO of Cycorp, Inc. of Austin, Texas, and has been a prominent researcher in artificial intelligence; he was awarded the biannual IJCAI Computers and Thought Award in 1976 for creating the machine learning program AM. He has worked on machine learning, knowledge representation, “cognitive economy”, blackboard systems, and what he dubbed in 1984 “ontological engineering”. He has also worked on military simulations and numerous projects for U.S. government, military, intelligence, and scientific organizations. In 1980, he published a critique of conventional random-mutation Darwinism. He authored a series of articles in the journal Artificial Intelligence exploring the nature of heuristic rules.

Intelligent Agents

“Intelligent agent” is the technical term for any computing machine given the ability to simulate human intelligence. Engineers and entrepreneurs have been particularly influential in the development of practical intelligent agents.

Influencers:

  • Evan Hurwitz is a South African information engineer. He obtained his BSc and MSc in Engineering from the University of the Witwatersrand and his PhD from the University of Johannesburg. He is known for his work on teaching a computer how to bluff, which was widely covered by the magazine New Scientist. Hurwitz, together with Tshilidzi Marwala, proposed that there is a lower level of information asymmetry between two artificial intelligent agents than between two human agents, and that the more artificial intelligence there is in the market, the lower the volume of trades in the market.
  • Wolfgang Ketter is Chaired Professor of Information Systems for a Sustainable Society at the University of Cologne and a prominent scientist in the application of artificial intelligence, machine learning and intelligent agents in the design of smart markets, including demand response mechanisms and in particular automated auctions. He is a co-founder of the open energy system platform Power TAC, an automated retail electricity trading platform that simulates the performance of retail markets in an increasingly prosumer- and renewable-energy-influenced electricity landscape.
  • Barney Pell is an American entrepreneur, angel investor, and computer scientist. He was co-founder, Vice Chairman, and Chief Strategy Officer of Moon Express; co-founder and Chairman of LocoMobi; and Associate Founder of Singularity University. He was also co-founder and CEO of Powerset, a pioneering natural language search startup; search strategist and architect for Microsoft’s Bing search engine; a pioneer in the field of General Game Playing in artificial intelligence; and the architect of the first intelligent agent to fly onboard and control a spacecraft.

Machine Ethics

As AI technology has emerged to greater prominence and practical application, so too have concerns emerged about the ethical deployment of this technology. This has prompted an array of philosophers, activists, and public servants to write or organize around the need for measured exploration of AI’s potential. Influencers in this area have advocated for caution, or “machine ethics,” even in the midst of our drive to innovate.

Influencers:

  • Lucy Suchman is a Professor of Anthropology of Science and Technology in the Department of Sociology at Lancaster University in the United Kingdom. Her current research extends her longstanding critical engagement with the field of human-computer interaction to the domain of contemporary war fighting, including problems of ‘situational awareness’ in military training and simulation, and in the design and deployment of automated weapon systems. At the center of this research is the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility of a less violent world. Suchman is a member of the International Committee for Robot Arms Control and the author of a blog dedicated to the problems of ethical robotics and ‘technocultures of humanlike machines’. Before coming to Lancaster, she worked for 22 years at Xerox’s Palo Alto Research Center, where she held the positions of Principal Scientist and Manager of the Work Practice and Technology research group. Suchman is a graduate of the University of California, Berkeley, obtaining her BA in 1972, MA in 1977, and Doctorate in Social and Cultural Anthropology in 1984. While at Berkeley, she wrote her dissertation on the work practices of accountants, studying procedural office work to understand how it was similar to and different from a program, the assumptions around the work, and how the work informed the design of these systems.
  • Mark Coeckelbergh is a Belgian philosopher of technology. He is Professor of Philosophy of Media and Technology at the Department of Philosophy of the University of Vienna and former President of the Society for Philosophy and Technology. He was previously Professor of Technology and Social Responsibility at De Montfort University in Leicester, UK, Managing Director of the 3TU Centre for Ethics and Technology, and a member of the Philosophy Department of the University of Twente. Before moving to Austria, he lived and worked in Belgium, the UK, and the Netherlands. He is the author of several books, including Growing Moral Relations, Human Being @ Risk, Environmental Skill, Money Machines, New Romantic Cyborgs, Moved by Machines, the textbook Introduction to Philosophy of Technology, and AI Ethics. He has written many articles and is an expert in the ethics of artificial intelligence. He is best known for his work in philosophy of technology and the ethics of robotics and artificial intelligence; he has also published in the areas of moral philosophy and environmental philosophy.
  • Marina Denise Anne Jirotka is Professor of Human Centred Computing at the University of Oxford, Governing Body Fellow at St Cross College, Board Member of the Society for Computers and Law and a Research Associate at the Oxford Internet Institute. She leads a team that works on responsible innovation, in a range of ICT fields including robotics, AI, machine learning, quantum computing, social media and the digital economy. She is known for her work on the ‘Ethical Black Box’, a proposal that robots using AI should be fitted with a type of inflight recorder, similar to those used by aircraft, to track the decisions and actions of the AI when operating in an uncontrolled environment and to aid in post-accident investigations.

Artificial Neural Networks

The advent of “Artificial Neural Networks” (ANN) in the late ’80s spurred the wave of AI innovations that permeate commerce and technology today. In simplified terms, an ANN is a simulation of the biological neural network, and it is at the root of countless innovations by companies like Google, Facebook, and Twitter. Leading influencers in this area are typically computer scientists and mathematicians.

Influencers:

  • Geoffrey Hinton has been called one of the “Godfathers of Artificial Intelligence” by media sources for his work on a neural network approach known as “deep learning.” He divides his year between working for Google Brain, the influential AI group at Google, and serving as a professor of computer science at the University of Toronto in Canada. Hinton, along with researchers David Rumelhart and Ronald Williams, designed one of the key features in modern neural networks, a type of machine learning algorithm that learns from experience. In 1986, he published a description of using backpropagation to train neural networks on data, and this technique has become a lynchpin for all neural network successes to date. Hinton truly is one of the “godfathers” of AI, an honorific especially relevant today as major Web companies like Google, Facebook, Twitter, and many others now use neural networks ubiquitously.
  • Walter Harry Pitts, Jr. was a logician who worked in the field of computational neuroscience. He proposed landmark theoretical formulations of neural activity and generative processes that influenced diverse fields such as cognitive science and psychology, philosophy, neuroscience, computer science, artificial neural networks, cybernetics, and artificial intelligence, together with what has come to be known as the generative sciences. He is best remembered for having written, along with Warren McCulloch, a seminal paper in scientific history titled “A Logical Calculus of the Ideas Immanent in Nervous Activity”. This paper proposed the first mathematical model of a neural network. The unit of this model, a simple formalized neuron, is still the standard of reference in the field of neural networks. It is often called a McCulloch-Pitts neuron. Prior to that paper, he formalized his ideas regarding the fundamental steps to building a Turing machine in The Bulletin of Mathematical Biophysics in an essay titled “Some observations on the simple neuron circuit”.
  • Šarūnas Raudys is head of the Data Analysis Department at the Institute of Mathematics and Informatics in Vilnius, Lithuania. Within the department, he is guiding the data mining and artificial neural networks group. His group’s research interests include multivariate analysis, statistical pattern recognition, artificial neural networks, data mining methods, and biological information processing systems with applications to analysis of technological, economical, and biological problems.

AI Control

Closely tied to the concept of machine ethics, “AI control” suggests that humanity must take steps to control any innovations in AI so as to preempt the risks of misguided application and superintelligence. This area has largely been the province of influential robotics engineers.

Influencers:

  • Ann Patricia “Pat” Fothergill was a pioneer in robotics and robot control languages in the AI department of the University of Edinburgh. She moved to the University of Aberdeen in 1986 to join the Department of Computing as a senior lecturer, where she remained until her death.
  • Raj Reddy is the founding director of the Robotics Institute at Carnegie Mellon University, which is perhaps the best center for robotics research in the world. (Robotics Business Review called it a “pacesetter in robotics research and education” in 2014.) Reddy is the Moza Bint Nasser Chair of Computer Science at CMU and was previously a professor of computer science at Stanford during a stellar career spanning five decades. Reddy has had a major influence on the development of the field of robotics in artificial intelligence. Reddy was born in rural India and is the first member of his family to go to college. He received his bachelor’s degree in civil engineering from an engineering school in India (College of Engineering, Guindy) and his Ph.D. in computer science from Stanford University. Reddy has been active in helping to create opportunities for gifted low-income youth in India by helping to form a technical university there (the Rajiv Gandhi University of Knowledge Technologies).

Superintelligence

“Superintelligence” refers to the perceived danger that artificial intelligence can ultimately lead to machines which are self-aware and superior to humanity in their reasoning and learning capacities. This danger is at the root of many philosophical concerns over the existential threat posed by intelligent agents, and also informs the premise at the heart of many prominent works of dystopian science fiction.

Influencers:

  • Max Tegmark was born in Stockholm, Sweden in 1967. He received a BA in economics from the Stockholm School of Economics and completed a BSc degree in physics a year later at the KTH Royal Institute of Technology. He then ventured to the University of California, Berkeley where he earned an MA and PhD studying physics. He is a professor at the Massachusetts Institute of Technology and the Scientific Director of the Foundational Questions Institute. He is also a co-founder along with Skype founder Jaan Tallinn of the Future of Life Institute, which examines issues of existential risk, particularly the issues some think we face by the advance of artificial intelligence leading to superintelligent machines.
  • Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher and writer best known for popularizing the idea of friendly artificial intelligence. He is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion was an influence on Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. An autodidact, Yudkowsky did not attend high school or college.
  • Nick Bostrom is the founding director of the Future of Humanity Institute at Oxford University and of the Oxford Martin Programme on the Impacts of Future Technology, and a philosopher at Oxford. He earned a B.A. from the University of Gothenburg, an M.A. from Stockholm University, an M.Sc. from King’s College London, and a Ph.D. from the London School of Economics. Bostrom is best known for his work on superintelligence, human enhancement ethics, the anthropic principle, and existential risk. He has also written two major books: Anthropic Bias: Observation Selection Effects in Science and Philosophy and Superintelligence: Paths, Dangers, Strategies. Superintelligence was particularly well received, becoming a New York Times bestseller and earning endorsements from top minds such as Bill Gates and Elon Musk.

Future of Life Institute

Rooted in the anxieties related to artificial intelligence, the “Future of Life Institute” was formed to question and confront the potential existential dangers to human perpetuation posed by superintelligent machinery. Influencers in this community include leaders from a wide range of academic backgrounds including computer science, cosmology, economics, and more.

Influencers:

  • Stuart J. Russell is a computer scientist, the founder of the Center for Human-Compatible Artificial Intelligence, professor of computer science at the University of California, Berkeley, and adjunct professor of neurological surgery at the University of California, San Francisco. He earned a B.A. in physics from Wadham College at Oxford and a Ph.D. in computer science from Stanford University. He is best known as the co-author of Artificial Intelligence: A Modern Approach, the most popular textbook on the subject. He is an active researcher in the field, exploring the history and future of artificial intelligence, machine learning, knowledge representation, probabilistic reasoning, inverse reinforcement learning, and multitarget tracking. A vocal opponent of the creation and use of autonomous weapons such as unmanned drones, he worked with the Future of Life Institute to produce a video about drones carrying out assassinations in order to get the attention of the United Nations’ governing bodies on conventional weapons.
  • Anthony Aguirre is a theoretical cosmologist. Aguirre is a professor and holds the Faggin Presidential Chair for the Physics of Information at the University of California, Santa Cruz. He is the co-founder and associate scientific director of the Foundational Questions Institute and is also a co-founder of the Future of Life Institute. In 2015, he co-founded the aggregated prediction platform Metaculus with Greg Laughlin. In 2019, he published the pop science book Cosmological Koans.
  • Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher and writer best known for popularizing the idea of friendly artificial intelligence. He is a co-founder and research fellow at the Machine Intelligence Research Institute, a private research nonprofit based in Berkeley, California. His work on the prospect of a runaway intelligence explosion was an influence on Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. An autodidact, Yudkowsky did not attend high school or college.

Algorithmic Bias

In addition to far-reaching concerns about the existential threat posed by intelligent machines, there are immediate and practical concerns about the impact of AI-based technology already in use today. To this end, the concept of algorithmic bias suggests that machine learning can be informed by the root biases of those programming or using such algorithms. Influencers in this area may be found at the practical intersection of computer science and sociology.

Influencers:

  • Sara Wachter-Boettcher is an author, consultant and speaker. She is the author of Technically Wrong and Content Everywhere and the co-author, with Eric Meyer, of Design for Real Life. Her book Technically Wrong: Sexist Apps, Biased Algorithms, and Other Threats of Toxic Tech was recommended by Wired magazine as one of the best tech books in 2017 and by Fast Company as one of the best business and leadership books in 2017.
  • Timnit Gebru is an Eritrean American computer scientist and the technical co-lead of the Ethical Artificial Intelligence Team at Google. She works on algorithmic bias and data mining. She is an advocate for diversity in technology and is the co-founder of Black in AI, a community of black researchers working in artificial intelligence.
  • Sofia Charlotta Olhede is a British mathematical statistician known for her research on wavelets, graphons, and high-dimensional statistics and for her columns on algorithmic bias. She is a professor of statistical science at the École Polytechnique Fédérale de Lausanne.

Transhumanism

Transhumanism refers to a particularly optimistic outlook on artificial intelligence, one that emphasizes the perceived benefits of combining computer technology with human biology for practical purposes. This philosophical school of thought points to the opportunities for the enhancement of human intellect and physiology through integration with sophisticated technology. While this philosophy acknowledges the dangers and ethical quandaries around artificial intelligence, its adherents—largely philosophers, futurists, and technologists—believe technological enhancement is contributing to the evolution of the human species.

Influencers:

  • Raymond Kurzweil is an American inventor and futurist. He is involved in fields such as optical character recognition, text-to-speech synthesis, speech recognition technology, and electronic keyboard instruments. He has written books on health, artificial intelligence, transhumanism, the technological singularity, and futurism. Kurzweil is a public advocate for the futurist and transhumanist movements and gives public talks to share his optimistic outlook on life extension technologies and the future of nanotechnology, robotics, and biotechnology.
  • Benedikt Paul Göcke is a German philosopher and theologian. He is University Professor for the Philosophy of Religion and Philosophy of Science at the Catholic Theological Faculty of the Ruhr University Bochum and an associate member of the Faculty of Theology and Religion at the University of Oxford. His research includes theoretical, practical and historical philosophy and can be divided into three main areas: philosophy of science and metaphysics, transhumanism and ethics of digitization, and German Idealism, in particular the philosophy of Karl Christian Friedrich Krause.
  • Hans Peter Moravec is an adjunct faculty member at the Robotics Institute of Carnegie Mellon University. He is known for his work on robotics, artificial intelligence, and writings on the impact of technology. Moravec is also a futurist, with many of his publications and predictions focusing on transhumanism. Moravec developed techniques in computer vision for determining the region of interest in a scene.

Influential Organizations Involved in the Artificial Intelligence Controversy

If you would like to study this topic in more depth, check out these key organizations...

Advocates of Artificial Intelligence

Critics of Artificial Intelligence

A Further Examination of Artificial Intelligence: It’s the Future…and the Future is Now!

AI has quickly integrated into our daily lives. According to a study by Statista, the AI market is expected to grow by up to 54% every year. But will it serve humankind well in the future? And what are the pros and cons of artificial intelligence?

The Advantages of Artificial Intelligence

Reduction in Human Error

This is the biggest advantage of artificial intelligence. AI can reduce errors and improve precision and accuracy.

Decisions made by AI are based on a set of algorithms—algorithms that have been written, tested, and refined. Over time, errors are reduced and the action or program approaches a level of perfection.

Zero Risks

AI robots can do things and go places that humans are not able to. Examples include going into outer space, defusing a bomb, or exploring the deepest parts of the oceans. Robots can also perform tasks with greater precision and don’t tire as a human performing a task would.

24x7 Availability

Studies suggest that humans can only be productive for about three to four hours daily. Robots, by contrast, are able to work without a break.

They can also make decisions and calculations more quickly than humans and complete tasks with more accuracy. Robots are also able to complete dull, boring, and repetitive jobs without fatigue.

Digital Assistance

It is very common today for websites to use digital assistants to support their customers. Chatbots are perfect examples of this type of digital assistant. Chatbots have improved so much over the years that it is sometimes difficult to recognize whether you are talking to a human or to a chatbot.

New Inventions

AI is the powerhouse behind various inventions and innovations in practically every field imaginable. For example, the latest improvements in AI-based technologies now allow doctors to spot breast cancer at an earlier stage.

Unbiased Decisions

Whether we admit it or not, humans decide mostly based on emotions. But unlike humans, AI is not swayed by emotion, which can translate into more accurate and rational decision-making.

Disadvantages of Artificial Intelligence

Costly

Creating a machine that can replicate human intelligence is not a small feat. The development requires significant resources, time, and money.

No Creativity

AI machines work according to how they are programmed. They cannot “think outside the box.”

While these machines can learn over time from past experiences and pre-fed data, they cannot be creative in their approach. One example is the bot Quill, which is capable of writing Forbes earnings reports.

The reports are factually correct, which sounds very impressive on the surface. However, the reports lack that classic human touch usually notable in other Forbes articles.

Unemployment

Robots can potentially displace workers in certain occupations and increase unemployment. Some even claim that robots and chatbots may replace human workers altogether.

Interested in building toward a career on the front lines of the exploration or regulation of artificial intelligence? As you can see, there are many different avenues into this far-reaching issue. Use our Custom College Ranking to find:

Interested in diving into another one of our controversial topics? Check out The 30 Most Controversial Topics Today!

Visit our Study Guide Headquarters for tips, tools, and much more.

See our Resources Guide for much more on studying, starting your job search, and more.

