Nick Bostrom is a philosopher at Oxford University and the founding director of both the Future of Humanity Institute and the Oxford Martin Programme on the Impacts of Future Technology. He earned a B.A. from the University of Gothenburg, an M.A. from Stockholm University, an M.Sc. from King’s College London, and a Ph.D. from the London School of Economics.
Bostrom is best known for his work on superintelligence, the ethics of human enhancement, the anthropic principle, and existential risk. He has written two major books: Anthropic Bias: Observation Selection Effects in Science and Philosophy and Superintelligence: Paths, Dangers, Strategies. Superintelligence was particularly well received, becoming a New York Times bestseller and earning endorsements from prominent figures such as Bill Gates and Elon Musk.
Ranked among the world’s top thinkers, Bostrom has warned of the dangers of superintelligence, which he believes could become an extreme threat within a remarkably short time after its development. In his view, a superintelligence could use agent-like behavior to infiltrate existing systems without arousing suspicion.
Bostrom, along with other major thinkers in the field, contributed to a letter laying out 23 principles of AI safety. These guidelines continue to provide guardrails for the development of artificial intelligence.