How To Thwart A Robot Apocalypse: Oxford's Nick Bostrom on the Dangers of Superintelligent Machines
If we one day develop machines with general intelligence that surpasses ours, they would be in a very powerful position, says Nick Bostrom, Oxford professor and founding director of the Future of Humanity Institute. Bostrom sat down with Reason science correspondent Ron Bailey to discuss his latest book, Superintelligence: Paths, Dangers, Strategies, in which he examines the risks humanity will face when artificial intelligence (AI) is created. Bostrom worries that, once computer intelligence exceeds our own, it could pose a threat to humanity itself.