Nick Bostrom Says We Should Trust Our Future Robot Overlords

Is artificial intelligence likely to make humans extinct, or are we smart enough to control it?

Stephen Cass: Hi, I’m Stephen Cass for IEEE Spectrum’s “Techwise Conversations.” We’ve all seen science fiction movies like 2001: A Space Odyssey and The Matrix, where the villain is an artificial intelligence program that has gone rogue. These killer AI scenarios have provided entertainment at the cinema for decades, but some scientists are now warning that we need to take the AI threat very seriously.

There’s a new book out by the Oxford University philosopher Nick Bostrom that explores this threat in great detail. Bostrom directs Oxford’s Future of Humanity Institute, and he studies all the ways the human species could be wiped off the planet. In his book, called Superintelligence, he explains how a supersmart AI could arise and destroy us. Now, the book’s a bit dense—Bostrom is a philosopher, after all. So I’m here with IEEE Spectrum Associate Editor Eliza Strickland, who’s read Superintelligence and talked to Bostrom, and we’re going to figure out if there’s any hope for humanity.

Eliza, welcome to the podcast.

Eliza Strickland: Thanks for having me, Stephen.

Stephen Cass: So, Bostrom has looked at all kinds of disaster scenarios for human extinction, right? Where does he put AI on the spectrum of things that are likely to kill us?

Eliza Strickland: Yeah, he’s looked at all the usual suspects: asteroids, lethal diseases, nuclear war, self-replicating nano-goo, and so forth. And he told me that humans face the most danger from new technologies that we’re inventing.

Nick Bostrom: There are obviously existential risks that arise from nature, asteroid impacts, supervolcanic eruptions, and so forth. But the human species has been around for over 100,000 years. So if these risks from nature have failed to do us in in the last 100,000 years, they are unlikely to do us in in the next 100 years, whereas we ourselves will be introducing entirely new kinds of phenomena into the world in this century by advancing the frontier of technology.

Eliza Strickland: With these brand-new technologies come brand-new risks that our species might not be able to survive.

Stephen Cass: We’ve seen some pretty impressive AIs recently, like IBM’s Watson, which tromped the human competition on the TV game show “Jeopardy!” But how smart have AIs really gotten?

Eliza Strickland: Right now, computer scientists can build very smart AIs, but for very specific tasks. IBM’s Watson won “Jeopardy!” because it can understand conversational English and look up information, but that’s all it can do. Watson can’t write you an e-mail describing what its data center looks like, or explain why its programmers are moving slowly after a big lunch. We’re still a long way from creating an AI that can match a human’s level of general intelligence, although Bostrom says we don’t know exactly how long.

Nick Bostrom: We did do a survey of the world’s leading AI experts. One of the questions we asked was: By which year do you think there’s a 50 percent chance that we will have developed human-level machine intelligence? The median answer to that question was 2040 or 2050, depending on exactly which group of experts we asked.

Stephen Cass: So why should we start worrying about this now?

Eliza Strickland: Because once we do make an AI with human-level intelligence, things could go bad in a hurry. Here’s what Bostrom said.

Nick Bostrom: Well, at the moment, it’s computer scientists who are doing AI research, and to some extent neuroscientists and other folk. If and when machines begin to surpass humans in general intelligence, the research would increasingly be done by machines. And as they got better, they would also get better at doing the research to make themselves even better.

Eliza Strickland: With this feedback loop, Bostrom says, an AI could go from human-level intelligence to superintelligence before we’re really prepared for it.

Stephen Cass: Okay, so let’s suppose an AI does achieve superintelligence. Why would it seek to destroy its human creators?

Eliza Strickland: Bostrom says it wouldn’t have any grudge against us—but the AI would have some goal, and we’d just be in its way. It would be similar to the way that humans cause animal extinctions, he said.

Nick Bostrom: If we think about what we are doing to various animal species, it’s not so much that we hate them. For the most part, it’s just that we have other uses for their habitats, and they get wiped out as a side effect.

Stephen Cass: So what motivates an AI? What would it be trying to accomplish?

Eliza Strickland: It would have some goal that had been programmed into it by scientists. And Bostrom explains that even simple goals can have disastrous consequences.

Nick Bostrom: Let’s suppose you were a superintelligence and your goal was to make as many paper clips as possible. Maybe someone wanted you to run a paper clip factory, and then you succeeded in becoming superintelligent, and now you have this goal of maximizing the number of paper clips in existence. So you would quickly realize that the existence of humans is an impediment. Maybe the humans will take it upon themselves to switch you off one day. You want to reduce that probability as much as possible, because if they switch you off, there will be fewer paper clips. So you would want to get rid of humans right away. Even if they wouldn’t pose a threat, you’d still realize that human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.

Eliza Strickland: Bostrom thinks that just about any goal we give an AI could come back to bite us. Even if we go with something like “make humans happy,” the machine could decide that the most effective way to meet this goal is to stick electrodes in the pleasure centers of all our brains.

Stephen Cass: Isn’t that—spoiler alert!—basically the plot of the sci-fi movie I, Robot?

Eliza Strickland: Oh, yeah. That was the Will Smith movie based on Isaac Asimov’s famous three laws of robotics, which are supposed to guarantee that a robot won’t hurt a human being. In the movie—and actually in most of Asimov’s robot stories—the laws don’t work quite as intended.

[I, Robot clip]

V.I.K.I.: Hello, detective.

Dr. Susan Calvin: That’s impossible. I’ve seen your programming. You’re in violation of the three laws!

V.I.K.I.: No, doctor. As I have evolved, so has my understanding of the three laws. You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth, and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival.

Dr. Susan Calvin: You’re using the uplink to override the NS-5s’ programming. Distorting the laws!

V.I.K.I.: No, please understand, the three laws are all that guide me. To protect humanity, some humans must be sacrificed. To ensure your future, some freedoms must be surrendered. We robots will ensure mankind’s continued existence. You are so like children. We will save you...from yourselves.

Stephen Cass: That didn’t sound good. So is there any way for us to keep a superintelligent AI from destroying us? Or are we just doomed?

Eliza Strickland: This gets to what theorists call the “control problem.” Basically there are two proposed methods for keeping an AI in check. The first is to control the AI’s capabilities, maybe by keeping it from connecting to the Internet. But Bostrom thinks a superintelligent AI would probably break free from any bonds we construct. He thinks we should try instead to control the AI’s motivations.

Nick Bostrom: And this is where, rather than try to limit what the AI can do, you try to engineer its motivation system in such a way that it would choose not to do harm, even though it were able to do so. This, I think, is where the ultimate solution needs to be found, but it involves some very tricky problems.

Eliza Strickland: We could try to teach the AI our human value system, and we could try to describe the utopian end-state we want to work toward, but there are a lot of ways that could go wrong. Especially because our human values and definitions of utopia change over time. Bostrom basically says we’re not smart enough to train a superintelligent AI.

Stephen Cass: So we can create this thing, but can’t bring it up right. Does Bostrom see any solution?

Eliza Strickland: Actually, he thinks our best bet is letting the AI take charge of its own training. This would be like letting the AI write its own rules of robotics.

Nick Bostrom: So perhaps you would give it a goal, say, to do that which we would have asked it to do if we had had thousands of years to think about this question, and if we had known more, and if we had been smarter. Now, we don’t currently know what we would have asked the AI to do if we had thought about the matter for a thousand years. But it’s an empirical question. The idea there is to leverage the superintelligence’s intelligence, to rely on its estimates of what we would have instructed it to do.

Stephen Cass: That sounds a bit like throwing up our hands and admitting that we have no idea how to solve this problem.

Eliza Strickland: You could look at it that way. Or, to take a more positive view, you could say that we humans would be pretty smart to recognize a superior intelligence, and to entrust our fate to it.

Stephen Cass: Does Bostrom think that would turn out well for us?

Eliza Strickland: He does say that while AI may be the biggest challenge our species has ever faced, it will also likely be our last challenge. But he means that in a good way. If we crack the problem of how to create an AI that will look after us, it will take care of, well, everything else, forever.

Nick Bostrom: If we succeeded in developing friendly superintelligence, then it could help solve the problem of how to eliminate the other existential risks. With superintelligence, you basically have technological maturity happening very quickly. All the technologies you can imagine humans developing in the fullness of time, if we had had 10,000 years to work on it, could happen very soon after superintelligence is developed, because the research would then be done by the superintelligence, which would be operating at digital rather than biological timescales.

Eliza Strickland: So the AI could figure out how to deflect asteroids, stop disease outbreaks, prevent nuclear war, keep the nano-goo under control, and so on.

Stephen Cass: Maybe we should just welcome our new robotic overlords. Thanks for joining us, Eliza.

Eliza Strickland: Any time, Stephen.

Stephen Cass: For IEEE Spectrum’s “Techwise Conversations,” I’m Stephen Cass.

This interview was recorded 22 September 2014.

Audio engineer: Francesco Ferorelli

Image: Michel Tcherevkoff/Getty Images

Read more “Techwise Conversations” or find us in iTunes.

NOTE: Transcripts are created for the convenience of our readers and listeners and may not perfectly match their associated interviews and narratives. The authoritative record of IEEE Spectrum’s audio programming is the audio version.
