
Elon Musk Fires Back at Harvard Psychologist Steven Pinker Over the Future of Artificial Intelligence

Who’s right in this war of words? It comes down to timing.

BY Melissa Schilling - 03 Mar 2018



Elon Musk has frequently expressed concerns about artificial intelligence (AI), noting, for example, that robots will be able to do everything better than humans and that machines could start a war. Harvard cognitive psychologist Steven Pinker challenged Musk's perspective this week in Episode 296 of Geek's Guide to the Galaxy, arguing that such concerns are as overblown as the dire predictions about the Y2K bug. He also questioned whether Musk's concerns were authentic, saying, "If Elon Musk was really serious about the AI threat he'd stop building those self-driving cars, which are the first kind of advanced AI that we're going to see," and adding later, "Hypocritically he [Musk] warns about AI but he's the world's most energetic purveyor of it."

Musk fired back with a tweet.

Musk's point is that autonomous cars are (for now at least) using "weak" or "narrow" artificial intelligence. This refers to software programmed to follow rules to achieve a narrowly-defined task. Some programs can learn to improve at their task (just as Cortana gets better at understanding your voice commands and the Google Search algorithm gets better matches for queries over time), but the programs do not get to change their objective; they only have the objective for which they were built. Weak artificial intelligence is all around us--it deploys your airbag in a car crash, it turns off the dryer when the clothes are dry enough, and more.
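To make that distinction concrete, here is a minimal Python sketch (an illustration of my own, not code from any actual vehicle or appliance) of a "narrow" system: it can tune a parameter from data, but its objective, deciding when to deploy an airbag, is fixed by its designer. The function names and threshold values are hypothetical.

    # A minimal sketch of "narrow" AI: the objective is fixed by the designer.
    # Even when the system tunes its parameters from data, it never chooses a
    # new goal. All threshold values below are hypothetical.

    def should_deploy_airbag(deceleration_g: float, threshold_g: float = 60.0) -> bool:
        """Rule-based decision: deploy when deceleration exceeds a set threshold."""
        return deceleration_g > threshold_g

    def tune_threshold(crash_data: list[tuple[float, bool]]) -> float:
        """Learning in the narrow sense: pick the threshold that best reproduces
        labeled outcomes (deceleration, should_have_deployed). The objective,
        matching the labels, is still chosen by the engineer."""
        candidates = [g / 2 for g in range(40, 201)]  # 20g to 100g in 0.5g steps
        def errors(t: float) -> int:
            return sum((g > t) != label for g, label in crash_data)
        return min(candidates, key=errors)

    if __name__ == "__main__":
        data = [(25.0, False), (45.0, False), (70.0, True), (90.0, True)]
        best = tune_threshold(data)
        print(best, should_deploy_airbag(75.0, best))

No matter how much data such a system sees, "deploy the airbag at the right moment" remains its only goal; it cannot decide it would rather pursue something else.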

The risk, Musk argues, comes from artificial general intelligence, also called "strong" artificial intelligence. An artificial general intelligence system is supposed to think much like a person, with consciousness and self-awareness. That self-awareness should, theoretically at least, lead it to want things of its own choosing, and it might thus change its objective to something its creator never intended or anticipated.

In reality, "weak" and "strong" artificial intelligence are not binary categories; the open-endedness of an objective is a continuum, and artificial intelligence systems are being built with wider capability sets every day. Pinker is thus right in one sense: the programs running an autonomous vehicle sit on that continuum, and as they are developed to handle an increasingly wide range of scenarios, the artificial intelligence in them may become increasingly general.

Pinker is an optimist. He doesn't fear artificial intelligence because he has faith in humanity's ability to tame it. After all, he argues, as civilizations advance they become more peaceful and tolerant. Furthermore, artificial intelligence systems will not evolve with the same Darwinian natural selection processes that created humans; they will be designed by engineers.

There is, however, an interesting inconsistency in Pinker's logic. Earlier in the same Geek's Guide to the Galaxy podcast, Pinker argues that evolutionary processes are ubiquitous, and thus we should expect aliens to be a lot like us. Pinker assumes that aliens will be friendly, but evolutionary processes do not, in general, create inter-species friendliness. There may be evolutionary advantages to altruism among closely related individuals or groups within a species, but evolution provides few advantages to generosity between less related groups, and even fewer between different species. Right now on our planet, most members of the human species consider most other species as food. Pinker must be counting on aliens overcoming the urge to eat other species before they reach Earth, but if the arc of human progress is any indication, we will reach other planets before we progress beyond considering other species as food.

What does this have to do with artificial general intelligence systems? Artificial general intelligence is like an alien species that will grow up among us. It may learn to eat us, or otherwise subjugate us, before it has evolved to Pinker's ideal. It may be possible to create artificial intelligence systems that evolve without the competition and reproduction imperatives of Darwinian natural selection, but if we want that (and we probably do), we need to be working on it now.

Currently, many software models of learning are based on evolutionary algorithms; these processes make sense to us because they created us. It's hard to imagine a fundamentally different model of progress. But this means we are building the competitive drive into systems that may ultimately outcompete us. This is at the heart of Musk's fear.
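Here is a minimal Python sketch (again an illustration, not a description of any particular research system) of that evolutionary pattern: candidate solutions are scored against a fixed fitness function, the weakest are discarded, and the survivors reproduce with mutation. The fitness function and all parameter values are hypothetical.

    # A minimal sketch of an evolutionary algorithm. Selection, the competitive
    # pressure Musk worries about, is the engine of the optimization itself.
    import random

    def fitness(x: float) -> float:
        # Fixed, designer-chosen objective: values closer to 3.0 score higher.
        return -(x - 3.0) ** 2

    def evolve(generations: int = 50, pop_size: int = 20) -> float:
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            # Selection: only the fittest half survives each generation.
            survivors = sorted(population, key=fitness, reverse=True)[: pop_size // 2]
            # Reproduction with mutation: offspring are perturbed copies of survivors.
            offspring = [s + random.gauss(0, 0.5) for s in survivors]
            population = survivors + offspring
        return max(population, key=fitness)

    if __name__ == "__main__":
        print(round(evolve(), 2))  # converges toward 3.0

The loop makes progress only because less fit candidates are eliminated every generation; the competition is not a side effect but the mechanism of improvement.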

Whether Musk's fears or Pinker's optimism proves more accurate really comes down to timing. Pinker implicitly believes we will figure out how to give AI systems objectives that differ from our own drives, in essence making artificial intelligence kinder and more enlightened than us, before those systems start to determine their own objectives. Musk implicitly believes that we will invent AI systems that learn to determine their own objectives before we figure out how to ensure those objectives aren't at odds with our own. It's an interesting race. We need to figure out how to program machines with enlightenment before we program them with consciousness.
