SAN FRANCISCO — Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.
So, on Nov. 19, 2014, Zuckerberg, Facebook’s chief executive, invited Musk to dinner at his home in Palo Alto, California. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.
As they ate, the Facebook contingent tried to convince Musk that he was wrong. But he wasn’t budging. “I genuinely believe this is dangerous,” Musk told the table, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s AI lab.
Musk’s fears of AI, distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. (See: “The Terminator,” “The Matrix,” and “2001: A Space Odyssey.”) Let’s for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.
Neither Musk nor Zuckerberg would talk in detail about the dinner, which has not been reported before, or their long-running AI debate.
The creation of “superintelligence” — the supersmart technological breakthrough that takes AI to the next level and creates machines that not only perform narrow tasks that typically require human intelligence (like self-driving cars) but can actually outthink humans — still feels like science fiction. But the fight over the future of AI has spread across the tech industry.
More than 4,000 Google employees recently signed a petition protesting a $9 million AI contract the company had signed with the Pentagon — a deal worth chicken feed to the internet giant but deeply troubling to many artificial intelligence researchers at the company. This month, Google executives, trying to head off a worker rebellion, said they wouldn’t renew the contract when it expires next year.
Artificial intelligence research has enormous potential and enormous implications, as both an economic engine and a source of military superiority. The Chinese government has said it is willing to spend billions in the coming years to make the country the world’s leader in AI, while the Pentagon is aggressively courting the tech industry for help. A new breed of autonomous weapons can’t be far away.
All sorts of deep thinkers have joined the debate, from a gathering of philosophers and scientists held along the central California coast to an annual conference hosted in Palm Springs, California, by Amazon’s chief executive, Jeff Bezos.
“You can now talk about the risks of AI without seeming like you are lost in science fiction,” said Allan Dafoe, a director of the governance of AI program at the Future of Humanity Institute, a research center at the University of Oxford that explores the risks and opportunities of advanced technology.
And the public roasting of Facebook and other tech companies over the past few months has done plenty to raise the issue of the unintended consequences of the technology created by Silicon Valley.
In April, Zuckerberg spent two days answering questions from members of Congress about data privacy and Facebook’s role in the spread of misinformation before the 2016 election. He faced a similar grilling in Europe last month.
Facebook’s recognition that is was slow to understand what was going on has led to a rare moment of self-reflection in an industry that has long believed it is making the world a better place, whether the world likes it or not.
Even such influential figures as the Microsoft founder Bill Gates and the late Stephen Hawking have expressed concern about creating machines that are more intelligent than we are. Even though superintelligence seems decades away, they and others have said, shouldn’t we consider the consequences before it’s too late?
“The kind of systems we are creating are very powerful,” said Bart Selman, a Cornell University computer science professor and former Bell Labs researcher. “And we cannot understand their impact.”
Linking Brains and Machines
Asilomar is a hotel and conference center in Pacific Grove, California. A group of geneticists gathered there in the winter of 1975 to discuss whether their work — gene editing — would end up harming the world. In January 2017, the AI community held a similar discussion in the beachside grove.
The private gathering was organized by the Future of Life Institute, a think tank built to discuss the existential risks of AI and other technologies.
The heavy hitters of AI were in the room — among them LeCun, the Facebook AI lab boss who was at the dinner in Palo Alto, and who had helped develop a neural network, one of the most important tools in artificial intelligence today. Also in attendance were Nick Bostrom, whose 2014 book, “Superintelligence: Paths, Dangers, Strategies,” had an outsize — some would argue fearmongering — effect on the AI discussion; Oren Etzioni, a former computer science professor at the University of Washington who had taken over the Allen Institute for Artificial Intelligence in Seattle; and Demis Hassabis, who heads DeepMind, an influential Google-owned AI research lab in London.
And so was Musk, who in 2015 had donated $10 million to the institute in Cambridge, Massachusetts. That year, he also helped create an independent artificial intelligence lab, OpenAI, with an explicit goal: create superintelligence with safeguards meant to ensure it won’t get out of control. It was a message that clearly aligned him with Bostrom. “Worth reading Superintelligence by Bostrom,” Musk tweeted in 2014. “We need to be super careful with AI. Potentially more dangerous than nukes.”
On the second day of the retreat, Musk took part in a panel dedicated to the superintelligence question. Each panelist was asked if superintelligence was possible. As they passed the microphone down the line, each said, “Yes,” until the microphone reached Musk. “No,” he said. The small auditorium rippled with knowing laughter. Everyone understood that Musk thought superintelligence was not only possible, but dangerous.
Musk later added, “We are headed toward either superintelligence or civilization ending.”
At the end of the panel, Musk was asked how society could best live alongside superintelligence. What we needed, he said, was a direct connection between our brains and our machines. A few months later, he unveiled a startup, called Neuralink, backed by $100 million that aimed to create that kind of so-called neural interface by merging computers with human brains.
Warnings about the risks of artificial intelligence have been around for years, of course. But few of those Cassandras have the tech cred of Musk. Few, if any, have spent as much time and money on it. And perhaps none has had as complicated a history with the technology.
Just a few weeks after Musk talked about his AI concerns at the dinner in Zuckerberg’s house, Musk phoned LeCun, asking for the names of top AI researchers who could work on his self-driving car project at Tesla. (That autonomous technology was in use at the time of two fatal Tesla car crashes, one in Florida in May 2016 and the other in March of this year.)
During a recent Tesla earnings call, Musk, who has struggled with questions about his company’s financial losses and concerns about the quality of its vehicles, chastised the news media for not focusing on the deaths that autonomous technology could prevent — a remarkable stance from someone who has repeatedly warned the world that AI is a danger to humanity.
The Tussle in Palm Springs
There is a saying in Silicon Valley: We overestimate what can be done in three years and underestimate what can be done in 10.
On Jan. 27, 2016, Google’s DeepMind lab unveiled a machine that could beat a professional player at the ancient board game Go. In a match played a few months earlier, the machine, called AlphaGo, had defeated the European champion Fan Hui — five games to none.
Even top AI researchers had assumed it would be another decade before a machine could solve the game. Go is complex — there are more possible board positions than atoms in the universe — and the best players win through intuition. Two weeks before AlphaGo was revealed, LeCun said the existence of such a machine was unlikely.
A few months later, AlphaGo beat Lee Sedol, the best Go player of the last decade. The machine made moves that baffled human experts but ultimately led to victory.
Many researchers, including the leaders of DeepMind and OpenAI, believe the kind of self-learning technology that underpins AlphaGo provided a path to “superintelligence.” And they believe progress in this area will significantly accelerate in the coming years.
OpenAI recently “trained” a system to play a boat-racing video game, encouraging it to win as many game points as it could. It proceeded to win those points but did so while spinning in circles, colliding with stone walls and ramming other boats.
It’s the kind of unpredictability that raise grave concerns about the rise of AI, including superintelligence.
But the deep opposition to these concerns was on display in March at an exclusive conference organized by Amazon and Bezos in Palm Springs.
One evening, Rodney Brooks, a roboticist at the Massachusetts Institute of Technology, debated the potential dangers of AI with neuroscientist, philosopher and podcaster Sam Harris, a prominent voice of caution on the issue. The debate got personal, according to a recording obtained by The Times.
Harris warned that because the world was in an arms race toward AI, researchers may not have the time needed to ensure superintelligence is built in a safe way.
“This is something you have made up," Brooks responded. He implied that Harris’ argument was based on unscientific reasoning. It couldn’t be proven right or wrong — a real insult among scientists.
“I would take this personally, if it actually made sense,” Harris said.
A moderator finally ended the tussle and asked for questions from the audience. Etzioni, the head of the Allen Institute, took the microphone. “I am not going to grandstand,” he said. But urged on by Brooks, he walked onto the stage and laid into Harris for 3 minutes, saying that today’s AI systems are so limited, spending so much time worrying about superintelligence just doesn’t make sense.
The people who take Musk’s side are philosophers, social scientists, writers — not the researchers working on AI, he said. Among AI scientists, the notion that we should start worrying about superintelligence is “very much a fringe argument.”
Going to Washington
Since their dinner three years ago, the debate between Zuckerberg and Musk has turned sour. Last summer, in a live Facebook video, Zuckerberg called Musk’s views on AI “pretty irresponsible.”
Panicking about AI now, so early in its development, could threaten the many benefits that come from things like self-driving cars and AI health care, he said.
“With AI especially, I’m really optimistic,” Zuckerberg said. “People who are naysayers and kind of try to drum up these doomsday scenarios — I just, I don’t understand it.”
In other words: You’re getting ahead of reality, Elon. Relax.
Musk responded with a tweet. “I’ve talked to Mark about this,” Musk wrote. “His understanding of the subject is limited.”
In April, Zuckerberg testified before Congress, explaining how Facebook was going to fix the problems it had helped create.
One way to do it? By leaning on artificial intelligence. But in his testimony, Zuckerberg acknowledged that scientists haven’t exactly figured out how some types of artificial intelligence are learning.
“This is going to be a very central question for how we think about AI systems over the next decade and beyond,” he said. “Right now, a lot of our AI systems make decisions in ways that people don’t really understand.”
Tech bigwigs and scientists may mock Musk for his Chicken Little routine on AI, but they seem to be moving toward his point of view.
Inside Google, a group is exploring flaws in AI methods that can fool computer systems into seeing things that are not there. Researchers are warning that AI systems that automatically generate realistic images and video will soon make it even harder to trust what we see online. Both DeepMind and OpenAI now operate research groups dedicated to “AI safety.”
Hassabis, the founder of DeepMind, still thinks Musk’s views are extreme. But he said the same about the views of Zuckerberg. The threat is not here, he said. Not yet. But Facebook’s problems are a warning.
“We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come,” Hassabis said. “The time we have now is valuable, and we need to make use of it.”
This article originally appeared in The New York Times.