David Remnick: This is The New Yorker Radio Hour. I'm David Remnick. We're talking today about the promise and the danger of artificial intelligence. Computer scientist Yoshua Bengio began working on AI in the '80s and the '90s, and he's been called one of the godfathers of AI. Bengio focused specifically on neural networks. That's the idea that software can somehow mimic how the brain functions and learns. The brain itself is a kind of network.
Now, at the time, most scientists thought this would never really work out, but Bengio and a few others persevered. Their research led to advances in voice recognition, robotics, and much more. In 2018, Bengio received the Turing Award, a kind of Nobel Prize of computing, alongside Geoffrey Hinton and another colleague. ChatGPT also rests on the foundation that Bengio helped to build; it's a neural network. But Bengio, instead of celebrating this remarkable achievement in his field, has had quite a different reaction.
In March, a group of very prominent people in tech signed an open letter that said that all AI producers should stop training their systems for at least six months, and you signed that letter. Even Elon Musk, who's not known for his overweening sense of caution, also signed the letter. Please tell me how that letter came about and what was the motivation.
Yoshua Bengio: We saw the unexpected and rapid rise of the abilities of AI systems like ChatGPT and then GPT-4. We didn't ask to stop all AI research, development, and deployment, only those very powerful systems that are of concern to us. I believe there is a non-negligible risk that this kind of technology in the short term could disrupt democracies. In the coming years, the advances that people are working on could lead to a loss of control of AI, which could have potentially even more catastrophic impacts.
David Remnick: I just spoke to Sam Altman, and I asked him about what seems to be the most frightening concern of all that an AI entity could basically become a sentient creature that could rewrite its own source code, and somehow, as if in a horrifying science fiction movie, break free from human control. Altman assured me this is very unlikely. What do you think?
Yoshua Bengio: Did he say it was unlikely with the current systems or in the future?
David Remnick: With the current systems, to be sure.
Yoshua Bengio: Yes, so I agree with him.
David Remnick: What about in the future?
Yoshua Bengio: For the future, yes, there is a real risk, and it's a risk we don't understand well enough. In other words, you'll see experts like my friend Yann LeCun saying one thing, and other experts, like Geoffrey Hinton and me, saying the opposite. The scenarios by which bad things can happen haven't been sufficiently discussed and studied. A lot of people talk about AI alignment, in other words, the fact that you may ask a machine to do something, and it may act in a different way that could be dangerous.
There is an alignment problem of a different kind between what's good for society, the general well-being of people, and what companies are optimizing, which is profit, under the constraint of staying legal. It's actually interesting, because I find it an inspiration for better understanding what can go wrong with AI. You can think of corporations as a special kind of intelligence, not quite completely artificial, because there are human beings in there, but one that can behave in a similar way.
We try to bring corporations back into alignment with what society needs through all kinds of laws and regulations. In particular, in the case of AI, I think we need a regulatory framework that's going to be very adaptive, because the technology moves quickly and the science moves quickly. We don't want Congress, or parliaments in other countries, to be the ones dictating the details. We want them to assign a more professional body, not politicians but experts, to find the best ways to protect the public.
David Remnick: Well, how would you and Geoffrey Hinton and others describe a very bad outcome? What is the scenario that you envision that's at least possible and unpredictable and dangerous?
Yoshua Bengio: Imagine that in a few years, scientists figure out how to build an AI system that would be autonomous and could be dangerous for humanity because it would have its own goals that may conflict with ours. Maybe we will also have figured out how to build safe AI that wouldn't behave like this. The problem is that we would have that choice. Maybe the scientists in the labs would choose the good AI solution, but there could be somebody, anywhere in the world, with access to the required compute, which right now isn't that much, who chooses otherwise. Think about ChatGPT: you don't need to retrain it. You just need to give it the right instructions. Anybody can do that.
David Remnick: What is the scenario that you see in specific terms as a possibility that you are trying to prevent?
Yoshua Bengio: There's a project called Auto-GPT, which arose just in the last few weeks or months, that makes it possible to turn something with very little or no agency, like ChatGPT, into something that actually pursues goals a human types in, creating its own sub-goals to achieve them. It increases the chances of an AI system becoming dangerous for humanity because, for example, it connects that system to the internet, where it could ask people to do things for money through existing services. If, instead of ChatGPT, we had something smarter than humans, which may arrive in as few as five years, I don't know, then that could become catastrophic.
David Remnick: You've raised the idea of AI being exploited in military use. How should the military use artificial intelligence, if at all? What are the dangers there?
Yoshua Bengio: Well, the danger is, first, that we are putting a lot of difficult moral decisions in the hands of machines that may not have the same understanding of right and wrong as we do. You may know the story of the Russian officer who decided not to press the button, in spite of instructions that would probably have led to a catastrophic nuclear exchange, because he thought it was wrong, and it turned out to be a false alarm.
If we build AI systems with agency and autonomy, and they can kill because they are tied to weapons, it just makes the likelihood of something really catastrophic happening larger. Let's say Putin wants to destroy Western Europe and takes advantage of AI technology to do it in a way that might not be possible otherwise. If AIs are embedded into weapons and the military, it just gets easier to have large-scale dangerous impacts.
David Remnick: I have to ask you, you've been working in this field for many years. Why is it suddenly--
Yoshua Bengio: It's decades.
David Remnick: Decades. Suddenly everybody's very concerned about it. There have been rumblings about it over the years, not only in the field but beyond it, and now this level of concern has exploded. What happened, and why wasn't it foreseen a little earlier?
Yoshua Bengio: Well, it was foreseen by some, as you said, a marginal group. But most of us in AI research did not expect that we would get to the level of competence we seem to see in ChatGPT and GPT-4. We expected something like that level to come maybe in 10 or 20 years, and human-level intelligence to come maybe in 50 years. Our horizon for risk just got much shorter.
If you're working on a topic, it's more psychologically comfortable to think that it is good for humanity than to think, oh gee, this could be really destructive. These natural defenses are part of the problem with humans: we're not always rational.
David Remnick: Is there a possibility that AI leads to an even greater disparity, social disparity, income disparity? What prevents a scenario where the benefits of AI are concentrated among a very small slice of the population and vast numbers of people experience dislocation, unemployment, and actually get poorer?
Yoshua Bengio: In general, if you think about what AI is, it's just a very powerful tool, and very powerful tools can clearly be used by people who have power to gain even more power. What prevents that tends to be governments: taxation, for example, and services offered to everyone, and so on, to balance things out.
David Remnick: Are you concerned that the warnings coming from Geoff Hinton, from Steve Wozniak, come across to some people as the warnings of an old guard complaining about a new generation of scientists?
Yoshua Bengio: No. My students are concerned, and there are young people who are concerned. I think that the battle that is shaping up in a way has a lot of points in common with the concerns and the requirement for policies about climate change. A lot of young people are fighting to preserve the interest of future generations. I think something similar is at stake with AI.
David Remnick: One of the confounding things about confronting the climate emergency is the requirement for coordinated international effort. With AI, you not only have that, but also, I think it's fair to say, a very low level of public understanding of the basics. In other words, people can understand dried-up rivers, rising temperatures, rising sea levels, and all the rest. The complications of AI, and predicting those complications, are even more complex, don't you think?
Yoshua Bengio: I think what may bring countries to this international table, which is indeed needed, is their self-interest in avoiding catastrophic outcomes where everyone loses. A good analogy is what happened after the Second World War, mostly between the US and the USSR, and then to some extent China, to come up with agreements to reduce the risks of nuclear Armageddon. I think it's in good part thanks to these international agreements that things have been okay.
David Remnick: The comparison is to arms control, nuclear arms control.
Yoshua Bengio: Yes. It's not exactly the same thing, but I think it's a good model.
David Remnick: Mr. Bengio, thank you so much. I appreciate your time.
Yoshua Bengio: My pleasure. Thanks for having me.
David Remnick: Yoshua Bengio is the scientific director of the Montreal Institute for Learning Algorithms.
[music]
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.