Geoffrey Hinton: “It’s Far Too Late” to Stop Artificial Intelligence
Geoffrey Hinton: The way the brain works is this. Neurons get inputs. If they get enough input, they go ping. The input to a neuron sometimes comes from the senses, but for most neurons, it's input from other neurons. Neurons receiving these pings from other neurons--
David Remnick: Geoffrey Hinton has been thinking about how brains work for a very long time. Hinton is a computer scientist who's been called the godfather of artificial intelligence, AI. For decades, he worked on building computers that would work in a way analogous to the human brain itself. It's an approach known as neural networks. This was an obscure and seemingly fruitless effort for a long while.
Eventually, it paid off beyond anybody's imagination. That work on neural networks led to incredibly intricate machines like DALL-E, which will take your prompts and make you a beautiful picture, or ChatGPT, which, in the last year, put AI on everybody's radar. Well, that was a future that nobody expected. These are machines that learn and perhaps even think.
Geoffrey: -what strength to associate with each incoming ping so that it can decide if it got enough input for it to go ping. That's all there is. That's all you need to know to know how the brain works.
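For readers who want to see what this amounts to in practice, here is a minimal sketch of the artificial neuron Hinton describes, written in Python; the weights, inputs, and threshold are invented purely for illustration:

```python
# A minimal artificial neuron, following Hinton's description: weighted
# incoming "pings" are summed, and the neuron fires (goes ping) if the
# total clears a threshold. All numbers here are made up for illustration.

def neuron_fires(inputs, weights, threshold=1.0):
    """Return True if the weighted sum of incoming pings clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three incoming pings (1 = ping received, 0 = silent) and the strengths
# this neuron has learned to associate with each source.
incoming = [1, 0, 1]
strengths = [0.6, 0.9, 0.5]

print(neuron_fires(incoming, strengths))  # True: 0.6 + 0.5 = 1.1 >= 1.0
```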
David: It's very clear that an AI revolution is at hand here and now, and it's going to reshape our world profoundly. Geoffrey Hinton, the foremost pioneer of neural networks, has come to have concerns about what AI can do. Very serious concerns. Joshua Rothman, The New Yorker's ideas editor, recently talked with Hinton in-depth and we'll hear some of their conversation today. Josh, The New Yorker has just published an entire issue on artificial intelligence, and at the very center of it is your profile of Geoffrey Hinton. Why is he so important? Why is he such a crucial figure?
Joshua Rothman: He's followed the arc of the tech from the very beginning all the way to now. He's 75. During a period of time when nobody thought this technology would work, he continued to work at it and he believed in it and he's ultimately been proven right. He's now said that he's scared about the tech that he worked on for his whole life. He doesn't regret what he did, but he says we need to be realistic about what's been invented, which is a machine that can think the way we can.
David: Tell me a little bit about what it's like to spend time with Geoffrey Hinton. He's a very emotionally rich as well as intellectually rich personality as you portray him.
Josh: What was Hinton actually like as a person? A delightful person. He's a little bit from a prior world. He's not a Silicon Valley techno overlord. He's not an eccentric egomaniac. He's a highly intelligent, basically humble person who's worked on this technology for a long time, who got used to being regular, a regular computer science professor until he was in his 60s when this technology really started to take off. I was pretty intimidated by Geoff. Our first interaction, he gave me a quiz on various subjects and philosophy of mind, I think, just to confirm that we were going to be on the same wavelength. I have to say, I'm basically a regular Joe. I don't really understand. I did a Khan Academy course on linear algebra. I did some things to get ready.
David: You prepared to do this piece by taking a course online in linear algebra.
Josh: I did. [chuckles]
David: Wow.
Josh: AI is such a strange field. It combines physics and math and neuroscience and computers. It's a weird discipline that also pulls in psychology and ideas about learning and all this kind of stuff.
[music]
David: What is AI and what are its implications? Because they seem so varied, so vast. To some people, so scary. To many, many other people, so filled with possibility.
Josh: I think the question of what it is, is a little bit of a contested one. The best way for me to understand it as a mere mortal is to go back to the beginning. The way this all started was with the idea that our brains are powered by neurons that are connected in a network. That led to a field called neural networks, a field of computer science in which computer scientists would create networks of simulated neurons inside computers.
Back in the day, in the '60s, '70s, '80s, you couldn't do that at any meaningful scale. You could build small networks and they could learn small things. They could learn to recognize handwritten digits, for example, like, say, in a ZIP code on an envelope. Over the last many decades, computers have gotten bigger and bigger. They've gotten literally a billion times faster.
The number of neurons that can be simulated inside a computer has grown by that scale. Now, these neural networks, they're not yet as complicated as the ones in our heads, but they're really complicated. They're capable of doing something that certainly looks from the outside to many people like reasoning. Geoff Hinton thinks that they're understanding just like we do, they're reasoning just like we do, and that we should take their mental lives seriously, as it were, and take their intelligence seriously.
David: You spent quite a lot of time with him at his house on an island in Lake Huron. Let's hear some of your interview.
Josh: A lot of people struggle to understand how an AI mind is similar to or different from a human mind. They can't decide or they don't know whether to think of today's AI as similar to the computer programs they've used their whole lives or similar to the people that they converse with. How do you think about that?
Geoffrey: I think today's big AI, things like ChatGPT-4, they're much more similar to people than they are to computer programs. Computers were designed so people could program them. That is, they'd do exactly what you said and they didn't have things like intuition. Now, if you look at what it took to make computers good at chess, you had to give computers intuition. A computer had to be able to look at a board and think, "Oh, that will be a good move," the way a grandmaster does. In fact, they're better than grandmasters at that now.
Using neural nets, we could get computers to learn intuitions. That's very different from logical reasoning. When you're becoming an expert in a domain, to begin with, you use reasoning. A real expert in a domain can just do it intuitively. That's what doctors call clinical experience. They just look at this patient and they know what they've got. They didn't do a lot of reasoning. It's just obvious to them. If they're good doctors, occasionally, it turns out the patient hasn't got that, and they'll use that to revise their intuitions.
Josh: It seems like when I write-- I am a writer. When I write, I'm the one doing it from the top down using my intelligence. It's not like driving a car or riding a bike. It's something I'm choosing to do and those are my ideas coming out of my ego as it were. That's what I think of when I think of high-level intelligence. It's like me writing an article or writing an email or something. It doesn't necessarily seem like it's connected to the world of learned intuitive behavior.
Geoffrey: If you think when you're writing, suppose you're halfway through a sentence, and now you have to choose the next word, there'll be a word that comes to mind because it fits nicely there. Why did it come to mind and how did you decide it fitted nicely? You have no idea. Retrospectively, you can make up a story, which has probably got some element of truth but definitely not the whole story.
Really, what's happening is these big patterns of neural activity that you've learned are, in effect, implementing analogies with lots of different stuff you know, so that that word seems right there. Just the process of selecting the next word when you're writing, which you might say is just you doing auto-complete, involves more or less everything you ever learned. To do auto-complete properly, you have to understand how the world works. That's what you are doing. You're just an auto-complete device.
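Hinton's "auto-complete" framing matches how language models actually choose the next word: every candidate gets a score for how well it fits the context, and the scores are turned into probabilities. A toy sketch, with a made-up context and invented scores standing in for what a trained network would compute:

```python
# Toy sketch of next-word selection: candidates are scored for how well
# they fit the context, scores become probabilities, and the best-fitting
# word "comes to mind." The scores here are invented; in a real language
# model they come from a neural network trained on enormous amounts of text.

import math

context = "The doctor looked at the patient and knew the"

# Invented scores standing in for the network's output for each candidate.
candidate_scores = {"diagnosis": 4.2, "weather": 0.3, "answer": 2.9}

def softmax(scores):
    """Turn raw scores into a probability distribution over candidates."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(candidate_scores)
next_word = max(probs, key=probs.get)
print(f"After '{context}', the model picks: {next_word}")  # "diagnosis"
```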
[music]
David: Josh, Hinton is teasing you here. He's saying that your excellent writing is just auto-complete. At least I hope he's teasing because he's wrong. What are the implications of a machine with intelligence? What could it bring to society in the most positive sense and what should we fear?
Josh: On the positive side, there's ways in which these digital minds are different from ours and I should say usefully different from ours. If you think about what ChatGPT is, I think a lot of us have used ChatGPT at this point, one of the striking things is it seems to have ready access to a huge amount of knowledge more than we do. It doesn't mean that it's smarter than us exactly. If you ask it to translate something between languages or to try to solve an equation or to discourse on the history of economics, it can do that because these artificial intelligences are really good at working with huge amounts of data.
David: Is this capacity because a machine, an artificial intelligence, can have in its head, as it were, all of Google, all of Google Translate, and then begin to work with it, whereas our minds do not have that capacity? Is it a capacity question?
Josh: Yes, it's partly a capacity question as I understand it, but I think it's also different in kind. Think about what our minds are doing right now, for example. You and I are having this conversation. We're moving towards a deeper level of mutual understanding, and all the while maybe there's some part of our brains thinking about what we're going to have for lunch. You're a very busy man. I'm sure you have a lot going on. You're thinking about all that stuff too.
AI is also different in another way, and this is something that Geoff talked about in my piece that I thought was fascinating. If I learn something, how do I communicate it to you? I have to write a 10,000-word article for The New Yorker magazine and you have to read it and try to stay awake during it. If an AI learns something and it wants to communicate it, it just downloads the information and it can be uploaded into another AI.
The first thing that comes to mind is AIs have mastered a level of conversation and of not just responding to what you say but of understanding your intent. There's a piece in the issue that's about this. That's about how AI is affecting coders. One of the ways that ChatGPT is very powerful is that if you're sufficiently educated about computers and you want to make a computer program and you can instruct ChatGPT in what you want with enough specificity, it can write the code for you.
It doesn't mean that every coder is going to be replaced by ChatGPT, but it means that a competent coder with an imagination can accomplish a lot more than she used to be able to. Maybe she could do the work of five coders. There's a dynamic where people who can master the technology can get a lot more done. There's economic consequences that are real. Then there's the bigger fear, science-fictional fear, which Hinton shares.
Geoffrey: There's a whole bunch of risks that concern me and other people have talked about these much more than I have. I'm a latecomer to worrying about the risks because, very recently, I came to the conclusion that these digital intelligences might already be as good as us. They're able to communicate knowledge between one another much better than we can. That's what made me feel I needed to speak out about the existential threat that these things will get to be smarter than us and take over.
Josh: That's because an AI can just copy its learned knowledge out of itself and give it directly to another AI.
Geoffrey: You can have, say, 10,000 different copies of the same neural network with the same knowledge. Each can be looking at different data. When one copy learns something from one part of the data, it can convey it to all the other copies that haven't seen that data simply by telling them how to update their weights, these synapse strengths inside. Now, you and I can't do that because my brain's wired differently from your brain. If you told me the synapse strengths in your brain, it wouldn't do me any good.
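What Hinton describes is, roughly, data-parallel learning: identical copies of one network each see different data and share what they learn by exchanging weight updates rather than words. A rough illustrative sketch, with invented numbers and a placeholder update rule standing in for real backpropagation:

```python
# Rough sketch of weight sharing across copies of the same network.
# Each copy sees different data, computes its own update, and the
# updates are pooled so every copy instantly "knows" what the others
# learned. The data, update rule, and numbers are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
shared_weights = rng.normal(size=4)                        # one set of weights, identical in every copy
copies_of_data = [rng.normal(size=4) for _ in range(3)]    # each copy looks at different data

# Each copy computes an update from its own data (placeholder rule)...
updates = [0.01 * data for data in copies_of_data]

# ...and the averaged update is applied to the shared weights, so all
# copies end up with the combined experience of every copy.
shared_weights += np.mean(updates, axis=0)
print(shared_weights)
```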
Josh: How does that relate to this set of risks that are--
Geoffrey: Okay, so that relates to the existential threat that these things will become smarter than us, and not just a little bit smarter, but a lot smarter, and will also decide to take over. They'll decide to take control. That's the existential threat.
Josh: Why would they decide to do that?
Geoffrey: A very senior official in the European Commission who I was talking to said, "Well, people have made such massive things. Why wouldn't they?"
[music]
David: Computer scientist Geoffrey Hinton, he's speaking with The New Yorker's Joshua Rothman, and we'll continue in a moment.
[music]
David: One year ago, the future arrived loudly. ChatGPT launched at the end of last November and it was all anybody could think about for a while. Suddenly, artificial intelligence wasn't just a tool for advanced scientific research; it was entering all of our lives, right down to your kids cheating on their homework. Our current issue of The New Yorker is all about this explosion in artificial intelligence, the mind-boggling advances, and some of the terrifying possibilities.
As part of that project, our ideas editor, Joshua Rothman, sat down with the so-called godfather of AI, Geoffrey Hinton. We'll continue with that conversation now. Hinton has spent a lifetime helping to teach machines how to learn. Now, he believes that he succeeded almost too well, and he's scared of what may happen when machines are smarter than people and have their own ideas about what to do.
Geoffrey: Suppose it's a chess-playing computer. It wants to win the game. It doesn't have anywhere inside it an ego which thinks, "I want to win the game." It's wired up in such a way that it's trying to win. I think the idea that they don't have intentions and they don't have goals is just wrong.
Josh: Is the idea that people in charge of these systems will give them goals, will start us down this path? Is that what we're envisioning or where would the goals come from that would start the whole problem?
Geoffrey: There's two sources of worry and they're very distinct and have quite different solutions. One worry is bad actors. You can probably imagine Putin giving an autonomous lethal weapon the goal of killing Ukrainians. You can probably imagine he wouldn't hesitate. That's the bad-actor scenario. There's another scenario, which is if you want a system to be effective, you need to give it the ability to create its own sub-goals.
If you want to get back to the US, you're going to need to get to an airport. You have a sub-goal, get to an airport. That's a sub-goal. It's created in order to achieve a bigger goal. If you give an AI some goal and you want it to be effective, it's going to work by creating some sub-goals that'll allow it to achieve the goal. Now, the problem is there's a very general sub-goal that helps with almost all goals and the AI will certainly realize this very quickly.
The very general goal is get more control. If I just get more control, it's going to help with everything. These AIs are going to realize that. Pretty soon, they're going to realize, "Well, if these are my goals, best thing to do is stop humans interfering and just get on with it and do it a sensible way that these stupid humans don't understand." Whatever goals they do have were given to them by us.
A big question called the "alignment problem" is can we give them goals such that they do useful things for us and they never ever want to take over? Nobody knows how to do that. It's no use thinking we could airgap them so they can't actually pull levers or press buttons because they could simply convince us to do it because they're much more intelligent than us.
Josh: Is that a technical problem?
Geoffrey: Yes.
Josh: Then it's also a governance problem.
Geoffrey: It's a technical problem, a governance problem, but we don't even know how to solve the technical problem even if we could do the governance right.
Josh: Even if we could make a law that said, "You're not allowed to make an AI that can go wrong," we wouldn't know how to do that yet.
Geoffrey: Exactly.
Josh: Imagine that you're not a central figure in the history of machine learning, but you are just a regular person. Now, some of the world's biggest companies are saying, "We've developed this technology. It promises all sorts of benefits that you don't really need. You can already drive your car. You can do your job. You can do everything you need to do in your life. We don't know how to control it. It might take your job or it might take over. It might help solve some scientific problems that you don't care about." I think that regular person might just say, "Why don't we just unplug it?"
Geoffrey: Why don't we just stop the development?
Josh: Why don't we just unplug it? We don't need this. We already have intelligences, human beings. We don't need artificial ones.
Geoffrey: It's not unreasonable to say we'd be better off without this. It's not worth the risk just as we might have been better off without fossil fuels. We'd have been far more primitive, but it may not have been worth the risk. It's not going to happen. Because of the way society is, because of the competition between different nations, no one nation could stop it.
If you had a very powerful world governance, if the UN really worked, possibly something like that could stop it. Although even then, it's just so useful. It has so much opportunity to do good like in medicine. You just aren't going to stop it. It's also got so much opportunity to give advantage to a nation by autonomous weapons. The US is already developing autonomous weapons. Yes, that might be a sensible move, but it's far too late.
Josh: My last question in this vein is, what should we do?
Geoffrey: I don't know. It would be great if it was like climate change where someone warning about climate change could say, "Look, we either have to stop burning carbon or we have to find an effective way to remove carbon dioxide from the atmosphere." One of those two is essential and you know what the solution looks like. It's just a question of political will to actually implement something because it's going to be painful. Here, it's not like that. I have no advice. All I'm doing is just warning that this may well be coming. Smart young people should be thinking hard about, "Is it possible to prevent it ever wanting to take over?"
[music]
David: Josh, we know the names Bill Gates, Steve Jobs, Alan Turing, but only very recently have we learned the name, Geoffrey Hinton. We've learned it, A, as somebody who is a great innovator in AI, known very commonly as the godfather of AI, but now also as the apostate of AI. Why has he only emerged now?
Josh: The first question you want to ask is: it's 2023, so how come, in all those decades, you didn't freak out before? He told me, first, no one thought this would work as quickly as it did. If you rewound the tape to the '80s, the '90s, the 2000s, people thought, "Maybe in 50 years' time." He thought maybe in 100 years' time we would reach the place where we currently are.
His view was, "Why worry about it?" [laughs] There's plenty of time to try to sort this out down the road. A lot of people who work in AI have mentioned this to me: when you start using ChatGPT or another modern AI, the first thing you notice is all the ways it's bad. The first thing you see is the way it's not really human, or the mistakes it makes, the things it's not capable of.
Your early impressions are, "Gee, it's a lot of window dressing." The more time you spend, the more you get impressed. It reminds me a little bit. I have two small kids. I have a five-year-old son. When you spend a lot of time around one kid, you notice every little improvement in their mind. You're always noticing things they're learning. If you just visit with friends who have a kid, you're not impressed. You say, "That's a little child."
[laughter]
David: Josh, in this issue of the magazine, along with your profile of Geoffrey Hinton, there's a piece by Eyal Press about AI and facial identification, which seems extremely fraught, quite dangerous. What's the issue at stake there?
Josh: Well, the issue is an AI that provides "evidence" to the police could very well be wrong. The people who are using these systems might not fully understand how they work or what their limitations are. Once a computer system that appears objective, that appears powerful, makes a judgment, it can be pretty hard to contravene that judgment. It's a problem in policing. It could be a problem in medicine. Imagine you're a doctor. An AI system delivers a diagnosis. You think it's probably wrong, but are you going to go up against the computer and go on the record and say, "I disagree with the neural network, with billions of artificial neurons running in a huge data center"?
You might not say that. The larger question is, as these systems get more useful, as they get more integrated into real-world contexts, and as real people have to start being responsible for them, have to start either disputing what they think or acting on what they think and what the systems think, it's going to require a level of literacy and of nuance around the technology that we're just really not equipped to have at the moment, and that we need to start developing if we don't want really negative consequences.
David: Josh, recently, I had a conversation on this show with Sam Altman, one of the people behind ChatGPT. In a very almost bland way, he said that AI could put millions and millions of people out of work, which would cause the government to have to come around and start giving us all universal basic income while the machines do the work and the thinking. He delivered this all in a very, I have to say, matter-of-fact way. Is this a crucial concern for Geoffrey Hinton?
Josh: He's certainly worried about what the young people of today will do for work, how the world of work will be transformed. Well, I guess, can I back up and say one thing broadly?
David: Sure.
Josh: There's huge disagreement about this. No one has a crystal ball. Technologists obviously are in love with what they've made. They really see the potential. They live in the future. That's why they do what they do.
David: They're in love with it or they stand to make an immense fortune?
Josh: Yes, and they stand to make immense amounts of money. I think Hinton is a little different from Altman because he doesn't have any real financial skin in the game. He's not a businessman. He's an academic researcher fundamentally, although he worked for Google for a period-
David: -and walked away as I recall with tens of millions of dollars, no?
Josh: Yes, he sold a company to Google in the early days of the current AI boom for $44 million. Job loss is something that Hinton is very worried about, but I think he's more worried about, I guess you could just say, chaos. His view is the technology is out there, bad people will learn how to use it, and the next period of history will be destabilized by this technology. There isn't an obvious way to solve the problem.
David: Isn't that a little easy? We've been through this now with the internet.
Josh: Arguably. He didn't say this to me, but this is an argument many people make. The best path to understanding and controlling AI is simply to keep working on it.
David: That sounds a little fatalistic.
Josh: I think the word he would use is "stoic." I think he sees his position as a realist one. He thinks that these are problems that aren't going to be worked out through people sitting around and talking about them. They're problems that are going to have to be worked out through the actual development of the technology. A lot of people say one thing and do another thing. It's like they say the technology can't be stopped. Then at the same time, they say, "Let's sign a letter proposing that we stop it for six months." There's a wanting to have it both ways. I don't mean to say that they're being cynical. I think these are just two impulses when you look at a technology like this. One impulse is to say--
David: You're talking about people like Sam Altman to be clear?
Josh: Yes. One impulse is to say, "We can control this," and the other is to say, "What do we mean by we?" There's endless researchers around the world. This technology is available. It's visible. Hinton's view is it's alarming. It freaked me out personally. It also struck me as at least consistent. [laughs] I think if you look at the history of what happened with, say, nuclear weapons, we do now have a regulatory regime around them, but that took a while to build and bad things had to happen first.
David: You also have a Russian leader who, if he wants to, can threaten to use them in Ukraine, right?
Josh: We've never really solved that problem.
David: Do the benefits of AI outweigh the existential threat to humanity, as Hinton himself has posited?
Josh: Before I wrote this piece, I think I had an idea of what AI was, which was that it was just statistics. It was just number-crunching. Learning more about the history of the field and learning more about the innovations that Hinton helped create and that other people helped create, I feel like this technology is incredible. I guess my overall feeling is people are really smart. If we can build it, we can hopefully control it in some way. I left the piece even more impressed by what AI can do, by what it really does. I think it's worth it.
[music]
Josh: I think some critics of AI, or skeptics of AI, feel that the endeavor is somehow anti-human, that saying what the AI does and what we do is the same, for example, diminishes what the mind does, the human mind.
Geoffrey: Right. If you want to be mystical about it and think that humans have some mystical special property that a machine could never have, obviously, it does diminish that. I don't believe humans have mystical properties that machines couldn't have. I believe we're just wonderful and very complicated machines. I shouldn't even use the word "just." We're wonderful and very complicated machines.
Trying to understand how these machines work gives us much more insight into what we are. It tells us about our true nature. I think this is giving us enormous insight into the kinds of machines we are and it's clearly a huge revolution. The Industrial Revolution was when we could replace physical labor with machine labor. Now, we can replace intellectual labor with machine labor. It's a revolution of at least the same scale.
[music]
Josh: Do you think we'll have to think differently about what's valuable about ourselves and what makes us unique?
Geoffrey: That question has an assumption in it, which is what makes us unique. Maybe we're not.
[music]
David: You can read Joshua Rothman's profile of Geoffrey Hinton in The New Yorker and you can find my earlier conversation with Sam Altman of the company that runs ChatGPT at newyorkerradio.org.
[music]
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.