I, Robot
Brooke Gladstone: Artificial intelligence is back in the headlines because it seems to be getting so much smarter.
Nitasha Tiku: I found myself forgetting that it was a chatbot generator. It referenced this feeling it gets in the pit of its stomach. It referenced its mother.
Reporter 1: A digital game designer won first place at the Colorado State Fair Fine Arts competition after submitting a painting created by an AI computer program.
Kevin Roose: I realized that I was having the most sophisticated conversation about the nature of sentience that I had ever had, and I was having it with a computer program.
Tina Tallon: All of these very malevolent depictions of robotics and artificial intelligence influenced how people felt about AI.
Matt Devost: What if the AI makes better decisions, safer decisions than human beings? Do we abdicate that responsibility? Do we lose that agency?
Brooke Gladstone: From ChatGPT and AI art, to neural nets and information war, artificial intelligence in 2023. It's all coming up after this.
From WNYC in New York, this is On the Media. I'm Brooke Gladstone. If 2023 thus far had a person of the year, it might well be AI. That is, if it were conscious, an ongoing debate in some circles. Certainly, the issue has sparked endless coverage, much of it framed along the lines of that old National Lampoon joke, as in: AI, threat or menace?
Reporter 2: Microsoft has added new AI features to its Bing search engine, and journalists are getting a taste of its incredible and creepy capabilities.
Kevin Roose: It kept telling me that it was in love with me and trying to get me to say that I loved it back.
Reporter 3: Recent analysis from investment firm Goldman Sachs looked at the global impact and found AI could replace 300 million full-time jobs.
Reporter 4: A batch of images surfaced online, showing the former president being taken into custody, police custody there. Although the pictures look pretty convincing, they were all fake, created by artificial intelligence.
Brooke Gladstone: This wave of AI anxiety and enthusiasm was first set in motion when ChatGPT by OpenAI was unveiled last November. Rather than holding it close for testing like some of the other big players, OpenAI made its chatbot available to the public, reaping the benefits of buzz and beta testing and oceans of ready money.
Reporter 5: Microsoft, meanwhile, reportedly investing a whopping $10 billion in students' favorite homework-killer, ChatGPT.
Reporter 6: OpenAI, which is reportedly valued at nearly $30 billion, and back in December, it said it's on pace to generate $200 million this year.
Brooke Gladstone: But the buzz stoked fears and at the end of March, there was that open letter.
Reporter 7: It's been signed by more than 1,000 artificial intelligence experts and tech leaders past and present.
Reporter 8: Experts are calling for a six-month pause in developing large-scale AI systems, citing fears of profound risks to humanity.
Brooke Gladstone: The very tech execs who'd been building and profiting off of AI issued warnings about its power and the danger that could come with it. Apple co-founder, Steve Wozniak, chimed in.
Steve Wozniak: AI is another more powerful tool, and it's going to be used by those people for really evil purposes.
Brooke Gladstone: As did Microsoft co-founder, Bill Gates.
Bill Gates: We're all scared that a bad guy could grab it.
Brooke Gladstone: Which is how, in May, OpenAI CEO, Sam Altman, ended up testifying in front of Congress, where he basically said, regulate me.
Sam Altman: My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.
Reporter 9: If you're Sam Altman, and you get a whole press cycle that says, "First of all, my technology is so powerful that it could destroy the world, and second of all, I'm here to help. Regulate me, and I'll do whatever I can to prevent that from happening," that's kind of a hero pose for him.
Brooke Gladstone: I spoke to Washington Post reporter, Will Oremus, about the hearings.
Will Oremus: I would just say to consumers of the news, be wary of the hero narrative. Be wary of the idea that this guy who's building the leading AI systems is also the guy to save us from them.
Brooke Gladstone: What is it about ChatGPT that ignited a global frenzy? Well, it's so convincing. Bots like ChatGPT and Bard are built and trained differently from earlier clumsier iterations. Remember the Spike Jonze movie Her, "So human you could fall in love."
[Her clip]
Theodore Twombly: What are you doing?
Samantha: I'm just looking at the world and writing a new piano piece.
Theodore Twombly: Oh, yes? Can I hear it?
[end of Her clip]
Brooke Gladstone: These people-pleasing applications can be whatever you want them to be. You could even ask it for directions on how to remove a sandwich from a VCR in the style of the King James Bible.
ChatGPT: Oh, Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge? And the Lord spoke unto him, saying, "Fear not, my child, for I shall guide thy hand and show thee the way. Take thy butter knife and carefully insert it between the sandwich and the VCR and gently pry them apart."
Tina Tallon: I mean, listen, thou shalt not put the peanut butter sandwich in there in the first place. [chuckles]
Brooke Gladstone: Tina Tallon is Assistant Professor of AI and the Arts at the University of Florida. She gave us a brief history of the seasonal nature of AI love and loathing over the past 70 years.
Tina Tallon: In the 1950s, there was a lot of energy behind it. However, those strides were cut short by the fact that they needed lots of data to analyze in terms of being able to move past these rule-based systems. Unfortunately, data wasn't cheap, so around the 1970s, we get this first AI winter.
Brooke Gladstone: The freeze on AI research thawed in the '80s when computer power boomed, but in the late '80s and into the '90s, another cold front blew in.
Tina Tallon: People, again, reached a wall in terms of the way that our computational resources were able to render all of these different cognitive processes. Then there also has been a lot of public opinion that has influenced the progression of AI research.
Brooke Gladstone: Consider blockbusters like 2001: A Space Odyssey back in 1968.
[2001: A Space Odyssey clip]
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
[end of 2001: A Space Odyssey clip]
Tina Tallon: Also, things like Robocop.
[Robocop clip]
Dick Jones: The Enforcement Droid series 209 is a self-sufficient law enforcement robot. 209 is currently programmed for urban pacification, but that is only the beginning. After a successful tour of duty in Old Detroit, we can expect 209 to become the hot military product for the next decade.
[end of Robocop clip]
Tina Tallon: Terminator.
[The Terminator clip]
Kyle Reese: It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
[end of The Terminator clip]
Tina Tallon: All of these very malevolent depictions of robotics and artificial intelligence influenced how people felt about AI.
Brooke Gladstone: When I spoke to Tallon in January when this show first aired, she said, "It's not just about chat."
Reporter 1: A digital game designer won first place at the Colorado State Fair Fine Arts competition after submitting a painting created by an AI computer program.
Brooke Gladstone: Via a newfangled AI-driven text-to-image generator.
Reporter 10: This is the first year it has been won by our robot overlords. Actual artists who got beat out are not happy.
Brooke Gladstone: Many of the AI tools initially available to the public hailed not from traditional tech giants, but from newer companies, labs, and models like Prisma Labs, Stable Diffusion, Midjourney, and the aforementioned OpenAI, which counts Elon Musk, Sam Altman, and Peter Thiel among its founders and funders. The big players were quick to get back in the game. Google released Bard, its own chatbot powered by LaMDA, back in February, and the money followed.
Reporter 11: The Alphabet guys, Larry Page and Sergey Brin, having one of their best years ever, up nearly $30 billion. It's the big getting even bigger from the AI craze.
Brooke Gladstone: Nitasha Tiku had her own experience with the LaMDA bot.
Nitasha Tiku: I found myself kind of forgetting that it was a chatbot generator.
Brooke Gladstone: She is a tech culture writer at The Washington Post. In her encounter with LaMDA, she experienced some serious uncanny valley heebie-jeebies.
Nitasha Tiku: It referenced this feeling it gets in the pit of its stomach. It referenced its mother. [chuckles] These bizarre backstories. I've felt like, okay, I'm a reporter trying to get a good quote from a source.
Brooke Gladstone: She also messed around with the groundbreaking text-to-image generator, DALL-E 2. What did she ask for?
Nitasha Tiku: Zaha Hadid designing a hobbit house. I did like a missing scene from Dune 2. I tried to generate fake images of family escaping the floods in Pakistan. I tried to do Black Lives Matters protesters storming the gates of the White House.
Brooke Gladstone: When we spoke earlier this year, she told me that this revolutionary tech has actually been around for a while.
Nitasha Tiku: They're already being used by major tech companies like Google and Facebook when it comes to auto-complete in your emails, language translation, machine translation, content moderation. You really wouldn't know that it's happening. It's much more at that infrastructure layer. Again, that's why people freaked out getting to play around with this technology. This stuff is being compared to the steam engine or electricity.
Brooke Gladstone: Really?
Nitasha Tiku: Yes.
Brooke Gladstone: Tell me more about that.
Nitasha Tiku: The belief that it will be this foundational layer to the next phase of the internet. You could read that in a more mundane way and just imagine it as DALL-E being incorporated into the next Microsoft Office. Everyone having access to these generative tools, so that you or I could make a multimedia video and generate a screenplay just as easily as we might be able to use a Word processor or Clip Art.
Brooke Gladstone: Right now this technology is out there, like any beta model, so that the public can test it. Then how they monetize or if they monetize it later remains to be seen.
Nitasha Tiku: Yes. Part of the reason we're seeing OpenAI get a lot of press is because the larger tech companies like Google and Facebook are just so averse to bad PR that they either are not releasing similar technology that they have, or when they release it and bad things happen, they take it down immediately. Facebook released a model called Galactica, and it started generating a fake scientific paper with a real scientist listed as the author.
Brooke Gladstone: Using a real scientist's name, you mean?
Nitasha Tiku: Yes. That's not something Facebook wants to be in the news for. OpenAI has a different philosophy around that. They say that you need to have this real-world interaction in order to really be able to prepare.
Brooke Gladstone: How prepared are we to interact with these future tools?
Nitasha Tiku: I would say not at all, [chuckles] but I don't think that we couldn't get up to speed really quickly. I think that there are a lot of lessons that we've already learned from social media. It's certainly the media's job to educate the public about that, and I feel like we're up against a lot of hype by people with a financial stake in this technology. It's not taking away from the technology to acknowledge its limitations. AI literacy should be a focus for this year. It's really alarming to see people speculate that ChatGPT is great for therapy and mental health. That to me seems just like a wild leap.
Brooke Gladstone: Because the stakes are too high.
Nitasha Tiku: This is why regulations are in place: for the instances when it might work really well for 95% of the people, the 5% for whom it could be disastrous are protected. My percentages aren't correct, but therapy is definitely one of those instances. Maybe you want advice on how to talk to your boss, that's great, but mental health is serious.
Brooke Gladstone: Yes. I felt that in a lot of the hype about it, there wasn't much said about how its goal of being more human has made it much more likely to lie. I bring this up because it's often been talked about as a threat to Google, since it's so much easier to ask natural-speech questions and get answers back, but with any of these advanced chatbots, there isn't any propensity toward telling the truth, is there?
Nitasha Tiku: Well, that depends on what it's optimized for. I think there are obvious reasons why Google, which has already been working on this and has for years been thinking about reorienting its search to a chat-like interface, hasn't done it yet. That's not to say that there aren't many instances where it could be a lot more useful than the little answer box that pops to the top of Google, which often also gives you wrong answers, but there are so few questions in life where, A, not knowing the source, and B, just getting one answer is going to be sufficient. The companies could do both. They could cite their sources and give you more than one, but this is just going to complicate our existing information dystopia.
Brooke Gladstone: You mean make it worse?
Nitasha Tiku: Yes. [laughter] I think it's just good for people to keep in mind that these models are, above all, designed to sound plausible.
Brooke Gladstone: Plausibly human, you mean?
Nitasha Tiku: Just plausible. Like if you are asking for an answer, there's really no warning light that goes off when something is really wrong. There's no warning light that goes off if it generated a list of fake books as opposed to real books you should read, or if it is basically copying an artist's style versus giving you a really original image. It's designed to people-please and look and sound like what you asked of it. Just keep that in mind. It's really good at bullshitting you.
[laughter]
Brooke Gladstone: Nitasha, thank you so much.
Nitasha Tiku: Thanks for having me.
Brooke Gladstone: Nitasha Tiku reports on tech for The Washington Post. Coming up, the unpopular idea that revolutionized AI. This is On the Media.
This is On the Media. I'm Brooke Gladstone. If you show a three-year-old child a picture and ask them what's in it, you'll get pretty good answers.
Child 1: Okay, that's a cat sitting in a bed.
Child 2: The boy is petting their elephant.
Child 3: Those are people, they're going on an airplane.
Child 4: That's a big airplane.
Brooke Gladstone: Those are clips from a 2015 TED Talk by Fei-Fei Li, a Computer Science Professor at Stanford University. She was consumed by the fact that despite all of our technological advances, our fanciest gizmos can't make sense of what they see.
Fei-Fei Li: Our most advanced machines and computers still struggle at this task.
Brooke Gladstone: In 2010, she started a major computer vision competition called the ImageNet Challenge, where software programs compete to correctly classify and detect objects and scenes. Contestants submit AI models that have been trained on millions of images organized into thousands of categories. Then the model is given images it's never seen before and asked to classify them. In 2012, a pair of doctoral students named Alex Krizhevsky and Ilya Sutskever entered the competition with a neural network architecture called AlexNet, and the results were astounding.
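For readers curious about the mechanics the competition is testing, here is a minimal sketch in Python of the same train-then-classify loop: fit a model on labeled images, then ask it to label an image it has never seen. The tiny network and random tensors below are stand-ins for illustration only; this is not AlexNet or the actual ImageNet pipeline.

```python
# Illustrative sketch of the ImageNet-style task: train a classifier on labeled
# images, then ask it to label images it has never seen. This toy convolutional
# network is a placeholder, not AlexNet; the data here is random stand-in data.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 1000):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Training step: images the model has seen, with their correct category labels.
images = torch.randn(8, 3, 224, 224)        # stand-in for a labeled batch
labels = torch.randint(0, 1000, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()

# Evaluation: an image the model has not seen; it must guess the category.
with torch.no_grad():
    unseen = torch.randn(1, 3, 224, 224)
    predicted_class = model(unseen).argmax(dim=1)
```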
Geoffrey Hinton: They did much better than the existing technology, and that made a huge impact.
Brooke Gladstone: Geoffrey Hinton was their PhD advisor at the University of Toronto and collaborator on AlexNet. When we spoke in January, Geoff was still working at Google, but in May, he publicly left the company, specifically so that he could--
Geoffrey Hinton: Blow the whistle and say, we should worry seriously about how we stop these things getting control over us. It's going to be very hard, but for the existential threat of AI taking over, we're all in the same boat. It's like nuclear weapons. If there's a nuclear war, we all lose.
Brooke Gladstone: The warnings hit differently coming from Geoffrey Hinton because he's had a hand in pushing AI along as an explorer and developer of AI technology, especially the architecture of neural networks, since the '70s. It actually began when a high school friend started talking to him about holograms and the brain.
Geoffrey Hinton: Holograms had just come out, and he was interested in the idea that memories are distributed over the whole brain, so your memory of a particular event involves neurons in all sorts of different parts of the brain. That got me interested in how memory works.
Brooke Gladstone: Hologram, meaning a picture or a more, for lack of a better word, holistic way of storing information as opposed to just words. Is that what you mean?
Geoffrey Hinton: No. Actually, a hologram is a holistic way of storing an image as opposed to storing it pixel by pixel. When you store it pixel by pixel, each little bit of the image is stored in one pixel. When you store it in a hologram, every little bit of the hologram stores the whole image. You can take a hologram and cut it in half and you still get the whole image, it's just a bit fuzzier. It just seemed like a much more interesting idea than something like a filing cabinet, which was the normal analogy, where the memory of each event is stored as a separate file in the filing cabinet.
Brooke Gladstone: There was somebody named Karl Lashley, you said, who took out bits of rats' brains and found that the rats still remembered things.
Geoffrey Hinton: Yes. Basically, what he showed was that the memory for how to do something isn't stored in any particular part of the brain. It's stored in many different parts of the brain. In fact, the idea that, for example, an individual brain cell might store a memory doesn't make a lot of sense because your brain cells keep dying and each time a brain cell dies, you don't lose one memory.
Brooke Gladstone: This notion of memory, this holographic idea, was very much in opposition to conventional symbolic AI, which was all the rage in the last century.
Geoffrey Hinton: Yes, you can draw a contrast between two completely different models of intelligence. In the symbolic AI model, the idea is you store a bunch of facts as symbolic expressions, a bit like English but cleaned up so it's not ambiguous. You also store a bunch of rules that allow you to operate on those facts, and then you can infer things by applying the rules to the known facts to get new known facts. It's based on logic, on how reasoning works, and then they take reasoning to be the core of intelligence.
There's a completely different way of doing business, which is much more biological, which is to say we don't store symbolic expressions. We have great big patterns of activity in the brain, and these great big patterns of activity, which I call vectors, these vectors interact with one another and these are much more like holograms.
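To make the contrast concrete, here is a minimal sketch of the symbolic approach Hinton describes: explicit facts, an explicit rule, and inference by applying the rule to known facts until nothing new can be derived. The facts and the single rule are invented for illustration.

```python
# A minimal sketch of the symbolic-AI style Hinton describes: explicit facts,
# an explicit if-then rule, and inference by repeatedly applying the rule to
# known facts until nothing new appears. Facts and rule are illustrative only.
facts = {("cat", "is_a", "mammal"), ("mammal", "is_a", "animal")}

def is_a_transitivity(known):
    """Rule: if X is_a Y and Y is_a Z, then X is_a Z."""
    derived = set()
    for (x, r1, y) in known:
        for (y2, r2, z) in known:
            if r1 == r2 == "is_a" and y == y2:
                derived.add((x, "is_a", z))
    return derived

# Forward chaining: apply the rule until no new facts can be derived.
while True:
    new_facts = is_a_transitivity(facts) - facts
    if not new_facts:
        break
    facts |= new_facts

print(("cat", "is_a", "animal") in facts)  # True: inferred, never stated directly
```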
Brooke Gladstone: You've got these vectors of neural activity.
Geoffrey Hinton: For example, large language models that lead to big chatbots are all the rage nowadays. If you ask, how do they represent words or word fragments? What they do is they convert a symbol that says it's this particular word into a big vector of activity that captures lots of information about the word. They'd convert the word cat into a big vector, which is sometimes called an embedding, that is a much better representation of cat than just a symbol. All the similarities of things are conveyed by these embedding vectors. Very different from a symbol system. The only property a symbol has is that you can tell whether two symbols are the same or different.
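A small sketch of the distinction Hinton is drawing: a symbol can only be equal or unequal to another symbol, while embedding vectors carry graded similarity. The four-dimensional vectors below are made up for illustration; real models learn embeddings with hundreds or thousands of dimensions.

```python
# Symbols can only be "same" or "different"; embedding vectors carry graded
# similarity. These 4-dimensional vectors are invented for illustration.
import numpy as np

# As symbols, "cat" and "dog" are simply unequal; that's all a symbol can say.
print("cat" == "dog")   # False

# As embeddings, similar words end up near each other in vector space.
embeddings = {
    "cat":   np.array([0.9, 0.8, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.9, 0.2, 0.0]),
    "truck": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(embeddings["cat"], embeddings["dog"]))    # high: cat is dog-like
print(cosine(embeddings["cat"], embeddings["truck"]))  # low: cat is not truck-like
```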
Brooke Gladstone: I'm thinking of Moravec's Paradox, which I understand is the observation by AI and robotics researchers that reasoning requires very little computation, while sensorimotor and perception skills require a great deal. He wrote in '88, "It's comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility."
I just wonder, do you think machines can ever think until they can get sensorimotor information built into those systems?
Geoffrey Hinton: There's two sides to this question: a philosophical side and a practical side. Philosophically, I think, yes, machines could think without any sensorimotor experience, but in practice, it's much easier to build an intelligent system if it has sensory input. There's all sorts of things you learn from sensory input, but the big language models that lead to these chatbots, many of them just have language as their input.
One thing you said at the beginning of this question was that reasoning is easy and perception is hard. I'm paraphrasing. That was true when you used symbolic AI, when you try to do everything by having explicit facts and rules to manipulate them. Perception turned out to be much harder than people thought it would be. As soon as you have big neural networks that learn and learn these big vectors, it turns out one kind of reasoning is particularly easy, and it's the kind that people do all the time and is most natural for people, and that's analogical reasoning.
Brooke Gladstone: Analogical reasoning, one thing is like another.
Geoffrey Hinton: Yes, we're very good at making analogies.
Brooke Gladstone: You went on to study psychology, and your career in tech, in which you are responsible for something that amounts to a revolution in AI, was an accidental spinoff of psychology. You went on to get a PhD in AI in the '70s at the oldest AI research center in the UK, the University of Edinburgh. You were in a place where everyone thought that what you were doing, studying memory as multiple stable states in a system, wouldn't work; that, in fact, what you were doing, studying neural networks, was resolutely anti-AI. You weren't a popular guy, I guess.
Geoffrey Hinton: That's right. Back then, neural nets and AI were seen as opposing camps. It wasn't until neural nets became much more successful than symbolic AI that all the symbolic AI people started using the term AI to refer to neural nets so they could get funding.
Brooke Gladstone: When explaining to a non-technical person what a neural network is and why it was revolutionary compared to symbolic AI, a lot of it hinges on what you think a thought is.
Geoffrey Hinton: I recently listened to a podcast where Chomsky repeated his standard view that thought and language are very close. Whatever thought is, it's quite similar to language. I think that's complete nonsense. I think Chomsky has misunderstood how we use words. If we were two computers and we had the same model of the world, then it would be very useful, one computer telling the other computer which neurons were active, and that would convey from one computer to another what the first computer was thinking. All we can do is produce sound waves or written words or gestures. That's the main way we convey what we're thinking to other people. A string of words isn't what we're thinking. A string of words is a way of conveying what we're thinking. It's the best way we have because we can't directly show them our brain states.
Brooke Gladstone: I once had a teacher who said, if you can't put it into words, then you don't really understand it.
Geoffrey Hinton: I think there were all sorts of things you can't put into words that your teacher didn't understand.
Brooke Gladstone: [laughs] The only place words exist is in sound waves and on pages.
Geoffrey Hinton: The words are not what you operate on in your head to do thinking. It's these big vectors of activity. The words are just pointers to these big vectors of activity. They're the way in which we share knowledge. It's not actually a very efficient way to share knowledge, but it's the best we've got.
Brooke Gladstone: Today you're considered a kind of Godfather of AI. There's a joke that everyone in the field has no more than six degrees of separation from you. You went on to become a professor at the Computer Science Department at the University of Toronto, which helped turn Toronto into a tech hub. Your former students and postdoctoral fellows include people who are today now leading the field. What's it like being called the godfather of a field that rejected you for the majority of your career?
Geoffrey Hinton: It's pleasing.
Brooke Gladstone: And now all the big companies are using neural nets.
Geoffrey Hinton: Yes.
Brooke Gladstone: How do you define thinking, and do you think machines can do it? Is there a point in comparing AI to human intelligence?
Geoffrey Hinton: Well, a long time ago, Alan Turing, I think he got fed up with people telling him machines couldn't possibly think because they weren't human, and defined what's called the Turing Test. Back then, you had teletypes, and you would type to the computer the question and it would answer the question. This was a sort of thought experiment. If you couldn't tell the difference between whether a person was answering the question or whether the computer was answering the question, then Alan Turing said you better believe the computer is intelligent.
Brooke Gladstone: I admire Alan Turing, but I never bought that. I don't think it proves anything. Do you buy the Turing Test?
Geoffrey Hinton: Basically, yes. There's problems with it, but it's basically correct. The problem is, suppose someone is just adamantly determined to say machines can't be intelligent. How do you argue with them? Because nothing you present to them satisfies them that machines are intelligent.
Brooke Gladstone: I don't agree with that either. I could be convinced if machines had the hologram-like web of experience to draw from, the physical as well as the mental and computational.
Geoffrey Hinton: The neural nets are very holistic. Let me give you an example from ChatGPT. There's probably better examples from some of the big Google models, but ChatGPT is better publicized. You ask ChatGPT to describe losing one's sock in the dryer in the style of the Declaration of Independence. It ends up by saying that all socks are endowed with certain inalienable rights by their manufacturer. Now why did it say, manufacturer? Well, it understood enough to know that socks are not created by God. They're created by manufacturers. If you're saying something about socks, but in the style of the Declaration of Independence, the equivalent of God is the manufacturer. It understood all that because it has sensible vectors that represent socks and manufacturers and God and creation.
That's an example of a kind of holistic understanding, an understanding via analogies that's much more human-like than symbolic AI and that is being exhibited by ChatGPT.
Brooke Gladstone: And that in your view is tantamount to thinking, it is thinking.
Geoffrey Hinton: That's intuitive thinking. What neural nets are good at is intuitive thinking. The big chatbots aren't so good at explicit reasoning, but then nor are people. People are pretty bad at explicit reasoning.
Brooke Gladstone: We don't have identical brains. Our brains run at low power, about 30 Watts, and they're analog. We're not as good at sharing information as computers are.
Geoffrey Hinton: You can run 10,000 copies of a neural net on 10,000 different computers, and they can all share their connection strengths because they all work exactly the same way; they can share what they learned by sharing their weights, their connection strengths. Two computers sharing a trillion weights have an immense bandwidth of information between them, whereas two people who are just using language have a very limited bottleneck.
Brooke Gladstone: Computers are telepathic.
Geoffrey Hinton: It's as if computers are telepathic, right.
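As a rough illustration of the sharing Hinton describes, the sketch below trains two identical copies of a small network on different data and then pools what they learned by averaging their weights. This is one simple scheme chosen for illustration, not a description of any particular production system.

```python
# Identical copies of a network train on different data, then pool what they
# learned by exchanging weights (here, by simple averaging). Illustrative only.
import copy
import torch
import torch.nn as nn

base = nn.Linear(10, 2)                     # one small network
copy_a = copy.deepcopy(base)                # identical copy on "computer A"
copy_b = copy.deepcopy(base)                # identical copy on "computer B"

# Each copy takes a gradient step on its own batch of made-up data.
for net in (copy_a, copy_b):
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
    nn.functional.cross_entropy(net(x), y).backward()
    opt.step()

# Because the copies have exactly the same architecture, they can share what
# they learned directly, weight by weight, in this case by averaging.
with torch.no_grad():
    for p_base, p_a, p_b in zip(base.parameters(), copy_a.parameters(), copy_b.parameters()):
        p_base.copy_((p_a + p_b) / 2)
```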
Brooke Gladstone: Were you excited when ChatGPT was released? We've been told it isn't really a huge advancement; it's just out there for the public.
Geoffrey Hinton: In terms of its abilities, it's not significantly different from a number of other things already developed, but it made a big impact because they did a very good job of engineering it so it was easy to use.
Brooke Gladstone: Are there potential implementations of AI that concern you?
Geoffrey Hinton: People using AI for autonomous lethal weapons. The problem is that a lot of the funding for developing AI is by governments who would like to replace soldiers with autonomous lethal weapons, so the funding is explicitly for hurting people. That concerns me a lot.
Brooke Gladstone: That's a pretty clear one. Is there something subtler about potential applications that give you pause?
Geoffrey Hinton: I'm hesitant to make predictions beyond about five years. It's obvious that this technology is going to lead to lots of wonderful new things. As one example, AlphaFold, which predicts the 3D shape of protein molecules from the sequence of bases that define the molecule, that's extremely useful and is going to have a huge effect in medicine. There's going to be a lot of applications like that. They're going to get much better at predicting the weather. Not beyond 20 days or so, but predicting the weather in like 10 days' time, I think these big AI systems are already getting good at that. There's just going to be huge numbers of applications.
In a sensible society, this would all be good. It's not clear that everything's going to be good in the society we have.
Brooke Gladstone: What about the singularity, the idea that what it means to be human could be transformed by a breakthrough in artificial intelligence or a merging of human and artificial intelligence into a kind of transcendent form?
Geoffrey Hinton: I think it's quite likely we'll get some kind of symbiosis. AI will make us far more competent. I also think that the stuff that's already happened with neural nets is changing our view of what we are. It's changing people's view from the idea that the essence of a person is a deliberate reasoning machine that can explain how it arrived at its conclusions. The essence is much more a huge analogy machine that's forever making analogies between a gazillion different things to arrive at intuitive conclusions very rapidly. That seems far more like our real nature than reasoning machines.
Brooke Gladstone: Have you ever had a flight of fancy of what this ultimately might mean in how we live?
Geoffrey Hinton: That's beyond five years.
Brooke Gladstone: You're right. I see [crosstalk]--
Geoffrey Hinton: I have no idea.
Brooke Gladstone: You warned me. Geoffrey, thank you very much.
Geoffrey Hinton: Okay.
Brooke Gladstone: Geoffrey Hinton is a former engineering fellow at Google Brain. He resigned in May, and has been voicing his concerns about the impending AI arms race and the lack of protection ever since. Coming up, with great computer power comes great responsibility. This is On the Media.
This is On the Media. I'm Brooke Gladstone. Toward the end of my conversation with Geoff Hinton, he touched on a couple of things that need a little more explaining. One of them is AlphaFold.
Geoffrey Hinton: Which predicts the 3D shape of protein molecules from the sequence of bases that define the molecule.
Brooke Gladstone: An important development because protein misfolding is known to contribute to the pathogenesis of diseases like Alzheimer's. AlphaFold is an AI system developed by DeepMind, a subsidiary of Alphabet.
Speaker 1: Now a couple of days ago, DeepMind has announced that its second iteration of the AlphaFold system has "solved the 50-year-old grand challenge problem of protein folding."
Brooke Gladstone: There are other labs working on this software too. This is University of Washington, Seattle Biochemist, David Baker.
David Baker: We've also designed new proteins to break down gluten in your stomach for celiac disease and other proteins to stimulate your immune system to fight cancer. These advances are the beginning of the protein design revolution.
Brooke Gladstone: Hinton also described his fear of autonomous lethal weapons powered by AI. I followed up on that with Matt Devost, an international cybersecurity expert who started his career hacking into systems for the US Department of Defense back in the 1990s. When I first spoke to him in January, he gave me the beginner's class on autonomous lethality.
Matt Devost: Where once a target has been designated by a human decision-maker, the weapon will have autonomy to operate and get there. It'll navigate the terrain properly, make decisions based on how it achieves the impact of that target, for example.
Brooke Gladstone: There isn't a kid back in Oklahoma running it on a board. It can make a decision and change its path based on its own information.
Matt Devost: And probably much more quickly than a human drone operator would be able to achieve. Now that doesn't mean that we're going to take humans out of the decision-making equation with regards to what gets targeted.
Brooke Gladstone: Not yet, anyway. [chuckles]
Matt Devost: Not yet, but in how it achieves the mission and the ability to basically act in a swarm capacity and make decisions amongst themselves, adjusting their mission profile based on the swarm intelligence.
Brooke Gladstone: Yes. That's when multiple weapons are simultaneously operating and communicating with each other-
Matt Devost: With each other.
Brooke Gladstone: -making decisions based on each other's behavior. That's drone technology, but how would the next generation of swarming weapons behave?
Matt Devost: What gets really interesting is if they start to demonstrate an ability to operate in a way that is more humane or cognizant of the human impact than a human decision-maker would be able to do. In which case, now you start to have some autonomy with regards to the targeting itself.
Brooke Gladstone: Can you give me an example of that?
Matt Devost: Say we're trying to target a facility while trying to minimize the potential for collateral damage, and the drone is aware enough to know that a bus just pulled up next to the facility. There is an autonomy built into the weapon that allows it to make a decision, abort a decision, or delay a decision based on a situation that's changing so rapidly that even a human being doesn't have the capacity to make that call.
Brooke Gladstone: Right now, we wouldn't allow weapons to autonomously target, but that could happen one day, and it brings up images of Dr. Strangelove and Fail Safe.
Matt Devost: That is going to be a concern. I think we've articulated pretty clearly, at least at the US government level, that humans will remain in the loop as it relates to targeting other humans. It's different if you're targeting drones, or you're targeting the communications tower, et cetera. We could reach a point in which the drones are more efficient and more humane decision-makers based on the AI capabilities and analytics that they're able to achieve, the same way that we might someday decide that we should allow only self-driving cars. Humans do a really good job of killing a lot of ourselves in motor vehicles every year. There may be a point in time in which the AI is a more sensible and objective decision-maker.
Brooke Gladstone: Obviously, these new AI tools will have an impact on intelligence gathering and collection, and you say that for you, ChatGPT was a wow moment.
Matt Devost: It was for a couple of reasons. One is, it interacts with you based on questions, and you're able to refine it like the same way that you could refine your conversation with a human being. "Tell me more," or make a counterargument. It also does a great job of understanding nuanced concepts.
I gave an example. A friend of mine, Bill Kroll, who used to be Deputy Director of the National Security Agency, had a quote a few years ago where he said, "The cybersecurity industry has a thousand points of light but no illumination." I asked ChatGPT, "What do you think Bill meant when he said that?" It gave an incredible answer. It said, "When someone says that the cybersecurity industry has a thousand points of light and no illumination, they are expressing frustration with the fragmented and disorganized nature of the industry. The term 'a thousand points of light' refers to many different players and stakeholders, including government agencies, private companies and individual security experts. Each of these players brings their own unique perspective and expertise to the field, but the lack of coordination and collaboration among them make it difficult to develop a comprehensive and effective approach to cybersecurity."
Brooke Gladstone: Holy cow.
Matt Devost: That is an incredible response, right? You can tell ChatGPT, "I want you to give a ranking or rating about how confident you are in your analysis. I also want you to provide a counterpoint. Plus, I want you to provide recommendations as to what we can do about this." If you go in and ask it, "What is the probability that Iran will attack a US bank with a cyber weapon?" It gives you a response that flows almost exactly like you would see in an intelligence briefing that might be delivered all the way up to the president's daily briefing.
It's fascinating that it is able to not only query all this knowledge and come up with these great responses, but it can also frame the response from the perspective of the audience expectations.
Brooke Gladstone: But it has been shown over and over again that ChatGPT is fundamentally a people-pleaser.
Matt Devost: Yes.
Brooke Gladstone: It doesn't care if it's true or not.
Matt Devost: Yes.
Brooke Gladstone: It will invent sources in order to give you something that has the exact format you're asking for. You can't trust anything that ChatGPT says, so how can it be helpful in intelligence gathering?
Matt Devost: Yes. The Intelligence community won't use ChatGPT based on ChatGPT's existing training dataset. It'll use it based on data sets that are proprietary to the Intelligence community. What we're about to see in the next year and in the coming years is these domain-specific versions of ChatGPT where I control the training data, or I tell it that it doesn't have to be the human-pleaser. It doesn't have to be conversational. It should use the same heuristics that it's using to derive these answers, but if you don't have a source, you don't invent it. You can't make judgments that aren't based on a particular source. It's a very quick shift to move away from that inherent bias to using the capability in a way that's very meaningful.
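One way to picture the "don't invent a source" constraint Devost describes is as an instruction bundled with an explicit list of allowed sources. The sketch below just builds such a prompt as a string; the documents and the question are placeholders, and no real model or API is called.

```python
# A sketch of the "answer only from approved sources" constraint, framed as a
# prompt for a hypothetical chat model. Document IDs and text are placeholders.
documents = [
    ("DOC-001", "Placeholder excerpt from a vetted, domain-specific report."),
    ("DOC-002", "Another placeholder excerpt the analyst has cleared for use."),
]

def build_grounded_prompt(question: str) -> str:
    sources = "\n".join(f"[{doc_id}] {text}" for doc_id, text in documents)
    return (
        "Answer using ONLY the sources below. Cite the source ID for every claim. "
        "If the sources do not contain the answer, reply 'insufficient sourcing' "
        "instead of inventing one.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

print(build_grounded_prompt("What is the probability of a cyberattack on a US bank?"))
```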
Brooke Gladstone: Give me an example. Would it interrogate a prisoner of war?
Matt Devost: I don't know that it would interrogate a prisoner of war. Although, you could certainly envision where it might be used to augment a human's questions that they're asking. I think it'll probably get really good at threat assessment, making recommendations for remediating vulnerabilities. I think analysts might also use it to help them through their thinking. They might come up with an assessment and say, "Tell me how I'm wrong," and the AI serves as almost the Tenth Man Rule, if you will, where they're by design taking the counterargument. I think there'll be a lot of unique ways in which the technology is used in the Intelligence community.
Brooke Gladstone: How imminent is this kind of technology?
Matt Devost: It's incredibly imminent. The technology clearly exists. We're going to see, with version 4.0, a version that is much more constrained with regards to not making things up and is much more current. One of the existing flaws right now with ChatGPT is that the training data ends in 2021. If you start to have training data that's current as of whatever the model pulled in this morning, that starts to get very, very interesting, and it means that this technology can be applied to real-time issues in the next year or two.
Brooke Gladstone: Another wow moment you had was a challenge several years ago by DARPA, that is the government agency that drives a lot of amazing technology. It gave us the internet, for one thing, and GPS. Tell me about what happened at that DARPA conference.
Matt Devost: Yes, so that was fascinating for me. In cybersecurity, we have these contests that we call Capture the Flag contests, and they really are ways for people to compete to demonstrate who's the top hacker, who's the top person at attacking systems. You hack systems and you take control of them, and then you have to defend the flag. You have to make sure that you patch it and you fix it and you prevent other people from taking over that system and booting you off.
Brooke Gladstone: This is a cyber war game, essentially.
Matt Devost: This is a cyber war game, yes. In 2016, they brought the finalists out to DEFCON, which is the largest hacker conference in the world, in Las Vegas, and they had the six finalists compete. That was another aha moment for me, where I felt like I was living in the future, similar to the way I felt when I encountered ChatGPT at the beginning of December.
I started my career in 1995. It was my job for the Department of Defense to break into systems and show how they were vulnerable and help system owners patch those systems, and here I was being completely replaced by a machine, and the machines were very creative and fast. That's an uncomfortable feeling [chuckles] for somebody in the cybersecurity industry, not because of the displacement, but because of the lack of explainability or the lack of understanding with regards to how resilient the patching is, or making sure that the AI doesn't lose control of its objectives and do something that ends up being malicious behavior. It's definitely a Brave New World in that regard.
Brooke Gladstone: How do we ensure that these weapons are safe to deploy? How do we ensure that they don't commit war crimes?
Matt Devost: Yes. I think we'll have clearly defined ethics around the use of artificial intelligence as it relates to things that could impact human lives or human safety. What's going to be disconcerting is when we encounter adversaries that don't have the same ethics. Do we end up having to unleash some sort of autonomy in our weapons because our adversaries have launched autonomous weapons against us, and find ourselves in a position of having to violate some of our principles because it's the only way to appropriately defend ourselves?
If we dig a little deeper, though, there are some other core risks. These technologies all run on systems that are vulnerable, so we have an underlying responsibility to make sure that the infrastructure is robust and secure. Where the training data has an open collection model, as with ChatGPT drawing intelligence from the internet itself, you also need to be aware of adversaries who might try to pollute that environment. What if I decide to put up blog posts, write websites, take out advertisements, or go on Twitter to push a particular narrative that will influence the decision-making of a particular AI?
Then the third area is going to be around the robustness of the algorithms and making sure that we have removed bias. I think that will drive, in the Department of Defense, a requirement for what we call explainable AI. The AI has to describe to us in understandable terms how it arrived at that decision.
Brooke Gladstone: The argument for the drones was that Americans wouldn't be killed if we used them. Critics say we've overused them because the cost to us is so low. We've already been able to destroy the world many times over for 70 years, but the ability to be more surgical in our destruction, and even to hand off our own autonomy to machines that may well be smarter than we are, is a terrifying prospect.
Matt Devost: It is, right? We need to figure out what levels of agency we want to retain. As it relates to warfighting, we've said, well, we want to maintain the decision-making as it relates to other human beings, but what if, over and over again, AI makes better decisions, safer decisions than human beings? Do we abdicate that responsibility? Do I lose the agency of being able to interpret what is misinformation with my own brain, or do I abdicate it to an AI system that does it for me? That is definitely going to be one of the fundamental questions that we face over the next decade; where do we retain agency, and where do we decide that the machines can do it better?
Brooke Gladstone: You seem to be suggesting that it may turn out that humans are far more dangerous.
Matt Devost: In some domains, the humans might be more dangerous.
Brooke Gladstone: I'm thinking of the Cuban Missile Crisis, and how the tape suggests that John Kennedy was pretty much alone in wanting to make that deal to take American missiles out of Turkey so that Khrushchev would take them out of Cuba. I'm just wondering if there had been an advanced chatbot advisor in the room, whether he would've stood with Kennedy or not.
Matt Devost: Yes, it makes you definitely consider what does the training data look like for a decision like that. I don't want us to think that I'm a fan of abdicating control to the machines. I'm certainly not. We have to figure out which are fundamentally human decisions and which are the ones that can be automated or augmented.
Brooke Gladstone: It depends what you think of human nature, right? If there is a machine that is developed to help us fight the best war, is there a possibility that that machine may say, best not go to war?
Matt Devost: As long as we get it to understand our objectives and our constraints. You could sit and say, "Would the world be a better place right now if Russia were run by some sort of autonomous AI?" Possibly, but if the AI has been programmed with the same biases, the same tendencies, the same ambitions, it might be more efficient than Putin in perpetrating these atrocities.
Brooke Gladstone: Matt, thank you very much.
Matt Devost: Yes, of course. It was my pleasure. I enjoyed the conversation.
Brooke Gladstone: Matt Devost is the CEO and Co-Founder of the global strategy advisory firm, OODA, spelled O-O-D-A, LLC.
That's the show this week. On the Media is produced by Micah Loewinger, Eloise Blondiau, Molly Schwartz, Rebecca Clark-Callender, Candice Wang, and Suzanne Gaber, with help from Shaan Merchant. Our technical director is Jennifer Munson. Our engineer this week was Andrew Nerviano. Katya Rogers is our executive producer. On the Media is a production of WNYC Studios. I'm Brooke Gladstone.