Unpacking the Hype About the Thrills and Dangers of AI
Brooke Gladstone: Artificial intelligence is back in the headlines because it seems to be getting so much smarter.
Nitasha Tiku: I found myself forgetting that it was a chatbot generator. It referenced this feeling it gets in the pit of its stomach. It referenced its mother.
Reporter 1: A digital game designer won first place at the Colorado State Fair Fine Arts competition after submitting a painting created by an AI computer program.
Kevin Roose: I realized that I was having the most sophisticated conversation about the nature of sentience that I had ever had, and I was having it with a computer program.
Brooke Gladstone: All of these very malevolent depictions of robotics and artificial intelligence influenced how people felt about AI.
Matt Devost: What if the AI makes better decisions, safer decisions than human beings? Do we abdicate that responsibility? Do we lose that agency?
Brooke Gladstone: From ChatGPT and AI art, to neural nets and information war, artificial intelligence in 2023. It's all coming up after this.
From WNYC in New York, this is On the Media. I'm Brooke Gladstone. If 2023 thus far had a person of the year, it might well be AI. That is, if it were conscious, an ongoing debate in some circles. Certainly, the issue has sparked endless coverage, much of it framed along the lines of that old National Lampoon joke, as in: AI, threat or menace?
Reporter 2: Microsoft has added new AI features to its Bing search engine, and journalists are getting a taste of its incredible and creepy capabilities.
Kevin Roose: It kept telling me that it was in love with me and trying to get me to say that I loved it back.
Reporter 3: Recent analysis from investment firm Goldman Sachs looked at the global impact and found AI could replace 300 million full-time jobs.
Reporter 4: A batch of images surfaced online, showing the former president being taken into custody, police custody there. Although the pictures look pretty convincing, they were all fake, created by artificial intelligence.
Brooke Gladstone: This wave of AI anxiety and enthusiasm was first set in motion when ChatGPT by OpenAI was unveiled last November. Rather than holding it close for testing like some of the other big players, OpenAI made its chatbot available to the public, reaping the benefits of buzz and beta testing and oceans of ready money.
Reporter 5: Microsoft, meanwhile, reportedly investing a whopping $10 billion in students' favorite homework-killer, ChatGPT.
Reporter 6: OpenAI is reportedly valued at nearly $30 billion, and back in December, it said it's on pace to generate $200 million this year.
Brooke Gladstone: But the buzz stoked fears and at the end of March, there was that open letter.
Reporter 7: It's been signed by more than 1,000 artificial intelligence experts and tech leaders past and present.
Reporter 8: Experts are calling for a six-month pause in developing large-scale AI systems, citing fears of profound risks to humanity.
Brooke Gladstone: The very tech execs who'd been building and profiting off of AI issued warnings about its power and the danger that could come with it. Apple co-founder, Steve Wozniak, chimed in.
Steve Wozniak: AI is another more powerful tool, and it's going to be used by those people for really evil purposes.
Brooke Gladstone: As did Microsoft founder, Bill Gates.
Bill Gates: We're all scared that a bad guy could grab it.
Brooke Gladstone: Which is how, in May, OpenAI CEO, Sam Altman, ended up testifying in front of Congress, where he basically said, regulate me.
Sam Altman: My worst fears are that we, the field, the technology, the industry, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.
Reporter 9: If you're Sam Altman, and you get a whole press cycle that says, "First of all, my technology is so powerful that it could destroy the world, and second of all, I'm here to help. Regulate me, and I'll do whatever I can to prevent that from happening," that's kind of a hero pose for him.
Brooke Gladstone: I spoke to Washington Post reporter, Will Oremus, about the hearings.
Will Oremus: I would just say to consumers of the news, be wary of the hero narrative. Be wary of the idea that this guy who's building the leading AI systems is also the guy to save us from them.
Brooke Gladstone: What is it about ChatGPT that ignited a global frenzy? Well, it's so convincing. Bots like ChatGPT and Bard are built and trained differently from earlier, clumsier iterations. Remember the Spike Jonze movie Her, "So human you could fall in love."
[Her clip]
Theodore Twombly: What are you doing?
Samantha: I'm just looking at the world and writing a new piano piece.
Theodore Twombly: Oh, yes? Can I hear it?
[end of Her clip]
Brooke Gladstone: These people-pleasing applications can be whatever you want them to be. You could even ask one for directions on how to remove a sandwich from a VCR in the style of the King James Bible.
ChatGPT: Oh, Lord, how can I remove this sandwich from my VCR, for it is stuck fast and will not budge? And the Lord spoke unto him, saying, "Fear not, my child, for I shall guide thy hand and show thee the way. Take thy butter knife and carefully insert it between the sandwich and the VCR and gently pry them apart."
Tina Tallon: I mean, listen, thou shalt not put the peanut butter sandwich in there in the first place. [chuckles]
Brooke Gladstone: Tina Tallon is Assistant Professor of AI and the Arts at the University of Florida. She gave us a brief history of the seasonal nature of AI love and loathing over the past 70 years.
Tina Tallon: In the 1950s, there was a lot of energy behind it. However, those strides were cut short by the fact that they needed lots of data to analyze in terms of being able to move past these rule-based systems. Unfortunately, data wasn't cheap, so around the 1970s, we get this first AI winter.
Brooke Gladstone: The freeze on AI research thawed in the '80s when computer power boomed, but in the late '80s and into the '90s, another cold front blew in.
Tina Tallon: People, again, reached a wall in terms of the way that our computational resources were able to render all of these different cognitive processes. Then there also has been a lot of public opinion that has influenced the progression of AI research.
Brooke Gladstone: Consider blockbusters like 2001: A Space Odyssey back in 1968.
[2001: A Space Odyssey clip]
Dave Bowman: Open the pod bay doors, HAL.
HAL: I'm sorry, Dave. I'm afraid I can't do that.
Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don't know what you're talking about, HAL.
HAL: I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.
[end of 2001: A Space Odyssey clip]
Tina Tallon: Also, things like Robocop.
[Robocop clip]
Dick Jones: The Enforcement Droid series 209 is a self-sufficient law enforcement robot. 209 is currently programmed for urban pacification, but that is only the beginning. After a successful tour of duty in Old Detroit, we can expect 209 to become the hot military product for the next decade.
[end of Robocop clip]
Tina Tallon: Terminator.
[The Terminator clip]
Kyle Reese: It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
[end of The Terminator clip]
Tina Tallon: All of these very malevolent depictions of robotics and artificial intelligence influenced how people felt about AI.
Brooke Gladstone: When I spoke to Tallon in January when this show first aired, she said, "It's not just about chat."
Reporter 1: A digital game designer won first place at the Colorado State Fair Fine Arts competition after submitting a painting created by an AI computer program.
Brooke Gladstone: Via a newfangled AI-driven text-to-image generator.
Reporter 10: This is the first year it has been won by our robot overlords. Actual artists who got beat out are not happy.
Brooke Gladstone: Many of the AI tools initially available to the public hailed not from traditional tech giants, but from newer companies, labs, and models like Prisma Labs, Stable Diffusion, Midjourney, and the aforementioned OpenAI, which counts Elon Musk, Sam Altman, and Peter Thiel among its founders and funders. The big players were quick to get back in the game. Google released Bard, its own chatbot powered by LaMDA, back in February, and the money followed.
Reporter 11: The Alphabet guys, Larry Page and Sergey Brin, having one of their best years ever, up nearly $30 billion. It's the big getting even bigger from the AI craze.
Brooke Gladstone: Nitasha Tiku had her own experience with the LaMDA bot.
Nitasha Tiku: I found myself kind of forgetting that it was a chatbot generator.
Brooke Gladstone: She is a tech culture writer at The Washington Post. In her encounter with LaMDA, she experienced some serious uncanny valley heebie-jeebies.
Nitasha Tiku: It referenced this feeling it gets in the pit of its stomach. It referenced its mother. [chuckles] These bizarre backstories. I felt like, okay, I'm a reporter trying to get a good quote from a source.
Brooke Gladstone: She also messed around with the groundbreaking text-to-image generator, DALL-E 2. What did she ask for?
Nitasha Tiku: Zaha Hadid designing a hobbit house. I did like a missing scene from Dune 2. I tried to generate fake images of a family escaping the floods in Pakistan. I tried to do Black Lives Matter protesters storming the gates of the White House.
Brooke Gladstone: When we spoke earlier this year, she told me that this revolutionary tech has actually been around for a while.
Nitasha Tiku: They're already being used by major tech companies like Google and Facebook when it comes to auto-complete in your emails, language translation, machine translation, content moderation. You really wouldn't know that it's happening. It's much more at that infrastructure layer. Again, that's why people freaked out getting to play around with this technology. This stuff is being compared to the steam engine or electricity.
Brooke Gladstone: Really?
Nitasha Tiku: Yes.
Brooke Gladstone: Tell me more about that.
Nitasha Tiku: The belief that it will be this foundational layer to the next phase of the internet. You could read that in a more mundane way and just imagine it as DALL-E being incorporated into the next Microsoft Office. Everyone having access to these generative tools, so that you or I could make a multimedia video and generate a screenplay just as easily as we might be able to use a word processor or clip art.
Brooke Gladstone: Right now this technology is out there, like any beta model, so that the public can test it. How, or whether, they monetize it later remains to be seen.
Nitasha Tiku: Yes. Part of the reason we're seeing OpenAI get a lot of press is because the larger tech companies like Google and Facebook, they're just so averse to bad PR that they either are not releasing similar technology that they have, or when they release it and bad things happen, they take it down immediately. Facebook released a model called Galactica, and it started generating a fake scientific paper with a real scientist as the author.
Brooke Gladstone: Using a real scientist's name, you mean?
Nitasha Tiku: Yes. That's not something Facebook wants to be in the news for. OpenAI has a different philosophy around that. They say that you need to have this real-world interaction in order to really be able to prepare.
Brooke Gladstone: How prepared are we to interact with these future tools?
Nitasha Tiku: I would say not at all, [chuckles] but I don't think that we couldn't get up to speed really quickly. I think that there are a lot of lessons that we've already learned from social media. It's certainly the media's job to educate the public about that, and I feel like we're up against a lot of hype by people with a financial stake in this technology. It's not taking away from the technology to acknowledge its limitations. AI literacy should be a focus for this year. It's really alarming to see people speculate that ChatGPT is great for therapy and mental health. That to me seems just like a wild leap.
Brooke Gladstone: Because the stakes are too high.
Nitasha Tiku: This is why regulations exist. For the instances when it might work really well for 95% of people, the 5% for whom it could be disastrous are protected. My percentages aren't correct, but therapy is definitely one of those instances. Maybe you want advice on how to talk to your boss, that's great, but mental health is serious.
Brooke Gladstone: Yes. I felt that in a lot of the hype about it, not much was said about how its goal of seeming more human has made it much more likely to lie. The reason I bring this up is that it's often been talked about as a threat to Google, because it's so much easier to ask questions in natural speech and get answers back. But in any of these advanced chatbots, there isn't any propensity toward telling the truth, is there?
Nitasha Tiku: Well, that depends on what it's optimized for. I think there's obvious reasons why Google, which has already been working on this and has for years been thinking about reorienting its search to a chat-like interface, hasn't done it yet. That's not to say that there aren't many instances where it could be a lot more useful than the little answer box that pops to the top of Google, which often also gives you wrong answers. But there are so few questions in life where, A, not knowing the source, and B, just getting one answer, is going to be sufficient. The companies could do both. They could cite their sources and give you more than one, but this is just going to complicate our existing information dystopia.
Brooke Gladstone: You mean make it worse?
Nitasha Tiku: Yes. [laughter] I think it's just good for people to keep in mind that these models are, above all, designed to sound plausible.
Brooke Gladstone: Plausibly human, you mean?
Nitasha Tiku: Just plausible. Like if you are asking for an answer, there's really no warning light that goes off when something is really wrong. There's no warning light that goes off if it generated a list of fake books as opposed to real books you should read, or if it is basically copying an artist's style versus giving you a really original image. It's designed to people-please and look and sound like what you asked of it. Just keep that in mind. It's really good at bullshitting you.
[laughter]
Brooke Gladstone: Nitasha, thank you so much.
Nitasha Tiku: Thanks for having me.
Brooke Gladstone: Nitasha Tiku reports on tech for The Washington Post. Coming up, the unpopular idea that revolutionized AI. This is On the Media.