How the AI Senate Hearing Missed the Mark
From WNYC in New York, this is On the Media. I'm Brooke Gladstone. On Tuesday, Sam Altman, the CEO of OpenAI, testified in front of the Senate Judiciary Committee about the dangers of artificial intelligence. The hearing opened with remarks from Senator Richard Blumenthal.
[music]
Senator Richard Blumenthal: We have seen how algorithmic biases can perpetuate discrimination and prejudice and how the lack of transparency can undermine public trust. This is not the future we want.
Brooke Gladstone: But wait!
Senator Richard Blumenthal: If you were listening from home, you might have thought that voice was mine, but in fact, that voice was not mine. The audio was generated by AI voice-cloning software trained on my floor speeches. The remarks were written by ChatGPT when it was asked how I would open this hearing.
Brooke Gladstone: The stunt was somewhat underwhelming, but sure, point made about how good AI has gotten and the implications for where it might go. At the end of March, there was the open letter.
News clip: It's been signed by more than a thousand artificial intelligence experts and tech leaders past and present.
News clip: The artificial intelligence experts are calling for a six-month pause in developing large-scale AI systems, citing fears of profound risks to humanity.
Brooke Gladstone: Then almost three weeks ago, Geoffrey Hinton, the so-called "Godfather of AI," who we'd interviewed on the show, left his job at Google specifically so that he could-
Geoffrey Hinton: -blow the whistle and say we should worry seriously about how we stop these things getting control over us. It's going to be very hard and I don't have the solutions. I wish I did.
Brooke Gladstone: Apple co-founder Steve Wozniak also chimed in.
Steve Wozniak: It's going to be used by people for basically really evil purposes.
Brooke Gladstone: As did Microsoft co-founder Bill Gates.
Bill Gates: We're all scared that a bad guy could grab it.
Brooke Gladstone: OpenAI's Sam Altman basically said to Congress, "Regulate me."
Sam Altman: It is essential that powerful AI is developed with democratic values in mind, and this means that US leadership is critical.
Brooke Gladstone: Will Oremus writes about technology in the digital world for The Washington Post. He says the vibe of Tuesday's session was worlds away from the ones where lawmakers rake social media execs over the coals.
Will Oremus: This was very different. This was more like some of those low-key hearings you don't end up reading much about in the news, where they have independent expert witnesses who are there to really educate lawmakers about an issue. That's how they were treating Sam Altman, the CEO of OpenAI.
Brooke Gladstone: What was his take? What was his demeanor? Was he the Cassandra of coders?
Will Oremus: He was there to issue warnings about how powerful this technology could be.
Sam Altman: My worst fears are that we cause significant-- we, the field, the technology, the industry, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong. We want to be vocal about that. We want to work with the government to prevent that from happening.
Will Oremus: He was also there to present himself as an ally in making sure that the worst fears aren't realized, but here's the thing. Altman's the one who's building it.
Brooke Gladstone: Yes, I know. That is the thing. [laughs]
Will Oremus: No company has done more to push this particular form of AI. You can call it generative AI, large language models, foundation models, the stuff that underpins something like ChatGPT. They're the ones who released ChatGPT to the public and forced the hand of big companies like Google and Microsoft to respond with their own AI chatbots.
Brooke Gladstone: What's he playing at here? Senator John Neely Kennedy, the Republican of Louisiana, even asked if Altman himself might be the one to head a federal regulatory body overseeing AI.
Senator John Neely Kennedy: Would you be qualified if we promulgated those rules to administer those rules?
Sam Altman: I love my current job.
[laughter]
Senator John Neely Kennedy: Cool. Are there people out there that would be qualified?
Sam Altman: We'd be happy to send you recommendations for people out there, yes.
Brooke Gladstone: That was weird, no? Asking the fox to guard the henhouse.
Will Oremus: Yes, it raises the question: is it still regulatory capture if you don't have to capture anything, if they just hand you the keys to the regulations?
Brooke Gladstone: What were the ideas that were proposed?
Will Oremus: Broadly, there are two ways of thinking about the threats posed by this generation of AI. One way of thinking about it is the way that was prevalent at this hearing. It's that speculative, far-off, what if AI gets too smart? How do we make sure that it doesn't go rogue and kill us all? That's sometimes called "AI safety." There is another framework, sometimes called "AI ethics," that looks more at problems like, how can AI tools be misused by humans, or how could they deceive people? This hearing really focused more on the speculative harms and less on questions like, what if companies start delegating decision-making to AI today and the AI makes bad decisions at a huge scale?
We don't know why because it's a black box, because we don't know exactly how it works or what data it's been trained on. What if tons of people lose their jobs and then we realize it was all a big mistake, or we realize that they've been replaced by these machines that have embedded really insidious biases? Another way of thinking about those two sets of concerns is, on the one hand, you're concerned that AI is going to get too smart. On the other hand, you're concerned that AI today is too dumb, for lack of a better word, that people are going to overestimate its intelligence and use it for things it's not really cut out for.
Brooke Gladstone: Like what things?
Will Oremus: For instance, if you talk to a doctor about people using the internet for medical research, they'll laugh ruefully about Dr. Google.
Brooke Gladstone: The University of Google.
Will Oremus: Right, and it's not always the most reliable information. That said, when I google my symptoms, I know that there are certain sites that are maybe more reliable than others. I can go to those and I can take it with a grain of salt because I know whose site I'm on. Now, think about how Google and Microsoft want to build chatbots like ChatGPT into their search engines. In fact, they're already doing it. What about when people start asking medical questions to a chatbot? What data was that trained on? Was that trained on WebMD or was that trained on some conspiracy quack's blog? We don't know.
Brooke Gladstone: The ideas that came up for regulating this nascent industry included things like licensing AI models, scoring them on certain benchmarks, and ensuring that AI is identified as AI and can't pose as human. Did you see anything in this that could address some of these short-term, present-day concerns?
Will Oremus: There were a lot of ideas floated. Some of them, I think, do address some of the shorter-term issues. There were some calls for the AI companies not to train their models on the copyrighted works of artists without those artists' consent. Senator Marsha Blackburn of Tennessee, whose state is home to Nashville's country music industry, wanted to know, "Can you train your model on Garth Brooks's music and then have a program that can make songs that sound just like Garth Brooks, but he doesn't get any royalties?"
Altman downplayed that concern, but what he didn't say was that they've already done it. They've already trained their models on copyrighted works without consent. There is no unbaking that bread. It's in there. Even though the headline from this hearing was that Sam Altman is inviting regulation, there are certain types of regulation he was definitely not inviting, and that was one of them.
Another one came from one of the expert witnesses, Gary Marcus, a professor emeritus at NYU who's been a long-time observer of and expert on AI. He repeatedly said we need some transparency about the data sources for these models so that we can even begin to research what the biases might be. That is something that OpenAI does not support and that Altman sidestepped in the hearing. Then one other line of regulatory attack on these AI models would be around liability.
When a chatbot says something that turns out to lead to harm, say bad medical or financial advice that leads to someone's ruin or even death, will the AI companies be held responsible for that? Altman doesn't think that the AI companies should be held liable if their models steer somebody wrong. Altman says, "Regulate me. I'm making something dangerous," but there are things he doesn't want the government's help on, things that would be problematic for the way that OpenAI is doing business.
Brooke Gladstone: The headline from this hearing was Sam Altman warning that AI could cause great harm to society. That's pretty catchy.
Will Oremus: I think it would be foolish to sit here, six months after ChatGPT came out, and say that there couldn't be serious harms from a super-smart AI someday.
Brooke Gladstone: You have Stephen Hawking being terrified of it before he died.
Will Oremus: Yes, but the more you focus on how smart AI could be someday, the less you focus on all the ways it falls short today. That's crucial because we're in a moment when almost every industry is asking: How can we make use of AI? How can we show our investors that we're at the forefront? Could we weather this difficult financial period by laying off humans and putting AI in charge of some things?
We're already seeing media companies doing that. I've talked to BuzzFeed, and I've talked to CNET and its parent Red Ventures, and they say, "Well, yes, we're investing heavily in AI, and we're going to let AI write articles now. And, yes, we also disbanded BuzzFeed News and laid off all the reporters, and we laid off humans at CNET, but those two things are entirely unrelated." Nobody's coming out and saying, "We're firing humans and replacing them with AI." If you connect the dots, it is already happening.
Brooke Gladstone: This hearing really is one of the best AI marketing campaigns ever.
Will Oremus: Right. If you're Sam Altman and you get a whole press cycle that says, "First of all, my technology is so powerful that it could destroy the world. Second of all, I'm here to help. Regulate me and I'll do whatever I can to prevent that from happening," that's a hero pose for him. It's worth noting that they did not include some of the people who first brought the warnings about large language models to the public's attention. A few years ago, there was a big hubbub around Google's ethical AI team. It stemmed from a research paper that members of that team had co-authored.
They warned of everything from the climate impact of building ever-bigger models, which require ever more computing power and energy to run, to the biases built into the data that these models were being trained on. Google wouldn't let them publish that paper. Those people are still very active as critics of AI, but they were not invited to be part of this hearing. Instead, you got the industry folks who were asked to come in and inform lawmakers about what the harms might be.
Brooke Gladstone: What sort of lens should the concerned media consumer put on this story and the coverage of it?
Will Oremus: I would just say to consumers of the news, be wary of the hero narrative. Be wary of the idea that this guy who's building the leading AI systems is also the guy to save us from them. There are many other voices out there with different things to say about the risks posed by AI. I think it's really important that those voices get heard and listened to when they speak up. We need to be aware of AI's limitations in order to have any hope of using it for good.
Brooke Gladstone: Will, thank you very much.
Will Oremus: Thanks for having me.
Brooke Gladstone: Will Oremus writes about the ideas, products, and power struggles shaping the digital world for The Washington Post. Coming up, the writers are restless. This is On the Media.