How Tech Journalists Are Fueling the AI Hype Machine
Micah Loewinger: Hey. It's Micah. This is the On the Media midweek podcast. Happy Memorial Day. Hope you did something fun or relaxing. Hope you got to be outside a little bit. We've found that over holiday weekends, our listens dip a little bit, so you might have missed a piece I did on the most recent show about lazy tech journalism and how reporters time and time again fall for whatever Silicon Valley is hawking. They did it with the gig economy, and now they're doing it with artificial intelligence. We were pretty proud of the piece, so we're going to rerun it for the pod extra. Enjoy.
Last week, OpenAI released a demo of its latest technology, GPT-4o, text-based software that responds to prompts and now has a voice, a few actually, but the one called Sky got the most attention.
Sky: You've got me on the edge of my-- Well, I don't really have a seat, but you get the idea. What's the big news?
Micah Loewinger: People online said the demo reminded them of a 2013 film about a man who falls in love with his AI voice assistant, voiced by Scarlett Johansson.
AI voice assistant: Good morning, Theodore.
Theodore: Good morning.
AI voice assistant: You have a meeting in five minutes. Do you want to try getting out of bed? Get out of--
Theodore: You're too funny.
Micah Loewinger: Within hours of the demo's release, OpenAI CEO Sam Altman tweeted the word "Her," the name of that very film, which by the way, he has publicly described as an inspiration for his work. Then, days later--
News clip: The actress said she turned down the offer to be the voice of the artificial intelligence system, and that they made one that sounded just like her.
Micah Loewinger: Johansson said Altman approached her eight months ago, and she turned down his offer to lend her likeness to the software. He approached her again just two days before the release of the demo.
News clip: She said, "I was shocked, angered, and in disbelief, that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference."
Micah Loewinger: In response to requests from Johansson's lawyer, OpenAI has said they're discontinuing the voice they called Sky. The company maintains they hired a voice actor for the job before approaching Johansson and made no attempt to imitate Johansson's voice. The debacle underscored how these large language models rely on human labor and data, often taken without permission. Despite such problems, AI boosters in Silicon Valley and members of the press say that artificial intelligence holds the keys to a shining future.
News clip: We may look on our time as the moment civilization was transformed, as it was by fire, agriculture and electricity.
Sam Harnett: Oh man, when the AI coverage started, I thought, "Here we go again. This is the same old story."
Micah Loewinger: Sam Harnett is the author of a 2020 paper titled, Words Matter: How Tech Media Helped Write Gig Companies into Existence.
Sam Harnett: I wrote it because I was really disappointed with the coverage I was seeing, and some of the coverage I ended up doing.
Micah Loewinger: Today, Sam hosts a podcast called Ways of Knowing, but back in 2015, he was a tech reporter for KQED in San Francisco, filing stories for Marketplace and NPR.
Sam Harnett: I was a young reporter. You got to do these quick stories, and before you know it, you're using all these words, like startup, or tech, or platform. I started thinking, "These words themselves are misleading." Rideshare for an Uber, what are you sharing? You're paying someone to drive you around. You're not sharing anything.
Micah Loewinger: These euphemisms were pushed by the tech industry and quickly adopted by the press during the early days of the gig economy. In his paper, Sam listed several media tropes that defined that era, like the First Person Review. He points to a Time magazine cover story titled, Baby, You Can Drive My Car, and Do My Errands, and Rent My Stuff.
Sam Harnett: Those experiential First Person stories, they're not critical at all. It's all about how you're engaging with this thing and what it's like. Even when they are critical, you're still giving them a lot of free advertising by casting it as a totally new thing.
Micah Loewinger: Yes, but on the consumer side, you could see where your car was before it got to you. You could see who the driver was. You could know how much it was going to cost. You didn't have to give cash to a stranger in a car. That's innovation. No?
Sam Harnett: Well, then when you look at Uber and Lyft, they're using GPS and phones. GPS had been around for decades, phones were relatively new, but Uber and Lyft didn't invent the phones. Really, the innovation seemed to be ignoring local transportation laws and ignoring labor laws, and it was all being cast as techno-utopianism, this inevitable future of work.
News clip: It's a mass transit revolution, sparked by the universal ride-sharing company that goes by only a block letter U on its windshield. Of course, we're talking about Uber.
News clip: I hope that all regulators will take the time to understand that most of these drivers greatly value the freedom and flexibility to be able to work whenever and wherever they want.
News clip: The industry wants those drivers to stay independent contractors. That's cheaper for those companies. It's also at the core of their business.
News clip: What Uber does, this is the future, it is the sharing economy, the marketplace will win, but we've got to support them--
Sam Harnett: Really, it was the past of work. I think it was talking to a lot of taxi drivers and realizing that this is work that has no social safety net. This is work that has no overtime. There's no guaranteed minimum wage. Work that's undoing protections that were hard-fought 100 years ago.
Micah Loewinger: Meanwhile, some outlets focused on what Sam Harnett calls the outlier worker profile. CNBC wrote about 38-year-old David Feldman, who "quit the rat race and left his finance job to make six figures picking up gigs on Fiverr," a site that connects customers with freelancers. The Washington Post ran a story titled Uber's remarkable growth could end the era of poorly paid cab drivers, which cited these claims from the company.
News clip: The people that drive their taxis barely break even, whereas someone who drives an Uber can make a net $90,000 a year.
News clip: The median pay for Uber drivers in New York City, $90,000 a year for a 40-hour workweek.
News clip: Wow. That is the same as a post-secondary science teacher and a financial analyst here. That's a lot of money.
Micah Loewinger: Claims that landed Uber in court.
News clip: The Federal Trade Commission will send nearly $20 million in checks to Uber drivers. This is all part of a settlement with the ride-hailing company. The FTC found Uber exaggerated the yearly and hourly income that drivers could make in certain cities.
Micah Loewinger: Instead of pressing Silicon Valley executives on how these companies were, say, misleading workers, many journalists did uncritical interviews.
Guy Raz: They were threatening to sue you. Right?
John Zimmer: They were threatening to shut us down.
Micah Loewinger: Host Guy Raz in 2018, interviewing Lyft co-founder John Zimmer for NPR's podcast, How I Built This.
John Zimmer: The opportunity was massive, and the regulatory obstacles were just as massive.
Guy Raz: How long did it take for you to overcome those initial regulatory challenges? Was it months, some years?
John Zimmer: I'd say at least a year, probably for that first year.
Sam Harnett: They cast the people behind these companies as heroes who overcome adversity-
Micah Loewinger: Sam Harnett.
Sam Harnett: -who create a thing that the listener wants to succeed. It's astonishing how the tech industry keeps finding ways to get lots of media coverage that ends up turning into lots of investment and lots of power. Speed is imperative, and if they can get up and running quickly enough, and if their business model can become a thing that is regularly used by consumers and embedded in society, then they become too big to regulate.
Paris Marx: I think we see it with a lot of new technologies, whether it's the gig economy, whether it was with crypto a few years ago, whether it's AI.
Micah Loewinger: Paris Marx is the host of a podcast called Tech Won't Save Us, and the writer behind the Disconnect newsletter.
Paris Marx: We often see these very rapid embraces of whatever the next new thing from the tech industry is, and less of a desire to really question the promises that the companies are making about them.
Micah Loewinger: Marx agrees that some of the same media tropes that Sam Harnett identified are recurring now with AI, like the First Person Review.
Paris Marx: After ChatGPT was released in November of 2022, the companies were selling the idea that we were closer than ever to computers matching human-level intelligence. One of the things that we saw a lot of media organizations doing was actually going on ChatGPT and having conversations with it.
There's a really striking example of this that was published in The New York Times by Kevin Roose, their tech journalist. He had this two-hour conversation with a chatbot, which he said wanted to be called Sydney. It had its own name. It told him that it wanted to be alive, and it was ultimately asking Roose to leave his wife and have a relationship with the chatbot. The way it was written ascribed intentionality to the chatbot, as if it was thinking, having these responses, feeling certain things, when actually we know that these chatbots are doing nothing of the sort.
The science fiction author Ted Chiang basically called these chatbots autocomplete on steroids. We're used to using autocomplete on our phones when we're texting people. It's suggesting the next word, and this is just taking it to a new level.
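To make Chiang's point concrete, here is a minimal sketch of next-word prediction, the basic mechanism autocomplete and, at vastly larger scale, chatbots share. The toy corpus and bigram counting here are illustrative assumptions only; real large language models use neural networks trained on enormous datasets, not lookup tables.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": count which word tends to follow which in a tiny
# corpus, then repeatedly emit the most common continuation. This is an
# illustrative stand-in, not how production models are built.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def complete(word, steps=5):
    out = [word]
    for _ in range(steps):
        options = following.get(out[-1])
        if not options:
            break
        # Pick the statistically most likely next word, as a phone keyboard does.
        out.append(options.most_common(1)[0][0])
    return " ".join(out)

print(complete("the"))  # extends the phrase one most-likely word at a time
```

A large language model does the same basic thing, predicting a plausible next token, only with billions of learned parameters instead of a bigram table, which is why its output can read as intentional without any intent behind it.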
Micah Loewinger: The fact that a nascent chatbot with millions of dollars of funding behind it would say such outrageous things, is that not, in and of itself, newsworthy, even if the chatbot's own claims about its human-like intelligence were just outright wrong?
Paris Marx: I think it definitely can be, but then the question is, how do you frame it and how do you explain it to the public? This was February of 2023. ChatGPT was released at the end of November of 2022, so the public was still just getting to know what this technology was. That kind of framing really misleads people as to what is going on there.
Micah Loewinger: Another trope that Harnett lays out in his paper is the founder interview. Today, we've seen so many fawning conversations with tech leaders at the forefront of artificial intelligence.
Paris Marx: Absolutely. One of the ones that really stands out, of course, is an interview that Sundar Pichai, the CEO of Google, did with 60 Minutes back in April of 2023. In this interview, Sundar was talking about how these AIs were a black box and we don't know what goes on in there.
Sundar Pichai: Let me put it this way, I don't think we fully understand how a human mind works either.
Paris Marx: One of the biggest problems there was not just what Sundar Pichai was saying, but that the hosts of the program interviewing him were not really pushing back on any of the narratives he was putting out there.
Scott Pelley: Of the AI issues we talked about, the most mysterious is called emergent properties.
Micah Loewinger: Scott Pelley of 60 Minutes.
Scott Pelley: Some AI systems are teaching themselves skills that they weren't expected to have. For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know.
Micah Loewinger: After the piece came out, AI researcher Margaret Mitchell, who previously co-led Google's AI ethics team, posted on X saying that according to Google's own public documents, the chatbot had actually been trained on Bengali text, meaning this was not evidence of emergent properties. Here's another exaggeration that made its way into a TV news piece.
News clip: The latest version, GPT-4, can even pass the bar exam with a score in the top 10%, and it can do it all in just seconds.
Micah Loewinger: GPT-4 scored in the 90th percentile on the bar exam. Was that legit?
Julia Angwin: Yes. That claim was debunked recently.
Micah Loewinger: Julia Angwin is the founder of Proof News. She recently wrote an op-ed for the New York Times titled Press Pause on the Silicon Valley Hype Machine.
Julia Angwin: An MIT researcher basically reran the test and found that it actually scored in the 48th percentile. The difference is that when you're talking about percentiles, you have to say who the other people in the cohort you're comparing with are. Apparently, OpenAI was comparing to a cohort of people who had previously failed the exam multiple times. [laughs]
Micah Loewinger: OpenAI compared its product to a group that took the bar in February. Those test-takers tend to fail more often than people who take it in July.
Julia Angwin: When you compare it to a cohort of people who passed at the regular rate, you get to this 48th percentile. The problem is that the paper has to be peer reviewed and go through the academic process, so it comes out a year later than the claim.
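The cohort point is easy to see with a little arithmetic. Here's a minimal sketch; the scores below are made up for illustration and are not real bar exam data:

```python
# A percentile is just the share of a comparison cohort scoring below you,
# so the same raw score can look elite or mediocre depending on the cohort.

def percentile(score, cohort):
    """Percent of the cohort scoring strictly below `score`."""
    return 100 * sum(s < score for s in cohort) / len(cohort)

raw_score = 297  # hypothetical exam score

# Hypothetical cohorts: repeat takers tend to score lower overall.
repeat_takers = [230, 240, 250, 255, 260, 265, 270, 280, 290, 300]
all_takers = [250, 270, 280, 290, 295, 300, 305, 310, 320, 330]

print(percentile(raw_score, repeat_takers))  # 90.0 -- "top 10%"
print(percentile(raw_score, all_takers))     # 50.0 -- middle of the pack
```

Same score, different denominator: measured against a pool of prior failers, the score lands in the top 10%; measured against the general pool, it's unremarkable.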
Micah Loewinger: Tell me about Devin. This is a red hot product from a new startup that claims to be an AI software engineer. Can it do what its creators claim it can do?
Julia Angwin: Yes. Devin is from this company called Cognition, which raised about $21 million from investors and came out with what they called an AI software engineer that they said could do programming tasks on its own. The public couldn't really get access to Devin, so there wasn't anything to go on except these videos of Devin supposedly completing tasks.
News clip: I'm going to ask Devin to benchmark the performance of Llama and a couple of different API providers. From now on, Devin is in the driver's seat.
Julia Angwin: The press wrote about it as if it was totally real. WIRED did a piece with the headline, Forget Chatbots. AI Agents Are the Future. Bloomberg did a breathless article about how these programmers are basically writing code that would destroy their own jobs. There was a software developer named Carl Brown who decided to actually test the claims.
Carl Brown: I have been a software professional for 35 years.
Micah Loewinger: Here's Carl Brown on his YouTube channel, Internet of Bugs.
Carl Brown: For the record, personally, I think generative AI is cool. I use GitHub Copilot on a regular basis. I use ChatGPT, Llama 2, Stable Diffusion. All that kind of stuff is cool, but lying about what these tools can do does everyone a disservice.
Julia Angwin: He took one of these videos where Devin was aiming to complete a task, and he tried to replicate exactly what was happening. He did the task in 36 minutes, and the timestamps in the video show that it took Devin more than six hours to do it. What Carl found was that--
Carl Brown: Devin is generating its own errors, and then debugging and fixing the errors that it made itself.
Julia Angwin: The company basically acknowledged it in tweets. They didn't respond to my inquiries, but they basically said, "Yes, we're still trying to make it better." It was a classic example of why journalists shouldn't just believe a video that claims to show something happening without taking a minute to carefully watch the video, or asking for access to the tool themselves.
Micah Loewinger: If I started a company and raised millions of dollars in funding, I would be under a lot of pressure to prove to the public that it works, and you'd think that people who cover Silicon Valley understand that dynamic.
Julia Angwin: Totally. I will tell you that after my piece ran in the New York Times questioning whether we should believe all this AI hype, a reporter at WIRED did an entire piece, basically, trashing my piece. The title of it was, We Should Believe the AI Hype.
Micah Loewinger: Really?
Julia Angwin: Yes.
Micah Loewinger: What was their argument?
Julia Angwin: Basically, that in the future I will be proven wrong because it will all get better. That's the company's argument too, which is like, "Don't believe your lying eyes, believe the future that I'm holding up in front of you." For journalists, I don't think our role is to call the future. I think our role is to assess the present and the recent past.
Micah Loewinger: The recent past tells us that Big Tech is very good at generating hype in the press and using venture capital to grow really fast and influence regulators. I'm not predicting this will happen with AI. It's already happening.
Sam Altman: My worst fears are that we cause significant-- we, the field, the technology, the industry, cause significant harm to the world.
Micah Loewinger: Here's Sam Altman, CEO of OpenAI, testifying before Congress last May and discussing why he thinks AI needs to be regulated.
Sam Altman: I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening.
Micah Loewinger: Just a month later, Time magazine revealed that OpenAI had secretly lobbied the EU to go easy on the company when regulators were drafting what's now the largest set of AI guardrails.
Paris Marx: Because he is treated as the high priest of this AI moment, because he had these compelling narratives that were being backed up by a lot of reporting-
Micah Loewinger: Paris Marx.
Paris Marx: -he was basically able to convince European Union officials to reduce the regulations on his company and his types of products specifically, and that carried through to when the AI Act was finally passed.
Micah Loewinger: All this while technology companies pushed the public along a path that they and members of the press say is inevitable.
Paris Marx: We know that generative AI, the ChatGPTs, the image generators, things like that, are much more computationally intensive than the types of tools we were using previously. They require a lot more computing power. As a result, Amazon, Microsoft, and Google are in the process of a major buildout of large hyperscale data centers around the world to power what they hope will be major demand for these generative AI tools into the future. That obviously requires a lot of energy and a lot of water.
Sam Altman: I think we have paths now to a massive energy transition away from burning carbon.
Paris Marx: In this interview in January with Bloomberg, Altman actually directly engaged with that when he was asked about it.
News clip: Does this frighten you guys? Because the world hasn't been that versatile when it comes to supply, but AI, as you have pointed out, is not going to wait until we start generating enough power.
Sam Altman: It motivates us to go invest more in fusion and invest more in new storage.
Paris Marx: He said that we're actually going to need an energy breakthrough in nuclear technologies in order to power the vision of AI that he has. He didn't stop and say, "If we don't arrive at it, then maybe we won't be able to roll out this vision of AI that I hope to see." Rather, we're just going to have to power it with other energy sources, often fossil energy sources, which would require us to geoengineer the planet to keep it cooler than it would otherwise be because of all the emissions we're creating.
Julia Angwin: The existential question I have about AI is, is it worth it?
Micah Loewinger: Julia Angwin.
Julia Angwin: Is it worth having something that maybe sorts data better or writes an email for you, at the cost of our extremely precious energy? Then also, AI is based on scooping up all this data from the public internet without consent.
Micah Loewinger: As Sam Harnett said, speed is imperative. It's why Big Tech is pushing some half-baked AI features. As of last week, when you type a question into Google, you now see an AI-generated answer. Some people reported that the AI told them to eat rocks and put glue on pizza; the answers weren't presented as jokes, even though the info appears to have been scraped from Reddit and The Onion.
Julia Angwin: There's this AI pioneer, Yann LeCun, who works at Meta. He's their chief AI scientist, and he recently tweeted out something I thought was so perfect. He said, "It will take years for AI to get as smart as cats." [laughter] I thought that was perfect. I should have just run that instead of my column.
[music]
Micah Loewinger: Here's one last issue. When Google AI summarizes legit info from real news sites, there's no need to go to the original source, meaning even less traffic for ailing media organizations. This is yet another reason members of the press should refrain from Silicon Valley boosterism. Janky new tools may be eating our lunch, but if the recipe was made by AI, we should probably wait to dig in.
Thanks for listening to the pod extra. On the big show this week, we'll be discussing the rise of Donald Trump's social media platform, Truth Social. Look out for the show on Friday. You don't want to miss it. Thanks for listening.
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.