Is That Legal? Plus, DeepSeek and the A.I. Bubble.

Steve Bannon: All we have to do is flood the zone. Bang, bang, bang. These guys will never be able to recover.
Brooke Gladstone: Steve Bannon said that back during the first Trump administration. It still holds true today. From WNYC New York, this is On the Media. Some of President Trump's recent executive orders have journalists asking a lot, "Is this legal?"
Dahlia Lithwick: Because the president is not a king, these are letters of intent, but they don't have any binding legal force, and if we treat them as though they do, they're paralyzing.
Brooke Gladstone: Plus, will China's DeepSeek oh so cheap and transparent pop America's AI bubble?
Ed Zitron: It's always been this idea that America has built these big, beautiful, large language models that require all of this money based on their narrative. There's not actually been much behind them.
Brooke Gladstone: It's all coming up after this.
[music]
Brooke Gladstone: From WNYC in New York, this is On the Media. Micah Loewinger is out this week. I'm Brooke Gladstone. Much news was made in week two of the new administration. Here's some of what happened in Washington alone. Tightly clustered hearings for some of the most dubious Cabinet nominees, the mass firing of inspectors general, an offer to two million federal workers to get paid through September if they quit now, a bold, by which I mean unconstitutional, bid to impound the money Congress has appropriated for pretty much all government programs, and the first fatal commercial airliner crash in the United States in 16 years.
Now, obviously, I'm not saying the President was behind that. It would be indecent to use tragedy to take crude political potshots, and even inferring something like that without a shred of evidence lacks all common sense.
President Donald Trump: Pete Buttigieg, a real winner. Do you know how badly everything has run since he's run the Department of Transportation?
Brooke Gladstone: In that moment of national sorrow, we saw the President address the nation's most pressing concern, diversity.
President Donald Trump: He’s a disaster. He was a disaster as a mayor. He ran his city into the ground, and he’s a disaster. Now he’s just got a good line of bulls–t. The Department of Transportation is a government agency charged with regulating civil aviation. Well, he runs it, 45,000 people, and he’s run it right into the ground with his diversity.
Reporter: I'm trying to figure out how you can come to the conclusion right now that diversity had something to do with this crash?
President Donald Trump: Because I have common sense, okay? And unfortunately, a lot of people don't.
Karoline Leavitt: I commit to telling the truth from this podium every single day.
Brooke Gladstone: White House spokeswoman Karoline Leavitt.
Karoline Leavitt: And I will say it's very easy to speak truth from this podium when you have a President who is implementing policies that are wildly popular with the American people, and that's exactly what this administration is doing.
Brooke Gladstone: Okay. According to a recent Reuters Ipsos poll, 62% of Americans oppose pardoning the January 6 protesters, 59% oppose ending birthright citizenship, 59% also oppose ending federal efforts to hire women or racial minorities, 56% oppose withdrawing from the Paris climate accords, and 70% oppose changing the name of the Gulf of Mexico. A Wall Street Journal poll found that 70% of registered voters also oppose deporting undocumented immigrants who are longtime US residents with no criminal record. Meanwhile, this Trump White House has filled the ether with shiny objects at a level, as he likes to say, that nobody has ever seen.
Steve Bannon: I said, all we have to do is flood the zone. Every day we hit them with three things. They’ll bite on one, and we’ll get all of our stuff done, bang, bang, bang. These guys will never, will never be able to recover.
Brooke Gladstone: Steve Bannon, erstwhile Trump guru, now charged with money laundering and conspiracy, said that back during the first Trump administration. It still holds true today. But of all these things, what set off the greatest alarm so piercing the White House had to beat a hasty retreat and blame it all on the media was its effort to seize total control of the government's purse strings. Agencies were flummoxed. On Tuesday, Medicaid portals in all 50 states were down.
Reporter: This memo that went out late Monday night that caused all sorts of chaos and confusion, ordering federal departments and agencies to freeze what could have been hundreds of billions of dollars in federal aid funding. Now, the OMB has officially rescinded that Monday memo.
Brooke Gladstone: According to that first memo, "The use of Federal resources to advance Marxist equity, transgenderism, and green new deal social engineering policies is a waste of taxpayer dollars." The second memo clarified that the money for nutritional support, student loans, anything that "provides direct benefits to an American" would not be affected. On Wednesday, the White House just killed the original memo, but Spokeswoman Karoline Leavitt said on X that the executive orders on federal funding remain in effect and the ones halting the disbursement of funds from the Bipartisan Infrastructure Law and the Inflation Reduction Act, who knows?
Dahlia Lithwick is a senior editor at Slate, where she hosts the Amicus podcast on the courts and the law. Welcome back, Dahlia.
Dahlia Lithwick: Hello, Brooke.
Brooke Gladstone: So, so much smoke and mirrors. Apparently, the executive orders don't have the power to freeze federal funding, but the Monday memo from the Office of Management and Budget did, and when the OMB pulled the memo, it rescinded the funding freeze. So the executive orders are still in effect, but they're irrelevant? Help.
Dahlia Lithwick: I think I described it as Schrodinger's Constitution, where the law is alive and dead at the same time, and you have to be speculating what is happening in that box, the box being the law here. You have the White House itself saying, okay, we've got an executive order, now we've got a memo that's describing the executive order, but the executive order has no force. But the memo has force. But now we're rescinding the memo, but we're actually only rescinding parts of the memo. There's other parts that are operative. And if you believe, and I think we have to believe, that this is the cornerstone of this kind of authoritarian overtaking of government and of law, then the absolute operative rule is always going to be: sow confusion and doubt in institutions.
Brooke Gladstone: What are the president's legal arguments?
Dahlia Lithwick: The legal argument is I have the right to decide how money is spent. He really thinks that if he just does a "pause" on all Federal spending in order to realign the agencies with his political vision, then there's no harm, no foul, and that would be fine, Brooke, except that that is actually impoundment, and it's not lawful. His argument is that the Impoundment Control Act is just unconstitutional, so he's kind of doing the thing that he does a lot, which is, I just don't like X, so I'm gonna wish it away, and we're gonna use this as a vehicle to get it up to the court, to have the court essentially say that impoundment, that whole doctrine is wrong.
Brooke Gladstone: Which brings us very quickly to this Supreme Court, which obviously handed the President discretion to fire any subordinate who acts against him, immunity from prosecution when engaged in "official acts." Impoundment is an official act? No?
Dahlia Lithwick: Does Trump read that immunity opinion to say that he is, in fact, Henry VIII? Yes. Yes, he does. But I don't think that that's the opinion that John Roberts and Amy Coney Barrett wrote that he can do whatever he wants. There's a long line of case law that goes against what Trump is doing on impoundment. This is how Donald Trump has treated the law from the jump, Brooke, which is the law is what I say it is until somebody stops me.
Brooke Gladstone: Let's pivot to some of the other shiny objects that Trump has let fly in the form of executive orders and proclamations and memoranda. Last week, you noted on Amicus that executive orders don't change the law, that they should be treated "as letters to Santa".
Dahlia Lithwick: They are letters of intent, but they don't have any binding legal force, and if we treat them as though they do, they're paralyzing for all the reasons you said. There's a flurry of them. Many of them are just flagrantly unconstitutional. One of them purports to rewrite the 14th Amendment in the birthright citizenship context. You have to treat it as Trump said, that the 14th Amendment doesn't mean what it says it means, that's illegal, and that puts the onus on us to do something about it as opposed to make us feel passive and freaked out.
Brooke Gladstone: Let's run through a few of these orders. The one called Protecting the American People Against Invasion calls for the immediate removal of those in the US without legal status, not just criminals like he was saying, and it's led to ICE raids in Chicago, Miami, Newark, and other major hubs, and the arrest of, as we speak, about 4,500 people since last Thursday. Legal?
Dahlia Lithwick: Whole chunks of this are flagrantly illegal. The executive order allows the President to deploy the armed forces, including the National Guard, and it orders the Defense and Homeland Security departments to build a wall. It suspends due process. It subjects people to deportation without an opportunity to contest it. One of the really frightening things is what's called Posse Comitatus. This is an 1878 act that prohibits the military from participating in arrests, searches, or seizures. This seems to flagrantly violate Posse Comitatus.
I think the really important thing that makes it unlawful is that the justification adopts this completely bonkers theory that asylum seekers and economic migrants are an invasion under Article 4 of the Constitution. That has been rejected every time it's been attempted, and now I think it comes back to the question you started with, which is, are the courts going to accept that this is an invasion, and if the courts do, then we are in a whole new terrifying world of immigration crackdown.
Brooke Gladstone: How about this one? It's called Reevaluating and Realigning United States Foreign Aid. It freezes foreign aid. It also cancels PEPFAR, which was George W. Bush's emergency plan for AIDS relief. Illegal?
Dahlia Lithwick: This has also had a really chaotic rollout, Brooke, where the Secretary of State carved out a bunch of exemptions, including military aid and food programs for Egypt and for Israel, and then this past Tuesday, they carved out more exemptions, issuing a waiver exempting "life-saving humanitarian assistance". Although that does not include, surprise, surprise, abortion or family planning or gender diversity, equity, and inclusion ideology programs. We should note, this is stopping aid to Ukraine. As to the question of whether this is lawful, this feels like it may be lawful or will be found to be lawful.
Brooke Gladstone: What about the order Prioritizing Military Excellence and Readiness, barring all transgender and non-binary people from serving in the military? The order asserts that those with "gender dysphoria" or with "shifting pronoun usage" or the use of pronouns that inaccurately reflect an individual's sex are unfit to serve in the military. Now, this strikes me as something he can do.
Dahlia Lithwick: Interestingly, this one actually involves law. This is a little bit of a 2.0 because as you'll recall, in 2017, then President Trump announced a ban on trans service members in a tweet. That played out in the courts throughout most of the rest of the first Trump administration. Groups sued. Courts unanimously blocked the policy. The Supreme Court eventually allowed it to take effect while the cases were pending, and then it all went away because the Biden administration rescinded the whole policy in 2021.
This is one where it actually is, I think, illegal. It discriminates against these service members based on their sex, based on their transgender status. It violates the equal protection clause. The US Supreme Court, in the Bostock ruling penned by Neil Gorsuch, explicitly said that discrimination against transgender Americans is impermissible discrimination on the basis of sex.
Brooke Gladstone: So we're contending with dozens of executive orders at the moment, but you think the one that should concern us the most, at least in its legal implications, is the OMB freeze on funding that was rescinded but is still going to be litigated?
Dahlia Lithwick: The OMB order essentially said not only is the president the king, and the other two branches are subordinate, but also that literally nothing that Congress has ever said or done or appropriated, nothing matters. The Congress barely needs to exist, and if the courts accede to that, you have just constructed a system of American monarchy. There's no other way to think about it, and that's not a constitutional crisis, it's a democratic crisis.
Brooke Gladstone: But it lands once again in the lap of the Supreme Court, and as you've pointed out, Trump has one of the losingest records of any administration in history for the Roberts Court. His lawyers did shoddy work on the Census case, on the first travel ban. You've noted he lost one case after another. They gave him immunity for actions while in office. How do you think these extraordinary efforts to dismantle our democracy will fare in that Court?
Dahlia Lithwick: There's a couple of theories of how the Roberts Court thinks about Donald Trump. One is that this is just a really reactive court. The Roberts Court kept bonking him on the head time after time. He lost a lot because of slipshod work. They hated Trump until they got Biden, and they hated Biden so much that they gave Trump immunity, and they told Colorado that they couldn't take him off the ballot. So this is a court that is just gonna constantly ping off whatever executive there is, and we're going to see it become again the court that checks Trump. Now, I'm not super optimistic that's going to happen. I think what's going to happen, you mentioned the travel ban, Brooke. Don't forget. We had one iteration of the travel ban that was blatantly unconstitutional.
Brooke Gladstone: And then they rewrote it a few times.
Dahlia Lithwick: And finally the Court blessed it. I think that what frightens me a little is that the court is going to power wash the roughest edges off some of this. At some point, we're going to all say, well, the Court approved of the fourth iteration of the birthright citizenship claims, or the Court approved of the fourth iteration of the ban on trans soldiers, and then we will say it's constitutional. We really saw this. You mentioned the census case. The court essentially said, "Lie to us better. Give us a better pretext and come back." And that scares me because it means that the shoddy work that we're laughing at, this memo that was two pages of gibbering with no legal analysis, the Court will get a decent version of that someday, and that they may uphold.
Brooke Gladstone: Let's say the Roberts Court says, you can't use the army in these operations stateside. Could Trump just call out the Army anyway?
Dahlia Lithwick: So what you've described is actually the paradigmatic constitutional crisis, right? When the Court says, this is illegal, and the President says, I'm going to do it anyway. By the way, we've had those, and that's when democracy teeters. That's the thing I'm most afraid of. We know from the first Trump presidency, when the court told him no, he stopped. Maybe we can take some solace from that, or maybe he's now surrounded by people who are not in the manner of even, oh my God, I'm gonna say it, the noble Bill Barr.
Brooke Gladstone: Oh, good God.
Dahlia Lithwick: The noble Jeff Sessions, who were not willing to do unconstitutional things. I don't know that the Pam Bondis and the Kash Patels and the Pete Hegseths are going to be checks on him this time.
Brooke Gladstone: California's first ballot initiative for the November 2028 election has been filed and cleared for signature gathering, and that is a call for secession.
Dahlia Lithwick: I don't know what to say to that.
Brooke Gladstone: I guess I'm bringing it up because you were saying we have faced a constitutional crisis before. What was that?
Dahlia Lithwick: When the President is ordering troops into Little Rock.
Brooke Gladstone: Right, but he was ordering troops to Little Rock to uphold the law.
Dahlia Lithwick: In the face of a state that was refusing to--
Brooke Gladstone: Follow it.
Dahlia Lithwick: If the question is, how do we unwind this? I can't control for--
Brooke Gladstone: God damn it, Dahlia, what good are you?
Dahlia Lithwick: I don't know. I mean, I think there are ways to do this. The playbook here is to make us hopeless, to make us say, the courts are all corrupt, the law is whatever they say it is, the press are all liars, I don't care if they shut down the government, what good are they anyway? Maybe this is the hopeful lesson in this OMB memo. The reflexive thought was, Republicans in Congress are going to hate this, right? This is their infrastructure project. Well, that didn't happen. They didn't squawk. The people who squawked were the people who called their representatives and yelled and screamed and the thing was rescinded.
We are very, very powerful. The invisibility of government is the problem here. The sense that I have no idea how student loans work, I don't really know how Medicaid works, I have no understanding of what NIH is, I don't know how cancer research is funded, it's really easy to hate all those things until it's turned off. One of the things that I really loved about the discourse around these are the programs that are getting shuttered was all of the folks who came forward and said, here's what I do all day.
I love that. That's the discourse we have to have. We have to make government visible and urgent. When you hear this ridiculous, "We're going to make government shrink down so small you can drown it in a bathtub," nobody voted for that. That's why it's being done by executive fiat. So, dude, let's vote for government, and to have roads and bridges again, that doesn't seem like a really heavy lift unless we give up.
Brooke Gladstone: Dahlia, thank you so much.
Dahlia Lithwick: Brooke, it is always equal parts delightful and terrifying to be with you. Thank you.
Brooke Gladstone: Dahlia Lithwick is a senior editor at Slate, where she hosts the award-winning podcast Amicus, and she's the author of Lady Justice: Women, the Law, and the Battle to Save America. Coming up, will DeepSeek, the new Chinese AI program, pop America's AI bubble? If it does, what happens next? This is On the Media.
[music]
Brooke Gladstone: This is On the Media. I'm Brooke Gladstone. A recently developed AI chatbot from the Chinese tech startup DeepSeek is sending an urgent message to America's financial markets.
Reporter: We've got a bit of a tech sell-off this morning and it's being caused by earth-shattering developments in the AI space.
Reporter: China's new AI chatbot is sending shockwaves through the tech industry, triggering a massive sell-off on Wall Street. The tech-heavy Nasdaq dropped more than 3% while the S&P 500 slid about 1.5%.
Reporter: Tonight, the biggest single-day loss for a stock ever.
Reporter: Shares of Nvidia, which makes the chips used to power artificial intelligence, plunging nearly $600 billion.
Brooke Gladstone: And this is because DeepSeek says its AI assistant is comparable to American behemoths, OpenAI, Google DeepMind, Anthropic, Meta, but theirs is done cheaper, faster, and with less advanced hardware. Since its release, DeepSeek said it was hit with a cyberattack and temporarily restricted new registrations, and OpenAI claims, shockingly (not), that DeepSeek used OpenAI's intellectual property to make its model, violating OpenAI's terms of service.
An audit released Wednesday by NewsGuard, a US-based analytics firm, found that with news-related prompts, DeepSeek repeated false claims 30% of the time and provided vague or not useful answers 53% of the time, resulting in an 83% fail rate, worse than its Western competitors' average fail rate of 62%. But Ed Zitron, host of the Better Offline podcast and writer of the newsletter Where's Your Ed At, says, yes, of course.
Ed Zitron: In the case of a model based in China and run in China, yes, they have biases that they're going to feed into this model. That's not surprising. It's also important to remember that previous tests of ChatGPT and Google Gemini have found them to be disinformation and misinformation machines as well.
Brooke Gladstone: When I spoke to him this week after the news broke about DeepSeek's release, he told me that AI had been overhyped for far too long.
Ed Zitron: It's important to know how little the AI bubble has been built on. It's always been this idea, based on their narrative, that America has built these big, beautiful, large language models that require tons of the most expensive GPUs, the biggest data centers, because the only way to do this was to just cram more training data into them. DeepSeek found a way to build similar models to GPT-4o, the underpinning technology of ChatGPT, and o1, the reasoning model that OpenAI had built, and make them much, much, much cheaper. Then they published all the research as to how they did it and open-sourced the models, meaning that the source code of these models is there for anyone to basically adapt.
Brooke Gladstone: Now, a GPU, that's an electronic circuit that can perform mathematical calculations at high speed. Is this the chip that Nvidia makes?
Ed Zitron: In a manner of speaking. GPUs, graphics processing units, they're used for everything from hardcore video editing to playing video games and such like that. Well, computer games in the case of a computer. Nvidia has a piece of software called CUDA. Now, all you need to know about that is that GPUs traditionally were used for graphics rendering. What CUDA allowed, and it's taken them 15 years or more to do it, was for you to build software that would run on the GPUs.
What Nvidia started doing was making these very, very, very powerful GPUs. When I say a GPU in a computer, that's a card that goes inside. The GPUs that Nvidia sells are these vast rack-mounted units; they're huge and they require incredible amounts of cooling. Generative AI, so the technology under ChatGPT, runs on these giant GPUs and it runs them hot. They are extremely power-consuming, which is why you've heard so many bad stories about the environmental damage caused by these companies.
Brooke Gladstone: Right. DeepSeek didn't have access to Nvidia chips, or at least not unlimited access, partly because of the Biden sanctions and other things.
Ed Zitron: Now, before the sanctions came in, DeepSeek grew out of a hedge fund called High-Flyer. They had stockpiled these graphics processing units before the sanctions came in, but they'd also used ones that have less memory bandwidth. Nvidia could only sell handicapped ones to China, so they had a combination of these chips. They found ways around both training the models, feeding them data to teach them how to do things, and also the inference, which is when you write a prompt into ChatGPT, it infers the meaning of that and then spits something out, a picture, a ream of text, and so on and so forth. These constraints meant that DeepSeek had to get creative, and they did. Just to be clear, all of this is in their papers. People are running the models themselves now.
Brooke Gladstone: They can also modify them.
Ed Zitron: Exactly, and they can build things on top of them. Now, an important detail here is that one of the big hubbubs is that DeepSeek trained their V3 model, competitive with the underlying technology of ChatGPT, for $5.5 million, versus GPT-4o, the latest model, which cost $100 million or more, according to Sam Altman. They kind of proved you don't really need to buy the latest and greatest GPUs. In fact, you don't even need as many of them as you thought, because they only used 2,048 of them, as opposed to the hundreds of thousands that hyperscalers have. They're building all of these data centers because they need more GPUs and more space to put the GPUs and ways to cool them.
Brooke Gladstone: But didn't ChatGPT or the company that made it, OpenAI, just create, as you say, this new reasoning program a couple of months ago called o1? They say it can answer some of the most challenging math questions and it seemed to have put the company once again at the top of the heap. Are we saying that DeepSeek can do similar things?
Ed Zitron: Yes. Now, when you say it can do these very complex things, this is another bit of kayfabe from this industry.
Brooke Gladstone: What is kayfabe?
Ed Zitron: Kayfabe is from wrestling, specifically, where you pretend something's real and serious when it isn't really. The benchmarking tests for large language models in general are extremely rigged. You can train the models to handle them. They're not solving actual use cases, and on top of that, o1 is unfathomably expensive. When it came out, the large discussion was, "Wow, only OpenAI can do this."
You had similar things with Anthropic, other companies, but OpenAI, top of the pile. That was why they were able to charge such an incredible amount and why they were able to raise $6 billion. Except now, the sneaky Chinese, and I mean that sarcastically, this is just good engineering. They managed to come along and say, "Not only can we do this 30 times cheaper," and to be clear, that number is based on DeepSeek hosting it, so we don't know who's subsidizing that, but nevertheless, not only can they do it cheaper, on top of that, they open source the whole damn thing.
Now anyone can build their own reasoning model using this, or they can reverse engineer it and build their own reasoning model that will run cheaper and on their service. Eventually, you're going to find cloud companies in America that will run these models, and at that point, where's OpenAI's moat? The answer is they don't have one, just like the rest of them.
Brooke Gladstone: DeepSeek isn't entirely open because it didn't say how they trained their AI.
Ed Zitron: Well, they didn't share their training data, but they did say how they trained it, and they were actually extremely detailed.
Brooke Gladstone: But the question is, can we trust their numbers?
Ed Zitron: We don't have to. They published the source code, the research behind everything, and the last week has been crazy. You've seen so many people using it and the model works really well. There are versions of R1 that you can run on a MacBook. This is potentially apocalyptic for OpenAI because even if you don't trust DeepSeek, even if you say, "I do not trust their hosted model. The version that DeepSeek sells access to, I don't trust it." Which is fair. We don't know where it's run and we don't know who backs it, but you can self-host it, run it on a local thing, or you could run it using a GPU. You don't have to trust them. You can build your own. They explained how they trained it, they explained why it was cheaper in great detail. I've spoken to multiple experts who all say the same thing, which is, "Oh, oh, OpenAI."
Brooke Gladstone: Now China and America are in a hegemonic battle of generative AI, and it seems that this DeepSeek tech has upended our assumptions of how all this was going to go. You say that there was never really any competition among American AI companies.
Ed Zitron: Yes, that is the long and short of it. This situation should cause unfathomable shame within Silicon Valley. What happened was Anthropic and OpenAI have been in friendly competition, doing kind of similar things in different ways, but they're all backed by the hyperscalers. OpenAI, principally funded by Microsoft, running on Microsoft servers almost entirely until very recently, but paying discounted rates and still losing money. They lost $5 billion last year. They're probably on course to lose more this year.
Brooke Gladstone: OpenAI?
Ed Zitron: Yes. That's after $3.7 billion of revenue. Anthropic, I think they lost $2.7 billion last year. Google, Amazon, they back Anthropic. They just put more money in, and you'd think with all of that loss they would be chomping at the bit to make their own efficient models, right? What if I told you they didn't have to? They all had these weird blank-check policies, and as for the venture capitalists backing the competition, whatever that was, there's nothing they could do, because based on the narrative that was built, these companies needed all this money and all of these GPUs, as that was the only way to build these large language models.
So why would any of them ever build something more efficient? The real damage that DeepSeek's done is they've proven that America doesn't really want to innovate. America doesn't compete. There is no AI arms race. There is no real killer app to any of this. ChatGPT has 200 million weekly users. People say that's a sign of something. Yes, that's what happens when literally every news outlet, all the time, for two years, has been saying that ChatGPT is the biggest thing without sitting down and saying, "What does this bloody thing do and why does it matter?" "Oh, great. It helps me cheat at my college papers."
Really, the biggest narrative here was that there was no cheaper way of doing this at the scale they needed, other than giving Sam Altman and Dario Amodei, the CEO of Anthropic, more money. All they had to do was just continue making egregious promises because they didn't think anyone would dare bring the price down. I think it's good that this happened because the jig is up.
Brooke Gladstone: You've been saying since early 2024 that generative AI had already peaked. Why did you think that then, and why do you think so now?
Ed Zitron: The reasoning models, what they do, just to explain, is they break down a prompt. If you say, "Give me a list of all the state capitals that have the letter R in them," it says, okay, what are the states in America? Which states have this? Then it goes and checks its work. These models are probabilistic. They're remarkably accurate. They would guess that if you say, "I need a poem about Garfield with a gun," it would need to include Garfield and a gun and perhaps a kind of gun, and it would guess what the next word was.
Now, to do this and train these models required a bunch of money and a bunch of GPUs, but also a bunch of data scraping the entire Internet, to be precise. To keep doing what they are doing, they would need four times the available information of the entire Internet, if not more.
Brooke Gladstone: Why?
Ed Zitron: Because it takes that much. Training these models requires just shoving the data in there and then helping the model understand things, but to get back to one thing: these models are probabilistic. There is no fixing the hallucination problem. Hallucinations are when these models present information that's false as authoritatively true.
Brooke Gladstone: I thought they'd been getting much better at catching that stuff.
Ed Zitron: No, they haven't. The whole thing with the reasoning models is that by checking their work, they got slightly better. These models were always going to peter out because they'd run out of training data, but also, there's only so much you can do with a probabilistic model. They don't have thoughts. They guess the next thing coming, and they're pretty good at it, but pretty good is actually nowhere near enough.
When you think of what makes a software boom, a software boom is usually based on mass consumer adoption and mass enterprise-level adoption. Big companies, financial services, healthcare, they have very low tolerance for mistakes. If you make a mistake with your AI-- well, I'm not sure if you remember what happened with Knight Capital. That was with an algorithm. They lost hundreds of millions of dollars and destroyed themselves because of one little mistake.
We don't even know how these things fully function, how they make their decisions, but we do know they make mistakes because they don't know anything. ChatGPT does not know. Even if you say, "Give me a list of every American state," and it gets it right every time--
Brooke Gladstone: It's just pattern recognition.
Ed Zitron: Yes. It doesn't know what a state is. It doesn't know what America is. It is effectively saying, what is the most likely answer to this? Remarkably accurate probability, but remarkably accurate is nowhere near as accurate as we would need it to be. As a result, there's only so much they could have done with it.
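[Editor's note: a toy illustration of the point being made here. The sketch below is a simple bigram word model, vastly simpler than any real large language model, which uses neural networks over subword tokens, but it shows how a system can produce correct-looking answers purely by picking the statistically most likely continuation, without knowing what any of the words mean. The corpus and function names are invented for this example.]

```python
from collections import Counter, defaultdict

# Count which word most often follows each word in a tiny corpus, then
# "generate" by always picking the most frequent follower. The model has
# no concept of what a state or a city is; it only tracks frequencies.
corpus = "new york is a state . new york is a city . texas is a state ."
words = corpus.split()

follows = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most frequent continuation seen so far.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("york"))  # -> "is"
print(most_likely_next("a"))     # -> "state" (seen twice vs. "city" once)
```

The second call looks like the model "knows" that "a state" is more typical than "a city", but it is only reporting a frequency count, which is the distinction being drawn in the conversation above.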
Brooke Gladstone: You wrote, what if artificial intelligence isn't actually capable of doing much more than what we're seeing today? You said you believe that a large part of the AI boom is just hot air pumped through a combination of executive BSing and the media gladly imagining what AI could do rather than focus on what it's actually doing. You said that AI in this country was just throwing a lot of money at the wall, seeing if some new idea would actually emerge.
Ed Zitron: Silicon Valley has not really had a full depression. We may think the dot-com bubble was bad, but the dot-com bubble was that they discovered e-commerce and then ran it in the worst way possible. This is so different because if this had not had the hype cycle it had, we probably would have ended up with an American DeepSeek in five years. The way Silicon Valley has classically worked is you give a bunch of money to some smart people and then money comes out in the end.
In the case of previous hype cycles that worked, like cloud computing and smartphones, there were very obvious places to go. Jim Covello over at Goldman Sachs famously said, "Well, no one believed in the smartphone." Wrong. There were thousands of presentations that led fairly precisely to that. With the AI hype, it was a big media storm. Suddenly, Microsoft's entire decision-making behind this was they saw ChatGPT and went, "God damn, we need that in Bing. Buy every GPU you can find." It is insane that multitrillion-dollar market cap companies work like this.
They went, "Throw a bunch of money at it." Buying more, doing more, growing everything always works. I call it the rot economy, the growth at all-cost mindset. Silicon Valley over the years has leaned towards just growth ideas. Except, they've chased out all the real innovators. To your original question, they didn't know what they were going to do. They thought that ChatGPT would magically become profitable. When that didn't work, they went, "Well, what if we made it more powerful and bigger? We can get more funding that way."
Then they kept running up against the training data wall and the diminishing returns, so they went, "Agents. Agents sound good." Now, agents is an amazing marketing term. What it's meant to sound like is a thing that goes and does a thing for you.
Brooke Gladstone: Just trying to get a plane reservation is a nightmare. If I could outsource that, I certainly would.
Ed Zitron: When you actually look at the products, like OpenAI's Operator, they suck. They're crap. They don't work. Even now the media is still like, "Well, theoretically this could work." They can't. Large language models are not built for distinct tasks. They don't do things. They are language models. If you are going to make an agent work, you have to define rules for, effectively, the real world, which even real AI, I mean real AI, not generative AI, which isn't even autonomous, has found quite difficult.
Brooke Gladstone: Coming up, more from Ed Zitron. If you thought the gloves were off in what you just heard, just wait for part two. This is On the Media.
[music]
Brooke Gladstone: This is On the Media. I'm Brooke Gladstone. I'm having this long conversation about the intrusion of a new Chinese chatbot into America's AI market with Ed Zitron, host of the Better Offline podcast and writer of the newsletter, Where's Your Ed At. So far, Ed has eviscerated Silicon Valley's business model and said that its AI is good for practically nothing. So I countered. I said, hey, it can write job application letters, it can write term papers, it can write poetry.
Ed Zitron: This is actually an important distinction. AI as a term has been around a while. Decades, actually. The AI that you see in, say, a Waymo cab, an autonomous car, works pretty well. Nothing to do with ChatGPT. What Sam Altman and his ilk have done is attached a thing to the side of another thing and said it's the same. ChatGPT is not artificial intelligence. It is not intelligent. I guess it's artificial intelligence in that it pretends to be, but isn't. The umbrella term of AI is old and has done useful things. AlphaFold.
Brooke Gladstone: AI that goes through an enormous amount of data that shows us how proteins interact and maybe helps us develop cures for diseases.
Ed Zitron: Right, and that is not generative AI, just to be clear. What Sam Altman does is he goes and talks about artificial intelligence, and most people don't know a ton about tech, which is fine, but Altman has taken advantage of that, taken advantage of people in the media and the executives of public companies who do not know the difference, and said, "ChatGPT, that's all AI."
Brooke Gladstone: Sam Altman went before Congress and said, "We need you to help us help you so that AI doesn't take over the world."
Ed Zitron: They love talking about AI safety. You want to know what the actual AI safety story is? Boiling lakes, stealing from millions of people. The massive energy requirements that are damaging our power grid, that is a safety issue. The safety issue Sam Altman's talking about is what if ChatGPT wakes up? It's marketing. Cynical, despicable marketing from a carnival barker.
Brooke Gladstone: They always say, "Well, it never is going to generate a profit at the beginning. Even Amazon was not making money when it started."
Ed Zitron: They love that example. They say, "Oh, Amazon wasn't profitable at first." They were building the infrastructure and they had an actual plan. They didn't just sit around being like, "At some point, something's going to happen, so we need to shove money into it." They love to bring up Uber as well. Now, Uber runs on labor abuses, kind of a dog of a company, but you can at least explain to a person why they might use it. "I need to go somewhere or I need something brought to me using an app," that's a business.
OpenAI by comparison, what is the killer app exactly? What is the thing that you need? What is the iPhone moment? Back then, to get your voicemail, you actually had to call a number and hit a button, but you could suddenly look at your voicemail and scroll through it. There were obvious things. What I'm describing here are real problems being immediately solved. You'll notice that people don't really have immediate problems that they're solving with ChatGPT.
Brooke Gladstone: Okay. Generative AI is incredibly unprofitable. $1 earned for every $2.25 spent. Something like that?
Ed Zitron: Yes. $2.35 from my last estimates.
Brooke Gladstone: OpenAI's board last year said they needed even more capital than they had imagined. The CEO, Sam Altman, recently said that they're losing money on their plan to make money, which is the ChatGPT Pro plan. What is that?
Ed Zitron: Their premium subscriptions have limits to the amount that you can use them. Well, their $200-a-month ChatGPT Pro subscription allows you to use their models as much as you'd like, and they're still losing money on it. The funny thing is, the biggest selling point is their reasoning models, o1 and o3. o3, by the way, has yet to prove itself to actually be any different other than being slightly better at the benchmarks and also costing $1,000 for 10 minutes. It's insane.
The reason they're losing that money is because the way they've built their models is incredibly inefficient. Now that DeepSeek's come along, it's not really obvious why anyone would pay for ChatGPT Pro at all. The fact that they can't make money on a 200-buck-a-month subscription, that's the kind of thing that you should get fired from a company for. They should boot you out the door.
Brooke Gladstone: How does DeepSeek make money?
Ed Zitron: Well, that's the thing, we don't know. It's important to separate DeepSeek the company, which is an outgrowth of a Chinese hedge fund, we don't know who subsidizes them, from their models, which are open source, can be installed by anyone, and you can build models like them. At this point, one has to wonder how long it takes for someone to just release a cheaper ChatGPT Plus that does most of the same things.
Brooke Gladstone: So what's the end game here?
Ed Zitron: The end game was Microsoft saw ChatGPT was big and went, "Damn, we got to make sure that's in Bing because it seems really smart." Google released Gemini because Microsoft invested in OpenAI, and they wanted to do the same thing. Meta added a large language model. They created an open-source one themselves, Llama. Everyone just does the same thing in this industry. Sundar Pichai went up at Google I/O, the developer conference, and told this story about how an agent would help you return your shoes, and how it would autonomously go into your email, get you the return thing, just hand it to you. It'd be amazing. Then ended it by saying, "This is totally theoretical."
All the king's horses and all the king's men don't seem to be able to get the Valley to spit out one meaningful mass market, useful product that actually changes the world, other than damaging our power grid, stealing from millions of people, and boiling lakes. Everyone is directionlessly following this. They're like, "We're all doing large language models, right? Just like they did with the metaverse." Now, Google did stay out of the metaverse, by the way. Microsoft bought bloody Activision and wrote metaverse on the side.
Mark Zuckerberg lost like $45 billion on the metaverse. Putting aside the hyperscalers, there were like hundreds of startups that raised billions of dollars for the metaverse, because everyone's just following each other. The Valley's despicable right now.
Brooke Gladstone: The Metaverse was Zuckerberg's effort to create some sort of multimedia space where people could live or something, right?
Ed Zitron: He was claiming it would be the next internet, but really he needed a new growth market. The metaverse was actually a symptom of the larger problem, the rot economy I talk about, which is everything must grow forever. Tech companies are usually good at it, except they've run out of growth markets, they've run out of big ideas. That's the reason you keep seeing the tech industry jumping from thing to thing, things that, when you look at them as a regular person, make you go, "This doesn't seem very useful."
What's happening is that they don't have any idea what they're doing and they need something new to grow, because if at any point the market says, "Wait, you're not going to grow forever?" well, what happened to Nvidia happens. Nvidia has become one of the biggest stocks, with some ridiculous multi-hundred-percent growth in the last year. It's crazy. The market is throwing a hissy fit because, guess what? The only thing that grows forever is cancer.
Brooke Gladstone: You make a great case that this is not a competitive atmosphere here in the US, so if something does happen, it'll probably happen somewhere else.
Ed Zitron: Yes, but DeepSeek has proven that this can be done cheaper and more efficiently. They've not proven there's a new business model, they've not made any new features. There's an argument right now, a very annoying argument, which says, "Well, as the price of a resource comes down, the use of it increases." That's not what's going to happen here. No one has been not using generative AI because it was too expensive. In fact, these companies have burned billions of dollars giving it away.
A third of venture funding in 2024 went into AI. These companies have not been poor, and now we're in this weird situation where we might have to accept, "Oh, I don't know, maybe this isn't a multitrillion-dollar business." Had they treated it as a smaller one, they would have gone about it a completely different way. They never would have put billions of dollars into GPUs. They might have put in a few billion and then focused, the way DeepSeek did: we only have so many GPUs and they only do so much, so we will do more with them. No, American startups became fat and happy. Even when you put that aside, there was never a business model with this. Come on, give me real innovation.
Brooke Gladstone: If this really is all early signs of the AI bubble bursting, what are the human ramifications of this industry collapsing?
Ed Zitron: I'm not scared of the bubble bursting. I'm scared of what happens afterwards. Once it becomes obvious that they can't come up with the next thing that will give them 10%, 15%, 20% revenue growth year over year, the markets are going to rethink how they value tech stocks. When that happens, you're going to see them get pulverized. I don't think every company is shutting down, but I think Meta is dead in less than 10 years just because they're a bad company.
There are multiple tech companies that just lose money, but because they grow revenue, that's fine. What happens if the growth story dies? I hope I'm wrong on that one. The human cost will be horrible.
Brooke Gladstone: Even outside of the depression that might be experienced in the tech world, there are so many pension funds out there that may have investments. You're painting a picture of the housing crash of 2008.
Ed Zitron: I actually am. I wrote a piece about that fairly recently, about how OpenAI is the next Bear Stearns. If you look back at that time, there were people talking about how there was nothing to worry about, using similar rhetoric: "Just wait and see. Things are always good, right?" It's horrifying because it doesn't have to be like this. These companies could be sustainable. They could have modest 2% to 3% growth. Google as a company basically prints money. They could just run a stable, good Google that people like and make billions and billions and billions of dollars. They would be fine, but no, they must grow. We, the users, our planet, and our economy too, we are the victims of the rot economy, and they are the victors. These men are so rich. Sam Altman, if OpenAI collapses, he'll be fine. He's a multi-billionaire with a $5 million car.
Brooke Gladstone: There's a $5 million car?
Ed Zitron: Yes, he has like a Koenigsegg Regera. Nasty little man.
Brooke Gladstone: Ed, thank you very much.
Ed Zitron: It's been such a pleasure. I loved it.
Brooke Gladstone: Ed Zitron is host of the Better Offline podcast and writer of the newsletter, Where’s your Ed at. We reached out to OpenAI for comment, but we didn't hear back. That's the show. On the Media is produced by Molly Rosen, Rebecca Clark-Callender, Candice Wang, and Katerina Barton. Our technical director is Jennifer Munson. Our engineer is Brendan Dalton. Eloise Blondiau is our senior producer and our executive producer is Katya Rogers. On the Media is a production of WNYC Studios. Micah Loewinger will be back next week. I'm Brooke Gladstone.
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.