Brooke Talks AI With Ed Zitron
![](https://media.wnyc.org/i/800/0/l/85/2025/01/AP25028279819813.jpg)
( Andy Wong / AP Photo )
Brooke Gladstone: This is On The Media's midweek podcast, and I'm Brooke Gladstone. Lately the stock market's been rocking and rolling, but this week the AI industry especially has felt the ground shake with the intrusion of a new kid on their block: a Chinese model called DeepSeek, from a company founded by a Chinese billionaire in 2023, which has broken through with technology comparable to that of the American behemoths. I'm talking about OpenAI, Google DeepMind, Anthropic, Meta's Llama AI, and Elon Musk's xAI, but ever so much cheaper to make and mind-bogglingly accessible to all. Ed Zitron, host of the Better Offline podcast and writer of the newsletter Where's Your Ed At, has been warning that the AI bubble has been primed to burst for some time now. Ed, thanks for joining us.
Ed Zitron: My pleasure.
Brooke Gladstone: On Monday, the stocks of many major tech firms took a steep dive, including Alphabet, Microsoft, and the AI chip maker Nvidia, which I guess lost almost $600 billion in value, reportedly the biggest one-day loss in US history, though it's recovered a little. It all happened when news broke about a new, relatively open Chinese AI model called DeepSeek-R1. It was like a horror movie jump scare for the AI industry. Why?
Ed Zitron: It's important to know how little the AI bubble has been built on. If you look at these companies, Anthropic, OpenAI and then the competitive ones from Amazon, Google with Gemini for example, there's not actually been much behind them. It's always been this idea that America has built these big, beautiful, large language models that require all of this money based on their narrative. Tons of GPUs, the most expensive GPUs, the biggest data centers, because the only way to do this was to just cram more training data into them. What DeepSeek did, they found a way to build similar models to GPT-4o, the underpinning technology of ChatGPT and o1, the reasoning model that OpenAI had built, and make them much, much, much cheaper. Then they published all the research as to how they did and open-sourced the models, meaning that the source code of these models is there for anyone to basically adapt.
Brooke Gladstone: Now, a GPU, it's an electronic circuit that can perform mathematical calculations. Is this the chip that Nvidia makes?
Ed Zitron: In a manner of speaking. GPUs, graphics processing units, they're used for everything from hardcore video editing to playing video games and such like that. Well, computer games in the case of a computer. Nvidia has a piece of software called CUDA. All you need to know about that is that GPUs traditionally were used for graphics rendering; what CUDA allowed, and it's taken them 15 years or more to do it, is for you to build software that would run on the GPUs. What Nvidia started doing was making these very, very, very powerful GPUs. When I say a GPU in a computer, that's a card that goes inside. The GPUs that Nvidia sells are these vast rack-mounted units. They're huge and they require incredible amounts of cooling. They run hot.
The way that generative AI, so the technology under ChatGPT runs, it runs on these giant GPUs and it runs them hot. They are extremely computationally expensive and power-consuming, which is why you've heard so many bad stories about, well, the environmental damage caused by these companies.
Brooke Gladstone: DeepSeek didn't have access to Nvidia chips, or at least not unlimited access, partly because of the Biden sanctions and other things. They had to find other ways to cut down on the cost.
Ed Zitron: The sanctions made it so that what could be sold to Chinese companies was limited. Now, before the sanctions came in, DeepSeek, which grew out of a hedge fund called High-Flyer, and that also should deeply embarrass the American tech industry, that just a random outgrowth of a hedge fund just lapped them. Nevertheless, what they did was stockpile these graphics processing units before the sanctions came in, but they'd also used ones that have less memory bandwidth. Nvidia could only sell handicapped ones to China, so they had a combination of these chips.
They found ways around both training the models, effectively feeding them data to teach them how to do things, and also the inference, which is when you write a prompt into ChatGPT, it infers the meaning of that and then spits something out, a picture, a ream of text and so on and so forth. These constraints meant that DeepSeek had to get creative, and they did. They got very creative. Just to be clear, all of this is in their papers. People are probably already working on recreating this. People are running the models themselves now. It's all for real.
Brooke Gladstone: They not only are running them, they can also modify them.
Ed Zitron: Exactly, and they can build things on top of them. Now, an important detail here is that one of the big hubbubs is that DeepSeek trained their V3 model, competitive with GPT-4o, the underlying technology of ChatGPT, and they trained it for $5.5 million, versus GPT-4o, the latest model, which cost $100 million or more, according to Sam Altman. They kind of prove you don't really need to buy the latest and greatest GPUs. In fact, you don't even need as many of them as you thought, because they only used 2,048 of them, as opposed to the hundreds of thousands that hyperscalers have.
The reason that they're building all of these data centers is because they need more GPUs, more space to put the GPUs, and ways to cool them. Indeed, Nvidia's latest Blackwell chips ship in a 3,000-pound server. It's truly, genuinely really cool to look at. It's just the result that sucks.
Brooke Gladstone: Didn't ChatGPT or the company that made it, OpenAI, just create, as you say, this new reasoning program a couple of months ago called o1? They say it can answer some of the most challenging math questions and it seemed to have put the company once again at the top of the heap. Are we saying that DeepSeek can do similar things?
Ed Zitron: Yes. Now, when you say it can do these very complex things, this is another bit of kayfabe from this industry.
Brooke Gladstone: What is kayfabe?
Ed Zitron: Kayfabe from wrestling, specifically, where you pretend something's real and serious when it isn't really. The benchmarking tests for large language models in general are extremely rigged. You can train the models to handle them. They're not solving actual use cases. When o1 came out, the media, who should be ashamed of themselves, fell over themselves to say how revolutionary this was without ever asking, "What does this actually do? What can I build with it?" And on top of that, o1 is unfathomably expensive. When it came out, the large discussion was, "Wow, only OpenAI can do this. Only OpenAI is capable of making something like this."
You had similar things with Anthropic, other companies, but OpenAI, top of the pile. That was why they were able to charge such an incredible amount and why they were able to raise $6 billion. Except now, the "Sneaky" Chinese, and I mean that sarcastically, this is just good engineering. They managed to come along and say, "Not only can we do this 30 times cheaper," and to be clear, that number is based on DeepSeek posting it, so we don't know who's subsidizing that, but nevertheless, not only can they do it cheaper, but they can do it.
On top of that, they open-sourced the whole damn thing. Now anyone can build their own reasoning model using this, or they can reverse engineer it and build their own reasoning model that will run cheaper and on their own servers, kind of removing the need to deal with OpenAI entirely. The developers that I have talked to are extremely impressed with o1, but they're also extremely impressed with DeepSeek's R1. They're like, "Why would I pay this insane amount of money when I could not?" Eventually, you're going to find cloud companies in America that will run these models, and at that point, where's OpenAI's moat? The answer is they don't have one, just like the rest of them.
Brooke Gladstone: DeepSeek isn't entirely open because it didn't say how they trained their AI.
Ed Zitron: Well, they didn't share their training data. They said how they trained it and they were actually extremely detailed.
Brooke Gladstone: The question is, can we trust their numbers?
Ed Zitron: We don't have to. They published the source code, the research behind everything, and the last week has been crazy. You've seen so many people using it and the results speak for themselves. The model works really well. It is cheaper. There are versions of R1 that you can run on a MacBook. This is potentially apocalyptic for OpenAI because even if you don't trust DeepSeek, even if you say, "I do not trust their hosted model. The version that DeepSeek sells access to, I don't trust it." Which is fair. We don't know where it's run and we don't know who backs it, but you can self-host it, you can run it on a local thing, or you could run it using a GPU.
These things can be done safely. You don't have to trust them. You can build your own. They explained how they trained it, they explained why it was cheaper in great detail. I've spoken to multiple experts who all say the same thing, which is, "Oh, oh, OpenAI."
Brooke Gladstone: Now China and America are in an AI race. A hegemonic battle of generative AI, and it seems that this DeepSeek tech has upended our assumptions of how all this was going to go. Most of our assumptions, not your assumptions, because I've been reading you. You say that there was never really any competition among American AI companies.
Ed Zitron: Yes, that is the long and short of it. This situation should cause unfathomable shame throughout the tech media, but really within Silicon Valley. Did all of them just sit around twiddling their thumbs? No. What happened was Anthropic and OpenAI have been in friendly competition. They've both been doing kind of similar things in different ways, but they're all backed by the hyperscalers.
OpenAI, principally funded by Microsoft, running on Microsoft servers almost entirely until very recently, but paying discounted rates to Microsoft and still losing money. They lost $5 billion last year. They're probably on course to lose more this year.
Brooke Gladstone: OpenAI?
Ed Zitron: Yes. That's after $3.7 billion of revenue. Anthropic, I think they lost $2.7 billion last year. You'd think, with all of that loss, they would be chomping at the bit to make their own efficient models. What if I told you they didn't have to? What if I told you that there was no real competition between them? Google, Amazon, they back Anthropic, they just put more money in. They all had these weird blank check policies, and the venture capitalists backing the competition, whatever that was, there's nothing they could do because based on the narrative that was built, these companies needed all this money and all of these GPUs, as that was the only way to build these large language models.
Why would any of them ever build something more efficient when they could all keep doing the same thing, this big, nasty, lossy, wasteful thing? They were always going to get more money because this is the only way we can fight China when China builds the thing. The real damage that DeepSeek's done is they've proven that America doesn't really want to innovate. America doesn't compete. There is no AI arms race. There is no real killer app to any of this. ChatGPT is only popular because AI is the new thing. ChatGPT has 200 million weekly users. People say that's a sign of something.
Yes, that's what happens when literally every news outlet, all the time, for two years, has been saying that ChatGPT is the biggest thing without sitting down and saying, "What does this bloody thing do and why does it matter?" "Oh, great. It helps me cheat at my college papers. It can hallucinate stuff. Oh, great." Now, on top of this, and really the biggest narrative shift here is that everything has been predicated on the fact that we had to spend this money, that there was no cheaper way of doing this at the scale they needed to. There was nothing we could do other than give Sam Altman more money and Dario Amodei more money, that's the CEO of Anthropic.
All they had to do was just continue making egregious promises because they didn't think anyone would dare compete, anyone would dare bring the price down. I don't think OpenAI believed anyone could do reasoning like this. I think it's good that this happened. I'm glad it happened because the jig is up.
Brooke Gladstone: Bloomberg reported last month that the big names in AI tech, OpenAI, Google, Anthropic, are struggling to build more advanced AI. You've been saying since early 2024, that generative AI had already peaked. Why did you think that then, and why do you think so now?
Ed Zitron: The reasoning models, what they do, just to explain, is they break down a prompt. If you say, "Give me a list of all the state capitals that have the letter R in them," it will then-- there's a whole different technical thing I won't go into there. You can see it thinking, and I must be clear, these things aren't thinking, but it thinks through the steps and says, okay, what are the states in America? What states have this? Then it goes and checks its work. Nevertheless, these models are probabilistic. They're remarkably accurate in the sense that they would guess that if you say, "I need a poem about Garfield with a gun," it would need to include Garfield and a gun and perhaps a kind of gun, and it would guess what the next word was.
Now, to do this and train these models required a bunch of money and a bunch of GPUs, but also a bunch of data, scraping the entire Internet, to be precise. Not just the common crawl, which is a large file of a bunch of internet sites, but basically anything they could find. There was a rumor that OpenAI was literally transcribing every video from YouTube and then using the text from that.
Brooke Gladstone: My husband has written a lot of books. He's on that list. My daughter has written a couple of books. She's on that list. I wrote a comic book and they used that too.
Ed Zitron: Jesus, it's just disgusting.
Brooke Gladstone: A minuscule speck in the universe of stuff that they used. I didn't feel particularly special, but I'm just saying I know how universal their use of this stuff was.
Ed Zitron: Here's the crazy thing though. Even with all of your stuff and everything on the internet, to keep doing what they are doing, they would need four times the available information of the entire internet, if not more.
Brooke Gladstone: Why?
Ed Zitron: Because it takes that much. To train these models requires you just shoving it in there and then helping it understand things, but to get back to one thing, these models are probabilistic. There is no fixing the hallucination problem. Hallucinations are when these models present information that's false as authoritatively true.
Brooke Gladstone: I thought they'd been getting much better at catching that stuff.
Ed Zitron: No, they haven't. The whole thing with the reasoning models is that by checking their work, they got slightly better. I think people forget how amazing the human brain is. The mistakes we make are just fundamentally different. We don't make mistakes because each thing we're doing is a guess. We make mistakes because we're falling apart constantly. We're all dying and our bodies are hell, at least in my case. The thing I'm saying is these models were always going to peter out because they'd run out of training data, but also, there's only so much you can do with a probabilistic model. They don't have thoughts.
They are probabilistic. They guess the next thing coming, and they're pretty good at it, but pretty good is actually nowhere near enough. When you think of what makes a software boom, a software boom is usually based on mass consumer adoption and mass enterprise-level adoption. Now, the enterprise referring to big companies of like 10,000, 50,000, 100,000 people, but down to like 1,000. Nevertheless, financial services, healthcare, all of these industries, they have very low tolerance for mistakes. If you make a mistake with your AI-- well, I'm not sure if you remember what happened with Knight Capital. That was with an algorithm. They lost hundreds of millions of dollars and destroyed themselves because of one little mistake.
We don't even know how these things fully function, how they make their decisions, but we do know they make mistakes because they don't know anything. They do not have knowledge.
Brooke Gladstone: They are not conscious. I get it.
Ed Zitron: No, no, no, no. Not just not conscious. They don't know anything. ChatGPT does not know. Even if you say, "Give me a list of every American state" and it gets it right every time.
Brooke Gladstone: It's just pattern recognition.
Ed Zitron: Yes, it is effectively saying, "What is the most likely answer to this?" It doesn't know what a state is. It doesn't know what America is. It doesn't know anything. It is just remarkably accurate probability. Remarkably accurate is nowhere near as accurate as we would need it to be. As a result, there's only so much they could have done with it.
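[The "remarkably accurate probability" Ed describes can be sketched in a few lines. This is a toy illustration, not how any production model works: the vocabulary and probabilities below are invented, whereas a real model scores tens of thousands of tokens with a neural network. -Ed.]

```python
# Toy sketch of next-token prediction: the model assigns a probability
# to every candidate next word and emits the most likely one.
# The words and numbers here are made up for illustration.

def most_likely_next(context: str, probs: dict[str, float]) -> str:
    """Pick the highest-probability continuation of `context`."""
    return max(probs, key=probs.get)

# Hypothetical distribution after the prompt "The capital of France is"
next_word_probs = {
    "Paris": 0.92,   # overwhelmingly likely, so it looks like "knowledge"
    "Lyon": 0.05,    # plausible-but-wrong continuations never disappear,
    "London": 0.03,  # which is why hallucinations can't be fully engineered out
}

print(most_likely_next("The capital of France is", next_word_probs))  # -> Paris
```

The point of the sketch: even when the top guess is right 92% of the time, the wrong continuations still carry probability mass. The model isn't retrieving a fact about France; it's ranking guesses.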
Brooke Gladstone: You wrote, "What if artificial intelligence isn't actually capable of doing much more than what we're seeing today? What if there's no clear timeline when it'll be able to do more? What if this entire hype cycle has been built, goosed by a compliant media to take these people at their word?" You said you believe that a large part of the AI boom is just hot air pumped through a combination of executive BSing and the media gladly imagining what AI could do rather than focus on what it's actually doing.
You said that AI in this country was just throwing a lot of money at the wall, seeing if some new idea would actually emerge. Are we talking about agents that can actually do things for you in a predictable way? Are we talking about God? What is the new idea that they were hoping would emerge from throwing billions at this project?
Ed Zitron: Silicon Valley has not really had a full depression. We may think the dot-com bubble was bad, but the dot-com bubble was they discovered e-commerce and then ran it in the worst way possible. This is so different, because if this had not had the hype cycle it had, we probably would have ended up with an American DeepSeek in five years. DeepSeek, the way it is trained, uses synthetic data, which makes it very good at things with actual answers, things like coding and math. The way Silicon Valley has classically worked is you give a bunch of money to some smart people and then money comes out the end.
In the case of previous hype cycles that worked, like cloud computing and smartphones, there were very obvious places to go. Jim Covello over at Goldman Sachs famously noted that one of the responses to criticism of generative AI was to say, "Well, no one believed in the smartphone." Wrong. There were thousands of presentations that led fairly precisely to that. With the AI hype, it was a big media storm. Suddenly, Microsoft's entire decision-making behind this was they saw ChatGPT and went, "God damn, we need that in Bing. Buy every GPU you can find." I'm telling the truth, it is insane that multitrillion-dollar market cap companies work like this.
Nevertheless, all of these companies, they went, "Well, this is the next big thing, throw a bunch of money at it." That's worked before. Buying more, doing more, growing everything always works. I call it the rot economy, the growth at all-cost mindset. Silicon Valley over the years has leaned towards just growth ideas. What will grow, what can we sell more of? Except, they've chased out all the real innovators. To your original question, they didn't know what they were going to do. They thought that ChatGPT would magically become profitable. When that didn't work, they went, "Well, what if we made it more powerful and bigger? We can get more funding that way," so they did that.
Then they kept running up against the training data war and the diminishing returns, they went, "Agents. Agents sound good. Now agents is an amazing marketing term." What it's meant to sound like is a thing that goes and does a thing for you.
Brooke Gladstone: Just trying to get a plane reservation is a nightmare. If I could outsource that, I certainly would.
Ed Zitron: When you actually look at the products, like OpenAI's Operator, they suck. They're crap. They don't work. They don't work. Even now the media is still like, "Well, theoretically this could work." They can't. Large language models are not built for distinct tasks. They don't do things. They are language models. If you are going to make an agent work, you have to define rules, and rules for effectively the real world, which AI, and I mean real AI, not generative AI, has proven is quite difficult, and that's not even getting to autonomy.
Brooke Gladstone: Give me an example of where we are right now with AI. You're saying it can't do anything. It can write term papers.
Ed Zitron: This is actually an important distinction. AI as a term is one thing, and it has been around a while, decades actually. AI that you see in, like, a Waymo cab, an autonomous car, it works pretty well. Then you look at Tesla and you go, "It works less well," because the problem is not being able to drive along a road. It's the edge cases. Training for edge cases is so difficult, like the very rare cases that nevertheless end up with someone dying. Now, AI is an umbrella term, and what you see in a Waymo, for example, has nothing to do with ChatGPT. ChatGPT is generative AI, transformer-based models.
What Sam Altman and his ilk have done is attached a thing to the side of another thing and said it's the same because they want the markets and journalists to associate them so that they don't have to build real stuff.
Brooke Gladstone: When you say they've attached a thing to another thing, give me some sort of concrete way to visualize what that means.
Ed Zitron: ChatGPT is not artificial intelligence. It is not intelligent. I guess it's artificial intelligence in that it pretends to be, but isn't. The umbrella term of AI is old and has done useful things, like AlphaFold protein folding, these are things that actually exist and help with diseases.
Brooke Gladstone: These are things that go through an enormous amount of data to find proteins that can enable us perhaps to develop medicines to combat diseases.
Ed Zitron: Right, and that is not generative AI, just to be clear. The versions of Siri before Apple Intelligence, not generative AI, a different kind of AI. Waymo cabs, AI. Algorithms, you could refer to them as AI. Those have been around a while. Generative AI is separate. What Sam Altman does is he goes and talks about artificial intelligence. Most people don't know a ton about tech, which is fine, but Altman has taken advantage of that, and particularly taken advantage of people in the media and the executives of public companies who do not know the difference between any of these things, and said, "ChatGPT, that's all AI."
Brooke Gladstone: Sam Altman went before Congress and said, "We need you to help us help you so that AI doesn't take over the world."
Ed Zitron: Oh, it's so funny when he says that as well. They love talking about AI safety. You want to know what the actual AI safety story is? Boiling lakes, stealing from millions of people, burning energy. The massive energy requirements that are damaging our power grid, that is a safety issue. The safety issue Sam Altman's talking about is what if ChatGPT wakes up and does this? It's marketing. Cynical, despicable marketing from a carnival barker. Sam Altman is a liar and it's disgraceful how far he's gone.
Brooke Gladstone: Let's talk for a moment about the environmental impact. It eats a huge amount of energy. Apparently, according to the International Energy Agency, a request made through ChatGPT consumes 10 times the electricity of a Google search. To cool all of those incredibly hot GPUs requires fresh water, in a world where a quarter of humanity already lacks access to clean water. It ends up creating a lot of toxic electronic waste. Its power is often generated by burning fossil fuel. The cost-benefit analysis here doesn't look that great.
Ed Zitron: It isn't.
Brooke Gladstone: You say it's incredibly unprofitable anyway. They always say, "Well, it never is going to generate a profit at the beginning. Even Amazon was not making money when it started."
Ed Zitron: They love that example. They love to bring up Uber as well. Now, Uber runs on labor abuse. Also Uber, in their worst year, lost around $6 billion. You want to know what year that was? It was 2020 when their business stopped working because no one went outside. Now, they say, "Oh, Amazon wasn't profitable at first." They were building the infrastructure and they had an actual plan. They didn't just sit around being like, "At some point something's going to happen, so we need to shove money into it." With Uber, for example, yes, kind of a dog of a company. Horrible company.
Nevertheless, you look at what Uber does and you can at least explain to a person why they might use it. "I need to go somewhere or I need something brought to me using an app," that's a business with a thing. Even then, it required so much labor abuse and still does. OpenAI by comparison, what is the killer app exactly? What is the thing that you need to do with OpenAI? What is the iPhone moment? Back then, to get your voicemail, you actually had to call a number and hit a button, but you could suddenly look at your voicemail and scroll through it. You could skip the beginning, text people in a natural way versus using the clunky buttons of a Nokia 3210. There were obvious things.
The earliest days of Uber in San Francisco, it was really difficult to get a cab anywhere. What I'm describing here are real problems being immediately solved. You'll notice that people don't really have immediate problems that they're solving with ChatGPT, other than Sam Altman solving the problem of how does he get worth a few more billion.
Brooke Gladstone: Okay. Generative AI is incredibly unprofitable. $1 earned for every $2.25 spent. Something like that?
Ed Zitron: Yes. $2.35 from my last estimates.
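[That $2.35 figure is consistent with the OpenAI numbers cited earlier in the interview: roughly $3.7 billion of revenue against a $5 billion annual loss. A quick back-of-the-envelope check, assuming loss is simply spend minus revenue. -Ed.]

```python
# Back-of-the-envelope check of the "$2.35 spent per $1 earned" estimate,
# using the figures mentioned earlier: ~$3.7B revenue, ~$5B loss.
revenue_bn = 3.7
loss_bn = 5.0

total_spend_bn = revenue_bn + loss_bn          # loss = spend - revenue
spent_per_dollar_earned = total_spend_bn / revenue_bn

print(f"${spent_per_dollar_earned:.2f} spent per $1 earned")  # -> $2.35 spent per $1 earned
```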
Brooke Gladstone: OpenAI's board last year said they needed even more capital than they had imagined. The CEO, Sam Altman, recently said that they're losing money on their plan to make money, which is the ChatGPT Pro plan. What is that?
Ed Zitron: This is where the funny stuff happens. OpenAI's premium subscriptions make up about-- it's like 73% of their revenue. The majority of their revenue does not come from people actually using their models to do stuff, which should tell you everything, because if most of their money doesn't come from people using their "Very useful" allegedly models, well, that means that they're either not charging enough or they're not that useful.
Brooke Gladstone: I think Altman said he wasn't charging enough.
Ed Zitron: He isn't charging enough, but their premium subscriptions have limits to the amount that you can use them. Well, their $200-a-month ChatGPT Pro subscription allows you to use their models as much as you'd like, and they're still losing money on it. The funny thing is, the biggest selling point is their reasoning models, o1 and o3. o3, by the way, is their new thing that is just throwing even more compute at the problem. It's yet to prove itself to actually be any different, other than being slightly better at the benchmarks and also costing $1,000 for 10 minutes. It's insane.
The reason they're losing that is because the way they've built their models is incredibly inefficient. Now that DeepSeek's come along, it's not really obvious why anyone would pay for ChatGPT Plus at all. But the fact that they can't make money on a $200-a-month subscription, that's the kind of thing that you should get fired from a company for. They should boot you out the door.
Brooke Gladstone: How does DeepSeek make money?
Ed Zitron: Well, that's the thing we don't know. It's important to separate DeepSeek the company, which is an outgrowth of a Chinese hedge fund. We don't know who subsidizes them.
Brooke Gladstone: Anybody can use their program for free.
Ed Zitron: Yes. They also released a consumer-focused app where anyone can use it for free, and that's 100% subsidized and we do not know how. Their models, which are open source, can be installed by anyone and you can build models like them. At this point, one has to wonder how long it takes for someone to just release a cheaper ChatGPT Plus that does most of the same things.
Brooke Gladstone: You described at the top that this bubble is going to burst, right?
Ed Zitron: Yes.
Brooke Gladstone: How do you know? Why is it inevitable?
Ed Zitron: I feel it in my soul. Nothing is inevitable. However, these models are not getting better. They are getting around the same in different ways.
Brooke Gladstone: Give me an example of that.
Ed Zitron: I mean they're only getting better at benchmarks; in their actual ability to do new stuff, nothing new has happened. Look at what happened with Operator, their so-called agent. Operator is OpenAI's. It doesn't work. It sucks. You can use it if you have the ChatGPT Pro subscription, for example.
Brooke Gladstone: Have you ever tried to use it?
Ed Zitron: Yes. It doesn't work very well. It sometimes does not understand what it's looking at and just stops. It's just really funny that this company is worth $150 billion. Every single time they release a product, there are no new capabilities. Operator, by comparison, is pretty exciting for OpenAI in the sense that it did something slightly different, and it also doesn't work. It's just this warmed-up crap every time with them.
Brooke Gladstone: I was just going to say, so what's the end game here?
Ed Zitron: The end game was Microsoft saw ChatGPT was big and went, "Damn, we got to make sure that's in Bing because it seems really smart." Google released Gemini because Microsoft invested in OpenAI, and they wanted to do the same thing. Meta added a large language model; they created an open-source one themselves, Llama. They did that because everyone else was doing GPT and transformer-based models and generative AI. Everyone just does the same thing in this industry. No one was thinking, "Can this actually do it?" They all tell the same lies. Sundar Pichai went up at Google I/O, the developer conference, and told this story about how an agent would help you return your shoes. Took a few minutes. He went through this thing talking about how it would autonomously go into your email, get you the return thing, just hand it to you. It'd be amazing. Then he ended it by saying, "This is totally theoretical."
They are making stuff up, have put a lot of money into it, and now they don't know what to do. All the king's horses and all the king's men don't seem to be able to get the Valley to spit out one meaningful mass market, useful product that actually changes the world, other than damaging our power grid, stealing from millions of people, and boiling lakes.
Brooke Gladstone: Last week, Trump announced a $500 billion AI infrastructure plan called the Stargate Project, along with the CEOs of OpenAI, Oracle and SoftBank. Though it was announced at the White House, it's privately funded. President Trump still wants to put his brand on it. Is this about him wanting to tap into that sweet, sweet masculine energy?
Ed Zitron: No, it's just Trump doing what Trump does, which is getting other people to do stuff and then taking credit. There is no public money in it. In fact, there's another little wrinkle. It's not $500 billion, it's up to $500 billion. Another weird detail as well: OpenAI has pledged to give $19 billion, according to The Information, which they plan to raise through an equity sale and debt. The largest round they've raised is $6 billion. Their company loses $5 billion a year. What is going on? Do I have to talk about this nonsense with a straight face? It's all a performance. They're all tap dancing. Then DeepSeek came along and did this and freaked them out so much, because the whole thing's hollow. It's an actual glass onion.
Brooke Gladstone: After the announcement of the Stargate Project, Elon Musk took to social media to criticize the project. After all, he has his own AI empire. I wonder, does DeepSeek threaten all AI tech companies equally?
Ed Zitron: Every single one, because they're all building the same thing. There's very little difference between GPT-4o, OpenAI's model, and Anthropic's Claude 3.5 Sonnet, and Google's Gemini. There are various versions of them, but they're all kind of the same. DeepSeek's V3 model is the one that's competitive with all those, and it's 50 times cheaper. It's so much cheaper and now it's open source, so everyone can build on that. Now, Elon Musk's situation is even weirder. He just bought 100,000 GPUs with Dell. Very bizarre partnership with Michael Dell there.
Brooke Gladstone: Dell makes them?
Ed Zitron: Dell makes the server architecture, they go inside. Dell had the data center, but nevertheless, Grok and xAI. Grok being the chatbot that is attached to Twitter. It's not actually really obvious what any of that is meant to do. Kind of similar to how the AI additions to Facebook and WhatsApp and Instagram don't really make any sense either, but it's actually good to bring that up because everyone is directionlessly following this. They're like, "We're all doing large language models. We're all going to do the same thing." Just like they did with the metaverse. Now, Google did stay out of the metaverse, by the way. Microsoft bought bloody Activision and wrote metaverse on the side.
Mark Zuckerberg lost like $45 billion on the metaverse. Putting aside the hyperscalers, there were like hundreds of startups that raised billions of dollars for the metaverse, because everyone's just following each other. The Valley's despicable right now. It's full of people that build things because everyone else is building something. They don't build things to solve real problems. I think this is actually a larger economic problem too. Why is AI in everything? No one bloody knows.
Brooke Gladstone: The Metaverse was Zuckerberg's effort to create some sort of multimedia space where people could live or something, right?
Ed Zitron: He was claiming it would be the next internet, but really it was just a bucket of nonsense. It was just a bunch of stuff that he did because he needed a new growth market. The metaverse was actually a symptom of the larger problem, the rot economy I talk about, which is everything must grow forever. Tech companies are usually good at it, except they've run out of growth markets, they've run out of big ideas. That's the reason you keep seeing the tech industry jump from thing to thing, things that, as a regular person, you look at and go, "That seems stupid," or, "This doesn't seem very useful."
What's happening is that they don't have any idea what they're doing and they need something new to grow, because if at any point the market says, "Wait, you're not going to grow forever?" Well, what happened to Nvidia happens. Nvidia has become one of the biggest stocks. It has some ridiculous multi-hundred percent growth in the last year. It's crazy. The market is throwing a hissy fit because guess what? The only thing that grows forever is cancer.
Brooke Gladstone: What about the people who say, "Just give us time, it's going to happen"? You make a great case that if this is not a competitive atmosphere here in the US, if something does happen, it'll probably be somewhere else.
Ed Zitron: Yes, but what might happen elsewhere doesn't mean that they're going to find a multitrillion-dollar market out of this. DeepSeek has proven that this can be done cheaper and more efficiently. They've not proven there's a new business model, they've not made any new features. There's an argument right now, a very annoying argument, where people say, well, if the price comes down, that means more things will happen because more people will use it. The Jevons paradox, which Satya Nadella cited. The Jevons paradox says that as the price of a resource comes down, the use of it increases. That's not what's going to happen here.
No one has been not using generative AI because it was too expensive. In fact, these companies have burned billions of dollars doing so. A third of venture funding in 2024 went into AI. These companies have not been poor, and now we're in this weird situation where we might have to accept, "Oh, I don't know." Maybe this isn't a multitrillion-dollar business. Had they treated it as a smaller one, had they said this might be like a $50 billion industry, they would have gone about it a completely different way. They never would have put billions of dollars into GPUs. They might have put in a few billion and then focused up, as DeepSeek did.
We only have so many and they only do so much, so we will do more with them. No, American startups became fat and happy. Even when you put that aside, there was never a business model with this. Come on, it's just so dull. Give me real innovation, not this warmed-up nonsense.
Brooke Gladstone: You say that here, big tech just clones and monopolizes?
Ed Zitron: Yes. What they wanted, I believe, with this, was to create the large language model monopoly by creating the mystique that said, "Okay, these models have to cost this much." The only way they can run is when we have the endless money glitch, and the only way we can build them is with the biggest GPUs you've got. That myth allowed them to tap dance as long as they wanted while also excluding others. Because others, how would you possibly build a model like GPT-4o? You don't have all the GPUs and all the money. Except now maybe that might not be the case. Now, I don't know, people aren't feeling so good out there.
Brooke Gladstone: If this really is all early signs of the AI bubble bursting, what are the human ramifications of this industry collapsing?
Ed Zitron: There will be tens of thousands of people laid off just like happened in 2022 and 2023 from major tech companies. On top of that, I'm not scared of the bubble bursting. I'm scared of what happens afterwards. Once this falls apart, the markets are going to realize that the tech industry doesn't really have any growth markets left. The reason that tech companies have had such incredible valuations is because they've always been able to give this feeling of eternal growth. That software can always magic up more money. It's why the monopolies are so powerful. They've had just endless cash to throw at endless things.
Google owns the marketplace where you buy ads, the marketplace where you sell them, and the way you host the ads themselves. They've never really had to work particularly hard other than throw money at the problem and exclude other people. Once it becomes obvious that they can't come up with the next thing that will give them 10% to 15% to 20% revenue growth year over year, the markets are going to rethink how you value tech stocks. When that happens, you're going to see them get pulverized. I don't think any company is shutting down. I think Meta is dead in less than 10 years just because they're a bad company.
Right now the markets believe that tech companies can grow forever and punish the ones that don't. There are multiple tech companies that just lose money, but because they grow revenue, that's fine. What happens if the growth story dies? It could be like the 2008 housing crash, but specifically for tech, and I fear it. I hope I'm wrong on that one. The human cost will be horrible.
Brooke Gladstone: Even outside of the depression that might be experienced in the tech world, there are so many pension funds out there that may have investments. You're painting a picture of the housing crash of 2008.
Ed Zitron: I actually am. I wrote a piece about that fairly recently. It was at Sherwood, and it was about how OpenAI is the next Bear Stearns. If you look back at that time, and you mentioned the people that might say I'm wrong, that I should just wait, that these things are working out, you could see stories from David Leonhardt over at The New York Times and others talking about how there's not going to be a housing crash. Maybe even a crash would be good, but there's nothing to worry about. There were people at that time talking about how there was nothing to worry about, and they were doing so in detail and using similar rhetoric, "Just wait and see. Things are always good."
"What goes up never comes down," that's the phrase, right? It's horrifying because it doesn't have to be like this. These companies could be sustainable. They could have modest 2% to 3% growth. Google as a company basically prints money. They could just run a stable good Google that people like and make billions and billions and billions of dollars. They would be fine, but no, they must grow. We are the users and our planet too, our economy too, we are the victims of the rot economy, and they are the victors.
These men are so rich. Sam Altman, if OpenAI collapses, he'll be fine. He's a multi-billionaire with a $5 million car. These people, they're doing this because they know they will be fine, that they'll probably walk into another cushy job. It's sickening and it's cynical.
Brooke Gladstone: Well, that's the end. I just have a couple more informational questions.
Ed Zitron: Of course.
Brooke Gladstone: One is, you talked about the transformer-based models. I still didn't get what that was.
Ed Zitron: It's just the underlying technology behind every single large language model. In the same way like servers underpin cloud computing.
Brooke Gladstone: My other question is, there's a $5 million car?
Ed Zitron: Yes, he has like a Koenigsegg Regera. Nasty little man.
Brooke Gladstone: Ed, thank you very much.
Ed Zitron: It's been such a pleasure. I loved it. I'll come back whenever.
Brooke Gladstone: Ed Zitron is host of the Better Offline podcast and writer of the newsletter, Where’s your Ed at. Thanks for listening to On The Media's midweek podcast. Check out the big show on Friday where we'll be talking about lots of things, including posing that perennial question about what the President just did. Is that legal? You can find the show, of course, right here and it posts around dinner time on Friday. Bye.
[00:43:57] [END OF AUDIO]
Copyright © 2025 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.