If You Can’t Beat ’Em… Join ’Em? Journalism in an AI World
John Herrman: Every news organization is desperate for the next thing. Anything that might provide future revenue streams, that's a serious danger and I think it's returning with AI.
Brooke Gladstone: News outlets strike deals with AI companies hoping they'll work better than the disastrous collaborations of the past. From WNYC in New York, this is On the Media. I'm Brooke Gladstone.
Micah Loewinger: I'm Micah Loewinger. If you can’t beat them, join them. How one news outlet is partnering with a startup in an effort to make AI less racist.
Elinor Tatum: Garbage in, garbage out. If there are things that misrepresent our communities in what it's learning from, then what it's going to spit out is going to be misrepresentations.
Brooke Gladstone: How to earn big bucks by reanimating defunct domain names and filling them with AI sludge.
Kate Knibbs: There's a lot of posts about dream interpretation. It's just clearly written by an AI. The worst thing you've ever read in your life.
Micah Loewinger: It's all coming up after this.
Brooke Gladstone: From WNYC in New York, this is On the Media. I'm Brooke Gladstone.
Micah Loewinger: I'm Micah Loewinger. This week's show is all about the use of generative artificial intelligence in journalism.
Brooke Gladstone: People both pro and con grapple with whether to resist apps like ChatGPT or embrace them. We begin with an anecdote, the story of a late great blog called The Hairpin that underwent an unsettling transformation.
Kate Knibbs: When you went to The Hairpin, you were getting something you couldn't get anywhere else.
Micah Loewinger: Kate Knibbs is a senior tech writer for WIRED.
Kate Knibbs: It was part of The Awl network, the collection of blogs that had a very writer-friendly sensibility. Jia Tolentino was an editor, Jazmine Hughes was an editor, Anne Helen Petersen-- it just had a murderers' row of really talented, distinctive voices. It never had a mass audience, but the people who read it loved it. I've heard it compared to The Velvet Underground.
Micah Loewinger: Not that many people bought their album but every one of them started their own band kind of thing.
Kate Knibbs: Yes. Not that many people might have read it but everyone who did became a blogger. [laughs]
Micah Loewinger: I love that.
Kate Knibbs: This website was so special to me and so many other people.
Micah Loewinger: What happened to it?
Kate Knibbs: It just didn't succeed as a business and they decided to fold it.
Micah Loewinger: The story of digital media.
Kate Knibbs: Yes, the sad story.
Micah Loewinger: A couple of weeks ago, Kate heard through the grapevine that the site was mysteriously back online.
Kate Knibbs: Someone from The Hairpin world alerted me to the fact that it had been revived in this wholly bizarre way. It was just like generic content mill nonsense. There's a lot of posts that are about dream interpretation. It's just clearly written by an AI. The worst thing you've ever read in your life.
Micah Loewinger: Typically, I feel like these people who run these content mills aren't that eager to talk to the press or just want to go about their business without too much scrutiny. How did you figure out who now owns The Hairpin?
Kate Knibbs: I could tell you a lie about doing some sophisticated forensic digging and make myself sound really good but I'll just tell you the truth which is I emailed the email on the website and the owner wrote me back and just was very eager to talk about his project.
Nebojša Vujinović: You asked me about Hairpin. What? [laughs]
Micah Loewinger: This is the man who responded to her email, a Serbian entrepreneur named Nebojša Vujinović.
Nebojša Vujinović: You can call me Vujo because I'm Vujinović my second name so Vujo is okay.
Micah Loewinger: Kate Knibbs connected me with Vujo after she profiled him for WIRED this past week, an article titled Confessions of an AI Clickbait Kingpin.
Kate Knibbs: He actually told me that he was very surprised that I was asking about The Hairpin because it wasn't one of his top websites.
Nebojša Vujinović: The Hairpin is not top 20 maybe in my top 100 websites.
Kate Knibbs: He said he had over 2,000, although he did not provide me with a master list. Take that with a grain of salt but I checked that he owned at least a few dozen.
Nebojša Vujinović: I have much bigger websites than this one. It's nothing special for me.
Micah Loewinger: What are some other websites that you own that you're proud of?
Nebojša Vujinović: I'm not especially proud of anything I must tell you because--
Micah Loewinger: [laughs] Why not? This is your life's work. Why are you not proud?
Nebojša Vujinović: No, it's not. I can be proud on my kid. I can be proud of my songs that I write.
Kate Knibbs: He's also actually a pretty popular DJ in Serbia.
[MUSIC - Jelena Vučković & DJ Vujo: Misu moj]
Nebojša Vujinović: I'm singing that part of the song. That song was the most popular song in Serbia for more than two years. Also, it was a hit in Bosnia and Herzegovina, it was hit in Croatia.
Micah Loewinger: You're a celebrity in Serbia.
Nebojša Vujinović: Small celebrity.
Micah Loewinger: This is actually how Vujo got into the content mill game. In 2005 when he was still trying to make a name for himself as a DJ, he noticed that his personal site where he posted his music was getting more and more traffic.
Nebojša Vujinović: I get the idea: okay, I will write about, I don't know, house music, and I'm purchasing housemusic.com, for example.
Micah Loewinger: He quickly learned that starting a new site from scratch and churning out blog posts that advertisers like is a ton of work, and getting people to stumble upon your site is hard too. If you Google house music, you'll probably see bigger, older music sites first.
Nebojša Vujinović: One day I started to buy already established websites. Just imagine you buying websites of, I don't know, closed restaurant, and that restaurant have backlinks from New Yorker, from BBC, from, I don't know, Yellow Pages, from Forbes.
Micah Loewinger: If the restaurant was written about and linked on sites that Google considers to be high quality, then it's more likely to show up on the first page of Google results.
Nebojša Vujinović: Well, of course, it's not so simple but there is a bigger chance for websites like that to rank easier than another domain without backlinks.
Micah Loewinger: This became a big part of his business and it appears to be legal. Every day he says he hangs out on auction sites like GoDaddy looking for dead sites that he can scoop up.
Nebojša Vujinović: I'm buying established websites 1, 2, 3, 4, 5, 10 per day, every day.
Micah Loewinger: Which is how he ended up with an eclectic assortment of sites, including photolog.com, an early Spanish-language competitor of Facebook, and pope2you.net, a former official site of Pope Benedict XVI, and of course The Hairpin.
Kate Knibbs: The most popular site in his stable is another women's media site. Actually, it's called The Frisky. It was launched in the 2010s as well, and at one point it was one of the most popular women's interest websites in the US. It was Cosmo-style. There was a lot of sex content and dating advice, and it went out of business in 2016. The domain was up for grabs at some point and he grabbed it.
Nebojša Vujinović: It was so popular. I think more than 10 people work every day on that website. Yes, real humans, we write about everything, especially about celebrities. Meghan Markle, she was pregnant and that was a huge story. Today, maybe The Frisky earning $100,000 per year. I don't know.
Micah Loewinger: We were not able to verify his earnings from The Frisky.
Kate Knibbs: He's making a lot of money off sex toy companies that still want to do sponsored posts or advertisements. I was looking at The Frisky's search traffic, and all of the top keywords are breast related. When people are searching for things on the internet related to bra sizes, it tends to send them to The Frisky. I think that helps keep the engine running.
Micah Loewinger: In the past year or so, Vujo's engine got a big new upgrade, generative AI.
Kate Knibbs: It just supercharged this weird spammy corner of the SEO industry. Instead of taking four hours to write 12 blog posts, all of a sudden, you can do that in 40 seconds. They primarily use ChatGPT. They just put in prompts and spit out articles and he does say that they fact-check them. I don't know how thorough the fact check is but there's some quality control going on to avoid putting something super offensive on the internet that would end up alienating potential advertisers.
Nebojša Vujinović: We don't publish anything about politics. We write about health. We write about fitness.
Kate Knibbs: It's not something that's super sustainable. He's already losing traffic on a lot of the big properties, including The Frisky because people figure out that it's AI-generated.
Micah Loewinger: The Frisky, the Pope website, it's a little silly but there is a slightly darker side to this which is that he is using the same business model on dead news websites.
Kate Knibbs: Yes. Honestly, the most shocking thing that he owned to me was the English language website for Apple Daily which is a very culturally significant pro-democracy newspaper that was based out of Hong Kong that was shut down in quite a dramatic fashion a few years ago.
Male Speaker 1: The newspaper has had financial trouble since its assets were frozen after the arrest of, of course, its founder Jimmy Lai, the billionaire media tycoon.
Kate Knibbs: He was a very, very outspoken critic of the Chinese government.
News clip: He's a frequent visitor to Washington and has been labeled by Beijing as a traitor
Kate Knibbs: Jimmy Lai is currently under arrest, as are several of his top editors.
News clip: Charged with "conspiracy to collude with foreign forces". His crime was running a media outlet that wouldn't toe the party line.
Kate Knibbs: Apple Daily was very important to the pro-democracy movement.
Micah Loewinger: I'm looking at it now. It's just appledaily.com, right?
Kate Knibbs: Yes, appledaily.com.
Micah Loewinger: Funny Cool Username Ideas - A Guide to Creating Memorable Online Handles. Then we got, Unlocking LeBron's Recovery Secrets. Under the heading, World, like this is like world news. Eight Tips to Take Your Healing Seriously. Then under actors, the actors heading, we see 45+ Happy Birthday Wishes for Teacher.
[laughter]
Kate Knibbs: They're not even trying to hide the fact that it's AI-generated, right? This is an important media outlet. It's really unsettling to see a news outlet emptied out and replaced by the complete opposite of what it stood for.
Micah Loewinger: Can I ask you about Apple Daily?
Nebojša Vujinović: Yes.
Micah Loewinger: Because I do think some of our listeners would be really disappointed to learn that this important website was shut down and is now posting AI content.
Nebojša Vujinović: I understand, but there is a lot-- I live in Serbia. I live in ex-Yugoslavia. There is a lot of things here that--
Micah Loewinger: Vujo's English isn't super clear here, but he went on to talk about growing up in Bosnia during the war, which he says destroyed his childhood. He referenced a hospital near his home that NATO bombed in the '90s, injustices that feel bigger to him than putting AI clickbait on a dead news website that the Chinese government shut down in Hong Kong.
Nebojša Vujinović: I'm not part of that story. There is a lot of bad things in this world. A lot of things is not right. If I buy some domain legal and create anything what I want, is it bad thing? Does it change anything if I put on that website, "peace in the world," is it change anything in the world? No, it's not. I think you understand what I want to tell you.
Micah Loewinger: I do understand. I don't think that you're responsible for the website going away, but AI is helping accelerate the death of journalism. Do you think about that at all?
Nebojša Vujinović: I'm afraid AI can be used for bad things. I'm not a fan. What is the opposite of fan? Hate. A hater?
Micah Loewinger: You hate AI?
Nebojša Vujinović: Maybe I hate, I especially if I can see people losing jobs because of AI. You're a journalist, you're afraid about your business because this striking journalism for sure, striking all writers, all content creators. I writing songs today, so yes, also I'm afraid it will make better music and play better music than me as a DJ. Yes, I understand, but--
Micah Loewinger: But of course, he talked about how useful and popular ChatGPT is. He cited a projection that I've seen quoted widely in the press that by 2025, 90% of online content could be generated by AI. Vujo, you are helping create an internet where there is less and less that's human. Is that an internet that you want to be on?
Nebojša Vujinović: No. Yes, I agree with you, but I hate also using cars with oil or petrol and destroying our planet.
Kate Knibbs: He said, "I like horses. I drive a car because we live in a society where you have to drive a car."
Nebojša Vujinović: Probably you don't also like destroying our planet, and you are still using car too.
Kate Knibbs: That's how he feels about AI like, this is the way things are going, so I'm going to go in the direction that the world is already moving.
Micah Loewinger: Can't beat them, join them. That's what I'm hearing you say.
Nebojša Vujinović: Something like that. That is good one. That's it.
Kate Knibbs: He's not sitting there being like, "I'm going to destroy a beloved independent women's media blog. I'm going to create this perverse desecration of this important pro-democracy Hong Kong news outlet." He is simply taking advantage of an opportunity that has been presented on the internet that has a very low barrier of entry, and that's it. He just wants to make money, and I think that's how a lot of the people who are making the internet worse operate.
Micah Loewinger: Kate, thank you very much.
Kate Knibbs: Thank you so much for having me.
Micah Loewinger: Kate Knibbs is a senior writer for WIRED. Her latest piece is titled Confessions of an AI Clickbait Kingpin.
Brooke Gladstone: Coming up, Vujo says, "If you can't beat them, join them," while the New York Times has other ideas.
Micah Loewinger: This is On the Media.
[MUSIC - Jelena Vučković & DJ Vujo: Misu moj]
Micah Loewinger: This is On the Media, I'm Micah Loewinger.
Brooke Gladstone: I'm Brooke Gladstone. Journalism has entered an era of love-hate relationships with AI. In December, The New York Times became the first major media organization to take a chatbot creator to court.
News clip: The New York Times suing OpenAI, the creator of ChatGPT, and Microsoft for copyright infringement. The Times says that millions of articles published in the paper were used to train automated chatbots that now compete with it as a source of reliable information.
News clip: The suit says that the defendants should be held responsible for "billions of dollars in statutory and actual damages".
Brooke Gladstone: OpenAI told NBC that it hopes to "find a mutually beneficial way to work together as we are doing with many other publications", and so it is.
News clip: OpenAI inking a deal with the parent company of Politico and Business Insider, that's Axel Springer. The multi-year agreement compensating Axel Springer for the content OpenAI will use to generate answers on ChatGPT and train its models.
Brooke Gladstone: The Associated Press signed a similar deal with OpenAI earlier last year. Now, OpenAI is reportedly in talks with CNN, Fox Corporation, and Time to license their work. News Corp CEO Robert Thomson said in an earnings release earlier this week that the company much prefers "negotiation to litigation." On Monday, Microsoft, which holds a major stake in OpenAI's for-profit arm and has the right to commercialize its inventions, announced partnerships with five news organizations: Semafor, the Craig Newmark Graduate School of Journalism at CUNY, the Online News Association, the GroundTruth Project, and Nota, itself an AI company designed for publishers. Of course, not all these deals are the same.
John Herrman: Well, there are two kinds of deals that we're hearing about, and they often get muddled together, which I think generally works to the benefit of OpenAI and Microsoft here.
Brooke Gladstone: John Herrman is a tech columnist for New York Magazine.
John Herrman: One kind of deal is with Semafor, with GroundTruth, with the Craig Newmark school at CUNY, the ONA, and Nota. These are deals that are providing access to AI tools for news gathering and news production to experiment with large language models, text generation tools to see if there's some way that these can make news production easier or quicker or more effective, or if there are ways to use AI to like dig into big datasets. That's all very interesting and appealing to think about as someone who works in media.
The other kind of partnership, which is much more consequential and also much more tense is the type of partnership that OpenAI has with Axel Springer, for example, which is the parent company for Business Insider and a bunch of German language publications. That involves OpenAI paying a licensing fee of tens of millions of euros over a few years to put Axel Springer news and content and analysis into OpenAI products like ChatGPT. That's the result of a little bit more of a negotiation to avoid conflict or potentially to avoid lawsuits. They're really two very different kinds of partnerships.
Brooke Gladstone: Then you've got in December, the New York Times lawsuit against OpenAI for copyright infringement. What was the Times's argument?
John Herrman: The Times filed what I think many in the industry see as the definitive and most credible lawsuit of its kind against an AI firm, alleging that OpenAI had trained its models on years and years of New York Times content, that this training was not covered under fair use, and that not only was OpenAI using this data to create software that could compete with the New York Times product by creating articles that were pretty solid, but also that you could get ChatGPT to regurgitate full passages from published New York Times content, which challenges the core defense that OpenAI had mounted for months at that time: that these models don't contain information, they just contain statistical relationships between different things that can produce similar outputs.
Now, OpenAI says that that's a glitch and that they're fixing it. But the Times argues that this is a question that isn't simply resolvable by shouting fair use, that this is new territory, and that at the very least, there needs to be precedent set around this question.
Brooke Gladstone: I've seen the New York Times lawsuit framed as a fight for the future of journalism. Do you think this is an existential battle?
John Herrman: I think that, broadly speaking, the fact that these new AI technologies can automate at least the basic processes of a lot of what we think of as creative work does present a real threat, if not to the practice of journalism or being a musician, then to, for lack of a better term, the business models of creativity. I think that's really, really obvious. I don't necessarily think that the fates of the New York Times and OpenAI tell the whole story. OpenAI is probably the premier AI firm in the public's mind right now, but lots of companies are developing very similar technologies.
Brooke Gladstone: The New York Times is one of the nation's premier news outlets, even if people frequently quibble over it, so I would think if anything would determine the direction of where this would go, it might be this lawsuit.
John Herrman: I don't want to minimize the potential influence that they have here, but in the current media environment, the New York Times is also an interesting and strange outlier. I should disclose that I worked there for seven years. It's very large, it's doing very well, it's subscription-supported, one of a very small number of truly national news organizations. What matters for the New York Times doesn't necessarily matter for the rest of the news industry in a clear way, but I do think that the outcome of this lawsuit could set a valuable precedent. I also think it's worth reading the actual text of the lawsuit and OpenAI's response to get some background here, which is that they were in negotiations for a deal that might have been quite a bit like Axel Springer's deal, considering licensing options and what an equitable fee might look like, and then things fell apart.
Brooke Gladstone: Would you say it's fair to conclude that the AI companies are motivated to partner with news in order to prevent similar kinds of lawsuits like that being brought by the New York Times?
John Herrman: I think that it's fair to read, for example, the Axel Springer deal as a way to both suggest that these partnerships are possible and also to say to other news organizations, "Hey, let's talk first." From the perspective of news organizations, the arrival of these generative AI tools was very abrupt, very threatening, and came on the tail end of a long and disappointing era of tech and media partnerships.
Brooke Gladstone: Right. You wrote in your piece that it's easy to fold such deals into the prevailing narrative of AI dominance. Have venerable publishers lined up to partner with tech firms once again, despite what happened last time around, and the time before that, and the time before that? You mentioned some deals in the past. In the 2010s, Facebook approached news organizations like the New York Times and said, "Hey, now we're going to focus on video. You should be doing video." In order to keep traffic flowing, news organizations diverted a lot of their diminishing resources to producing video to go on Facebook. Then what happened?
John Herrman: When Facebook started sending lots and lots of readers to news publishers and news publishers started adapting their strategies to cater to those visitors and to reach more people on social media, Facebook sensed an opportunity. In the mid-2010s, they were thinking, "Oh, we need to compete with YouTube. Everything is going to be video in the future. How can we build that out ourselves?"
One cheap way to do that was to partner with companies like the Times and say, "Hey, if you produce, for example, live video broadcast for us on a regular schedule for this period of time, we'll pay you a few million dollars. You will get lots and lots of viewership because our platform is now funneling people to these new video features. This is a win-win for you guys."
It all felt good at the time. What happens then is news organizations staffed up for live video, even if that wasn't something they were good at before. They produced these videos for a limited time. I think about a year. Then because it was companies like the New York Times and BuzzFeed industry leaders that were doing this, lots of smaller companies that didn't have direct partnerships, they think, "Oh, we should pivot to video too." You get these people chasing these somewhat artificial trends ending up out on a limb when Facebook decides that maybe live video isn't going to be the main thing that people see on Facebook. That was the recurring story of the 2010s.
Brooke Gladstone: Are the businesses of AI and journalism essentially compatible or not?
John Herrman: I think they should be considered essentially incompatible. These are very different types of firms doing different things, but with interests that sometimes align. The Times, in particular, has been fairly open to deals with companies like Google and Meta but has also been fairly cautious. It's a big institution. There's a lot of resistance to fundamental change there, which has worked out in their favor in this case.
They might take a few million dollars from Google to produce a series of VR videos that you have to view by putting your smartphone in a cardboard pair of goggles and they can do that without disrupting their business operations and maybe pocketing a little bit of money, but when they do that, it can often be mistaken for what everyone else has to do. In an industry where virtually every news organization is desperate for the next thing, anything that might provide future revenue streams, that's a serious danger. I think it's returning with AI.
Brooke Gladstone: Do you think that these deals are just about trying to cash in for now even if this thing kills them down the road?
John Herrman: Yes, I think that's a fair way to look at, for example, the Axel Springer deal. One thing it's worth pointing out here is that what OpenAI is paying for is the right to include links from articles and content from articles in products like ChatGPT. This contract is premised on the idea that everyone is going to be using this and, of course, they're also going to be using these chatbots to keep up with the news. There are a lot of predictions implicit in this deal that won't necessarily come true. Maybe chatbots aren't the future of news. In that case, Axel Springer looks pretty smart in hindsight.
The other possibility is that these AI technologies are going to find their way into virtually everything we use on the internet and the way that they collect and represent up-to-date information about the world is potentially a serious problem for them and something they're going to have to spend a lot of money on. In that case, in hindsight, Axel Springer might not look so smart. They might look like they gave something away for a lower price than they should have.
If the web is becoming ragged and full of spam and AI-generated content, if our real-time sources of information, like Twitter or X and Instagram and Google search, are all becoming polluted, maybe having a consistent feed of reported, reliable, valuable information about the outside world is incredibly valuable to an AI firm in the future.
Brooke Gladstone: You've worked with BuzzFeed and at the New York Times when both companies were experimenting with new technologies and big tech partnerships. Déjà vu, maybe? Does this moment feel different?
John Herrman: This moment feels different than, for example, the era of rising social media because at least then there was a sense of synergy and collaboration. A bunch of people are using Facebook, but they're also reading news there. We make news and so maybe this works out somehow. Here we've got the arrival of new technologies that are just basically automating some of the basic functions of news production.
Now you can argue and I think convincingly that they're nowhere near capable of producing valuable stories, valuable analysis, but they're trying and so it's a little more antagonistic to start. This isn't about two industries aligning temporarily and then drifting apart inevitably. This is two industries smashing into each other right at the beginning of their relationship.
Brooke Gladstone: You said to our producer, "What were we supposed to do with Facebook? People did different things, but no one won."
John Herrman: Right. That's the tragedy of covering the media's relationship with tech for the last decade, is that people made a lot of mistakes, but even in hindsight, it wasn't clear what most news organizations were supposed to do. Social media took away what was left of their revenue models. It said, "Hey, we are a better advertising product than you are." "Hey, we're better at attracting huge numbers of readers than you are." What's left for you is the expensive work of gathering and publishing news.
Yes, there is some déjà vu here with AI tools where there are smarter decisions and there are unwise decisions that you might make now, but we are also at the beginning of potentially a pretty big change in how people interact with information. I have to be frank, it's scary.
Brooke Gladstone: Thank you very much, John.
John Herrman: Thanks for having me on.
Brooke Gladstone: John Herrman is a tech columnist for New York Magazine. As AI technologies advance, many critics observe that these tools replicate the prejudices of the data they train on.
News clip: One UC Berkeley professor was able to trick ChatGPT into writing a piece of code to check if someone would be a good scientist based on their race and gender. A good scientist, it found, was white and male.
Brooke Gladstone: In 2020, a prominent researcher named Timnit Gebru said that Google fired her after she highlighted harmful biases in the AI systems that support Google's search engine. Today, she runs a research institute rooted in the belief that AI is not inevitable, its harms are preventable, and when it includes diverse perspectives, it can even be helpful, beneficial, but--
Timnit Gebru: We should have guardrails in place and we should make sure that the group of people involved in creating the technology resemble the people who are using the technology.
Brooke Gladstone: New players have joined the field to address that issue. In December, a startup called Latimer AI announced a licensing agreement with the largest and oldest Black newspaper in New York City, the New York Amsterdam News. The partnership began when Latimer's founder, John Pasmore, approached an old friend, Elinor Tatum, the publisher and editor of the paper.
Elinor Tatum: It, to me, was a no-brainer because we know how our community can be so misrepresented in media in general, and because of that and the way large language models learn.
Brooke Gladstone: The biases are built into the models. They scrape the internet, and there's a lot of real garbage out there.
Elinor Tatum: Garbage in, garbage out. If there are things that misrepresent our communities in what it's learning from, then the idea of being able to be a part of something that is going to give a correct narrative, I thought, was something very important.
Brooke Gladstone: Lewis Latimer, I understand, was a Black inventor whose legacy and scientific contributions were often overlooked. That's who the company's named for. Tell me about this company, Latimer.
Elinor Tatum: They're working very hard to make sure that the information that is coming from sources that are Black is getting out there to the public. They are actually training the model, partially based upon the archives of the Amsterdam News going back to 1926. There may be some things that just weren't covered in other media that were covered by the Amsterdam News. If we look at the Central Park jogger case, for instance, we will see very different coverage coming out of the Amsterdam News than we would've seen out of any of the other newspapers. We may see a difference in what Latimer would produce versus another AI search because there would be very different information, even coverage of the Macy's Thanksgiving Day parade.
Brooke Gladstone: What do you have in your mind there?
Elinor Tatum: Because it used to start in Harlem.
Brooke Gladstone: I understand you spent several months on figuring out how to work together. Can you tell us anything about your arrangement?
Elinor Tatum: The actual agreement is confidential with Latimer, but I can say that what we have right now is not permanent and we will be renegotiating our relationship as we get a better understanding of what the real value is around the data. This is all very new, especially in terms of Latimer because they're very much a startup.
Brooke Gladstone: When you talk about an evolving relationship, do you expect to ever make any money out of it?
Elinor Tatum: I certainly believe that we will. There's definitely a number attached to it, and the model is going to be looking to be placed in places like HBCUs across the country as a starting point and go from there. They've already got relationships set up with several HBCUs around the country.
Brooke Gladstone: Latimer said in its press release that it's "constructing an LLM that represents the future of AI, where these models are built to better serve distinct audiences." Clearly, in this case, the distinct audience includes the countless people who've been served for over a century by the Amsterdam News and by historically Black colleges and universities. That's great. If you had the chance, Elinor, would you want to combat these built-in biases that your archive could help correct by training a much bigger platform intended to reach nearly everybody, like ChatGPT?
Elinor Tatum: Well, doesn't everyone have to start somewhere?
Brooke Gladstone: Yes, but if you had a chance, you'd go as big as you could.
Elinor Tatum: Well, I would like to see Latimer be as large or larger than ChatGPT or any of these because I believe that it could be with the right technology, with the right infrastructure, with the right information being inputted into it. You see all of the world needs to get the diversity that Latimer is going to provide. I am hoping that Latimer gets into every HBCU in the country to start with, and then to libraries across the country, public libraries, then the general public. They're already signing up. Just general internet users are using it already. I'm hoping that it's another commonplace usage, just like ChatGPT.
Brooke Gladstone: It's really refreshing to hear this perspective. It's unique, because it's not based on, well, if you can't beat them, join them. It's not focused on trying to have a more efficient operation based on fancy AI tools making lots of money, or even about losing less money at this point. It really is about improving the media ecosystem.
Elinor Tatum: Absolutely. I really feel strongly about Latimer because if you don't have the voices of the people that are being represented, you're not going to have a correct representation of people. That's why I feel it is so very important to have our voices included in all media, and that includes these large language models.
Brooke Gladstone: You have no fear of AI taking journalism down.
Elinor Tatum: I think everyone has some fears of it, but journalism is still very much needed and I want to make sure that there is information out there that is quality information that's going to be added to it. Now, does AI need some help? Are there a lot of issues? Yes. AI has a lot more learning that needs to be done and with every day, with every week, every month, and every year, advances are made and more advances need to be made, but it's an ever-evolving process and I'm looking forward to see what comes next. I'm very excited to be a part of it.
Brooke Gladstone: What sort of a future are you hoping to build together?
Elinor Tatum: Well, one that is long and lucrative but also one that is going to bring information to people that shows the true breadth with texture and color of our communities, that tells the stories and brings out the information that has been so long overlooked by other keepers of history so when people ask questions, they get the answers that aren't so easily found.
Brooke Gladstone: Elinor, thank you very much.
Elinor Tatum: Well, thank you for having me.
Brooke Gladstone: Elinor Tatum is the editor-in-chief of the New York Amsterdam News.
[music]
Micah Loewinger: Coming up: with AI, it's easy and profitable to make highly trafficked and highly stupid conspiracy videos.
Brooke Gladstone: This is On The Media.
[music]
Brooke Gladstone: This is On The Media, I'm Brooke Gladstone.
Micah Loewinger: I'm Micah Loewinger. A couple weeks back ahead of the Democratic primary in New Hampshire--
Male Speaker 6: An AI-generated call is falsely telling Democratic voters not to vote in tomorrow's primary. Here's part of that false AI-generated call.
AI Joe Biden: What a bunch of malarkey. You know the value of voting Democratic when our votes count. It's important that you save your vote for the November election.
Micah Loewinger: This bogus call, which reached as many as 25,000 phones across the state, prompted the Federal Communications Commission this week to outlaw such AI phone fakery. The episode highlights how effective and effortless these AI tricks have become, and how those charged with combating them are always one step behind. This is especially true at TikTok, where videos of conspiracy theories, really dumb conspiracy theories are reaching millions of eyeballs and generating serious money.
Abbie Richards is a misinformation researcher and a senior video producer at Media Matters, a left-leaning watchdog group. She's been studying the viral tactics behind this growing cottage industry.
Abbie Richards: You start off by saying something that is utterly unhinged.
AI: Government just captured a vampire and tried to keep it a secret. On October 10--
Abbie Richards: Then what you do is you create usually a fake main character, typically an explorer or a scientist.
AI: Alejandro Suarez was an explorer from West Palm Beach.
Abbie Richards: You describe the adventure through which they make this discovery.
AI: Alejandro walked for an hour in the woods until he reached a large rusted security fence.
Abbie Richards: Then it all turns out to be a coverup. At that point, the goal is to really just waste time and tell a long story because you want it to be over 60 seconds long.
Micah Loewinger: My favorite one that you identified in your piece is the quote-unquote, "Joe Rogan" clip of him talking about some scientist who overheard a conversation about an asteroid that's going to destroy planet Earth or something and the government doesn't want us to know about it.
AI Joe Rogan: We are all probably going to die in the next few years. Did you hear about this? There's this asteroid that is on a collision course with Earth. Pull it up, Jamie. Apparently this--
Abbie Richards: My favorite thing about the AI Joe Rogan conspiracy theories, they almost always start with a clip of him talking into the mic. They're not even trying to dub it, so they put the captions right over his mouth. [laughs]
Micah Loewinger: Part of the reason those videos work is that, yes, the content is really absurd but it's also something you could imagine Joe Rogan being like, "Oh my god, dude, I just read this crazy--" It works, you know?
Abbie Richards: Oh, it does. I saw one that was him saying that the US stayed in Iraq because they were looking for a Stargate. I was like, you know what, I could imagine him saying this.
[laughter]
Micah Loewinger: As you mentioned, the fact that these videos are over 60 seconds is important to the people who are trying to monetize the videos because it plays a role in TikTok's creativity program. Can you describe that?
Abbie Richards: You have to be in an eligible country. You have to be at least 18 years old. You have to have at least 10,000 followers and you have to have at least 100,000 video views in the last 30 days. Once you join the creativity program, the videos that you produce that are over 60 seconds long are eligible for monetization.
Micah Loewinger: Tell me a little bit about the kinds of accounts that are sharing these videos and how many views they're getting.
Abbie Richards: I found accounts that were getting 20 million, 30 million views on some of these videos. We identified these two accounts. One was English language and one was Spanish language, both of which were receiving millions of views. They appeared to be affiliated: they had the same name translated in English and Spanish, and they had the same profile picture. The English language account had received over 342 million views since it began posting in February of last year. Then the Spanish language one had received over 329 million views, and it only started posting in September. That's just one account in each language, and it's doing really well posting this AI voice conspiracy theory content. This account in particular is very obsessed with Megalodons, fun fact.
Micah Loewinger: When we say AI-generated, there are multiple generative AI tools that are being used on each of these videos.
Abbie Richards: Yes. It varies depending on the creator and the video itself. All of the videos that I'm pointing to here are definitely using an AI text-to-speech program. That's how we're getting an AI Joe Rogan. That's also how we get this voice that I'm sure everybody has heard. His name is Adam. He's in the ElevenLabs software. He's reading a lot of these.
AI: Micah Loewinger is a chill guy who loves hanging out.
Abbie Richards: Then on top of that, there's often AI-generated images in the video, because just listening to AI Joe Rogan wouldn't really be that interesting. Sometimes it's all AI-generated images, sometimes they're mixed with just regular images. In the Discord servers where they talk about how to make this content and share tips, they often will recommend using AI to help you write the script or come up with the ideas.
Micah Loewinger: When you say Discord server, you're referring to the kind of cottage industry that rests on top of the actual videos and channels themselves. There are, as you said, whole Discord servers, Medium articles, YouTube channels, and these hustle bro guru influencers who claim that they can help other TikTokers make it big. What are they preaching and what are they hawking?
Abbie Richards: It seems like a pretty classic get-rich-quick scheme to me. They're offering courses or coaching, one-on-one advice and feedback on your content, teaching you how to essentially create content that will go as viral as possible that you can monetize and then make money off of.
Micah Loewinger: You actually hung out in some of the Discord servers. What did you find?
Abbie Richards: They're talking about how to essentially make more money. One person said, for example, if it's a conspiracy channel post podcast clips about how they're poisoning the food supply and then link an affiliate product that is meant to detoxify the body. I'm like, I love when they just say what they're doing. It makes my job easy.
Micah Loewinger: They just spell it out. Putting aside the fact that a lot of these videos fall into this dumb occult genre of like vampires and like wendigos and these kinds of things, what are the major tells that some of these videos are AI-generated?
Abbie Richards: The voice is the first giveaway often, but then there's small details that are just wrong, like the wrong number of fingers or asymmetry distortion. If there's ever any text in the image, it's usually not any language that we've ever seen before. AI is still pretty bad at language. Also, they have a certain look to them. Like when you look at art and it makes you feel nothing.
Micah Loewinger: Yes, that's this. If it's so obviously fake, a lot of the people sharing them probably think that they're funny, or they just think it's a captivating story. Let's just be charitable here and assume that a lot of people are not convinced. Then what's the harm?
Abbie Richards: The harm is that we are essentially pushing out content that teaches people to think about the world in a way that's really broken. It's a really unhelpful framework for understanding the world. Even if they know that the AI is AI, we still have a problem with viral conspiracy theories. It pulls us away from understanding how our world actually functions. I'm less concerned about what comes off as real and more concerned about just how easy it is to make this misinformation at scale.
Micah Loewinger: And make money from it.
Abbie Richards: Yes. It's super profitable and you can just pump it out. The people that are making this content, a lot of them probably aren't even really deep believers in conspiracy theories. They're just following the money. We need to make sure that pumping out conspiracy theory content just isn't profitable.
Micah Loewinger: The 2024 presidential election is approaching fast and researchers have been voicing concerns over AI-generated misinformation and disinformation. There was of course that AI-generated Biden robocall. The FCC has just been granted the power to start pursuing legal actions against people who might be creating this stuff. On TikTok, how much political AI-generated content are you seeing? Have users been digging into this particular niche as a potential business model as well?
Abbie Richards: The type of people who make content about a dragon being discovered in Antarctica, they probably aren't as interested in niche political conspiracy theories because that's much more likely to be demonetized and it has a smaller audience. They're really going for just scraping as many people as possible. That's not to say I haven't seen a lot of political ones. Did you see the Biden tap water one?
Micah Loewinger: No.
AI: Joe Biden controls you through the water you drink. Yes, you heard that right.
Abbie Richards: It uses an AI-generated image of Joe Biden over a sink.
Micah Loewinger: If I drink Joe Biden's water, then he gets to control my actions or something? [laughs]
Abbie Richards: I think so. Honestly didn't follow the plot that much. They have laid out an entire framework and provided a vast amount of resources and YouTube instructional videos on how to make this sort of content that goes as viral as possible.
Micah Loewinger: You're saying the infrastructure that these people have created, the educational materials, the Discord servers, the how-to guides all over the place could be used by anyone for anything?
Abbie Richards: Yes. That is concerning when we pair that with an electorate that's already primed for lots of conspiracy theories and then we're mixing that with AI that can just create this content at a scale that like we've never seen before.
Micah Loewinger: You said that election misinformation has been a problem on TikTok in the past. Do you think the platform has learned any lessons and is equipped to moderate itself this time around?
Abbie Richards: Maybe they've learned some lessons, but I don't think that any platform should be walking into this election thinking that they're safe and that they have all their bases covered.
Micah Loewinger: Abbie, thank you very much.
Abbie Richards: Thank you so much for having me.
Micah Loewinger: Abbie Richards is a video producer at Media Matters. Her latest piece is titled TikTok Has an AI Conspiracy Theory Problem.
AI: Numerous studies have concluded that [unintelligible 00:49:45] actually causes depression and disrupts other harmonies in people's bodies, making them more susceptible to suggestion and thus easier to manipulate.
Micah Loewinger: That's it for this week's show, On the Media is produced by Eloise Blondiau, Molly Rosen, Rebecca Clark-Callender, and Candice Wang with help from Shaan Merchant.
Brooke Gladstone: Our technical director is Jennifer Munson. Our engineers this week were Andrew Nerviano and Brendan Dalton. Katya Rogers is our executive producer. On the Media is a production of WNYC Studios. I'm Brooke Gladstone.
Micah Loewinger: I'm Micah Loewinger.
[music]
Copyright © 2024 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.