BOB GARFIELD: The first word on this week's LA earthquake was probably delivered by the morning news team at KTLA Channel 5, who felt the tremor and ducked under their anchor desk.
[CLIP]:
CORRESPONDENT: Thank you. Coming up, more problems for a troubled…
CORRESPONDENT: Earthquake, we’re having an earthquake!
[END CLIP]
BOB GARFIELD: But the first reporter to write the story was fearless, because it was an algorithm. Specifically, it was a program called Quakebot, which harvested data from the US Geological Survey's alerts and then spun it into text. That allowed the LA Times to release news of the quake faster than any other media outlet. The Times later updated the story using human journalists, but the scoop was undeniably robotic. Nick Diakopoulos, a Tow Fellow at Columbia Journalism School, says experimenting with automated reporting is a growing trend in the newsroom.
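For readers who want to see the shape of such a pipeline, here is a minimal Quakebot-style sketch in Python. It is not the LA Times' actual code; the story template and magnitude threshold are invented for illustration, and the feed URL points at the USGS public GeoJSON summary of recent quakes.

```python
# Minimal Quakebot-style sketch (not the LA Times' actual code): poll a USGS
# GeoJSON feed and fill a story template from the structured fields.
import json
import urllib.request

# Public USGS summary feed of earthquakes from the past hour (GeoJSON).
FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/all_hour.geojson"

# Hypothetical story template, filled entirely from feed fields.
TEMPLATE = (
    "A magnitude {mag:.1f} earthquake was reported {place}, "
    "according to the U.S. Geological Survey. "
    "This post was generated automatically from USGS data."
)

def latest_quake_story(min_magnitude=3.0):
    """Return a templated story for the most recent quake above the threshold."""
    with urllib.request.urlopen(FEED) as resp:
        feed = json.load(resp)
    for event in feed["features"]:
        props = event["properties"]
        if props["mag"] is not None and props["mag"] >= min_magnitude:
            return TEMPLATE.format(mag=props["mag"], place=props["place"])
    return None

if __name__ == "__main__":
    print(latest_quake_story() or "No qualifying quake in the last hour.")
```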
NICK DIAKOPOULOS: I know the LA Times also has a bot that collects homicide data and tries to generate stories based on that. There’s another project out of Temple University, where they’re looking at education data that’s been scraped about public schools and trying to sort of generate snippets of stories.
BOB GARFIELD: Is it belaboring the obvious to ask, what's the advantage for publishers to use bot-generated reporting?
NICK DIAKOPOULOS: I think the hope is that these techniques won't put people out of a job, but they’ll sort of increase the entire size of the pie, in terms of the content that gets generated, and also increase the ability for publishers to cover stories that might not have been covered otherwise.
BOB GARFIELD: Okay, so far it's all thumbs up. But as algorithms increasingly intrude into areas arbitrated by human beings, what else should we be concerned about?
NICK DIAKOPOULOS: When you have an algorithm that's generating content for you, what are the editorial biases of that algorithm? Could that algorithm be censoring something or filtering it or emphasizing a particular aspect of a story through the way it’s programmed? So that’s one thing to be wary of.
Another thing is having some transparency for the provenance of the data that’s driving these algorithmic content generators. So I think what was interesting about the Quakebot at the LA Times is that at the bottom of the story it said very clearly, this story was created by an algorithm written by the author of this story, and that label also mentioned that the data came from the USGS.
You know, by telling people where the data came from, it gives them an extra sort of signal about the credibility or the trustworthiness of this thing.
BOB GARFIELD: Garbage in, garbage out; biased data in, biased story out.
NICK DIAKOPOULOS: Exactly. And when you get into questions of security and bots that might be generating financial stories, if the data stream is hacked in some way or otherwise subverted, would that then affect the content that’s generated by the algorithm and, in turn, could that content perhaps affect markets, if that data quality were skewed in a particular way?
BOB GARFIELD: Can we peek over the horizon and see anything like robotic news judgment?
NICK DIAKOPOULOS: There have been systems, prototype systems that have been developed that have looked at generating summaries of sporting events using online media data. And what they’ll do is they’ll actually try and create a different perspective on the same event, to sort of show, what is a summary of this game from one particular team's point of view versus what is the summary of that same game from a different team’s point of view? You could imagine this algorithm as embodying two different sets of potential editorial criteria where it’s trying to cover one perspective versus another.
BOB GARFIELD: A Fox algorithm and an MSNBC algorithm reporting on the same event, yielding very different stories.
NICK DIAKOPOULOS: Absolutely.
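To make that idea concrete, here is a toy sketch of perspective-dependent summaries. The teams, score, and wording are entirely hypothetical and are not taken from any of the prototype systems Diakopoulos mentions; the point is only that the same data can be rendered under different editorial criteria.

```python
# A toy sketch of perspective-dependent summaries: the same final score rendered
# with different framing depending on whose point of view the template takes.
GAME = {"home": "Lakers", "away": "Celtics", "home_pts": 98, "away_pts": 104}

def summarize(game, perspective):
    # Work out which side "we" are from the chosen perspective.
    us, them = ("home", "away") if perspective == game["home"] else ("away", "home")
    our_pts, their_pts = game[f"{us}_pts"], game[f"{them}_pts"]
    rival = game[them]
    if our_pts > their_pts:
        # Winner's framing: lead with the victory.
        return f"{perspective} beat {rival} {our_pts}-{their_pts} in a statement win."
    # Loser's framing: lead with the setback.
    return f"{perspective} fall to {rival} {their_pts}-{our_pts} despite a late push."

print(summarize(GAME, "Celtics"))  # Celtics beat Lakers 104-98 in a statement win.
print(summarize(GAME, "Lakers"))   # Lakers fall to Celtics 104-98 despite a late push.
```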
BOB GARFIELD: Now, you're not just some Eisenhower saying beware of the publisher-software complex. You are actually, yourself, going into algorithms, reverse-engineering them to understand how they might, for example, bias or censor the news. How are you doing that?
NICK DIAKOPOULOS: If you imagine the way a biologist studies a cell, they would poke it a thousand times and see what happens to that cell. You can sort of poke and prod an algorithm, not a hundred or even a thousand times, but a million times and see how it reacts and record both of those prods and the reactions as data and start to home in on and clarify what some of the editorial criteria of the algorithm might be, how the algorithm might be censoring certain things, if the algorithm is making mistakes that might be consequential, how the algorithm is tuned to perhaps prefer one type of mistake over another type of mistake.
Of course, we can’t see everything. You know, to get a more holistic view of how an algorithm is operating, it helps to talk to the designers of the algorithms, finding out what kinds of design decisions were made, what the intent was of the design, and so on. And I think combining that human-level understanding of the social system that gave rise to the algorithm with this sort of reverse-engineering approach can start to really help us clarify how those algorithms are operating.
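As a rough illustration of the poke-and-prod approach, the sketch below treats a made-up rank_stories function as the opaque algorithm under study, probes it with many synthetic story lists, and tallies which topics keep getting pushed out of the visible slots. Every name and behavior here is invented for the example, not drawn from any real news ranking system.

```python
# Toy black-box audit: probe an opaque ranking function many times, record the
# input/output pairs, and look for topics that are systematically suppressed.
import random
from collections import Counter

TOPICS = ["politics", "crime", "sports", "health", "business"]

def rank_stories(stories):
    """Stand-in for the algorithm under study (purely hypothetical):
    it quietly down-ranks one topic."""
    return sorted(stories, key=lambda s: (s["topic"] == "health", -s["score"]))

def audit(n_probes=10_000, top_k=3):
    suppressed = Counter()
    for _ in range(n_probes):
        # Synthetic probe: ten stories with random topics and scores.
        probe = [{"topic": random.choice(TOPICS), "score": random.random()}
                 for _ in range(10)]
        shown = {s["topic"] for s in rank_stories(probe)[:top_k]}
        for story in probe:
            if story["topic"] not in shown:
                suppressed[story["topic"]] += 1
    return suppressed.most_common()

if __name__ == "__main__":
    # A topic suppressed far more often than chance hints at an editorial
    # criterion baked into the black box.
    print(audit())
```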
BOB GARFIELD: Today, to confront a story that is flawed, we rely on truth squads and media critics to locate individual errors of omission and commission, and so forth. But you're actually envisioning being able to go into the central consciousness that creates a story and maybe head off any bias problems at creation.
NICK DIAKOPOULOS: I don't actually believe that bias is a problem that can be corrected. I think it's something that we can be aware of.
To the broader issue of using algorithms to articulate bias, there’s a very interesting project going on at the Data & Society Research Institute in New York, where they’re looking at using bots as almost like a heat-seeking missile against misinformation. So you can imagine a lot of people on Twitter are out there tweeting about how vaccines cause autism. Now, this is something that's been debunked in the, you know, mainstream scientific literature, yet the meme sort of persists online.
And so, what this researcher Tim Hwang is doing is thinking about how you could almost create a bot army to go out and recognize misinformation and then have the bot army suggest that they might be wrong and that they should look at some other news story on the subject or some other science on the subject and almost creating an immune system of algorithms.
Now, of course, this opens all kinds of other questions.
BOB GARFIELD: Oh, it certainly does [LAUGHS], not the least of which is that immune systems are subject to the mutation of viruses. So wouldn’t your bot armies of truth be confronted with perhaps larger bot armies of falsity?
NICK DIAKOPOULOS: Yeah, we really are sort of heading toward this Star Wars future, aren’t we, where [LAUGHS] wars are waged with bots, and perhaps the war of ideas will also be waged with bots. Look, we don't really know where this is all heading, and some of these ideas are still in the research labs. It's refreshing to me that people are experimenting with this and trying different things. Perhaps people will become hardened; perhaps there are other strategies that would actually get them to moderate their views, to some extent.
BOB GARFIELD: Nick, thank you so much.
NICK DIAKOPOULOS: Thanks for having me.
BOB GARFIELD: Nick Diakopoulos is a Tow Fellow at Columbia Journalism School.