BOB GARFIELD: This is On the Media. I'm Bob Garfield. A few months ago, an internal memo at Google yielded a surprise. The company was helping deploy artificial intelligence for the Pentagon's drone-imaging program. This did not sit well internally. Thousands of employees, including senior engineers, signed a petition demanding that Google cancel the contract. A dozen angry Googleteers even went so far as to resign from their plum jobs on principle. Kate Conger is a senior reporter at Gizmodo, where she covers technology and policy. Kate, welcome to the show.
KATE CONGER: Thank you for having me.
BOB GARFIELD: All right, let's begin with this Google drone initiative. What is Project Maven?
KATE CONGER: Project Maven is a pilot program that the Department of Defense launched last year. What they want to do basically is take all of this video footage that’s gathered by drones and run artificial intelligence against it to classify objects, to say, okay, this house is a house, this car is a car, this person is a person. The idea is that they're collecting so much footage they could never have a human analyst go through all of it and do that classification, so they're hoping that artificial intelligence can help support human analysts as they kind of go through this footage and glean actionable intelligence from it.
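To make that concrete, here is a minimal sketch, in Python, of what "running artificial intelligence against" video footage can look like: stepping through frames and asking an off-the-shelf image classifier what each frame appears to contain. Everything specific here (the drone.mp4 file name, the pretrained MobileNetV2 model, the one-frame-per-second sampling) is an assumption chosen for illustration, not a description of Project Maven's actual system.

```python
# Illustrative sketch only: frame-by-frame object classification of video.
# The video path, model choice, and sampling rate are hypothetical; this is
# not Project Maven's pipeline.
import cv2                      # OpenCV, for reading video frames
import numpy as np
import tensorflow as tf

# A generic pretrained classifier that ships with TensorFlow/Keras.
model = tf.keras.applications.MobileNetV2(weights="imagenet")

cap = cv2.VideoCapture("drone.mp4")          # hypothetical footage file
fps = cap.get(cv2.CAP_PROP_FPS) or 30        # fall back if FPS metadata is missing
frame_idx = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % int(fps) == 0:            # sample roughly one frame per second
        rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        x = tf.keras.applications.mobilenet_v2.preprocess_input(
            rgb.astype(np.float32)[np.newaxis, ...])
        preds = model.predict(x, verbose=0)
        top = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
        # Surface candidate labels (vehicles, buildings, and so on) for review.
        print(frame_idx, [(label, round(float(score), 2)) for _, label, score in top])
    frame_idx += 1

cap.release()
```

In practice, labels produced this way would only be a first pass; the "support" role Conger describes is a human analyst reviewing what the model flags.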
BOB GARFIELD: Why is it a concern of Google employees any more than any other military contract might be?
KATE CONGER: There’s a concern that once you start bringing computer decision making into this process, you're sort of entering a chain of events where eventually a computer can decide whether or not to take a human life and to initiate a drone strike.
BOB GARFIELD: We’re getting into the Dr. Strangelove scenario or Fail Safe where the computers are taking away human agency.
KATE CONGER: Yes, exactly.
BOB GARFIELD: And when [LAUGHS] the employees found out about it, more or less by accident, by the way, a lot of them freaked. When the protest petition was sent to management, what was the reaction?
KATE CONGER: You know, there's been a mix of things. There has been some defensiveness from management about Google's decision to work on Pentagon contracts. Another aspect of the response has been, you know, we're listening to employees, we're going to learn from this, and we're going to create some ethical guidelines for this project moving forward.
BOB GARFIELD: Now, wasn't there something about, hey, we sell the same technology to civilian customers, so what's the objection?
KATE CONGER: Right. So the underlying technology that Google is providing for Project Maven is something called TensorFlow, and it's open-source software. So you could use it, I could use it, anyone could use it. And so, that's been part of this, as well. If you make the software freely available, how can you say, oh, we don't want the Department of Defense using it?
I do think that that defense is a little bit of a red herring in this case because it's not as though the Department of Defense just went and downloaded the software off the internet without Google's knowledge or consent. Google has proactively engaged on this project. They've accepted a contract for this project, they've accepted millions of dollars for this project and they’ve provided technical support to help make this work for the Department of Defense.
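On the "freely available" point: TensorFlow really can be downloaded and run by anyone with pip install tensorflow. The toy example below, the standard beginner exercise of training a small classifier on the public MNIST handwritten-digit dataset, has nothing to do with any defense contract; it is a sketch included only to show how low the barrier to the underlying tooling is.

```python
# Anyone can download and run TensorFlow; nothing here is specific to Google's
# contract work. A toy model trained on the public MNIST digits dataset.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
print("test accuracy:", model.evaluate(x_test, y_test, verbose=0)[1])
```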
BOB GARFIELD: Amazon has sold Rekognition, its facial recognition software, not to the military but to law enforcement. Can you tell me what the misgivings are about that deal?
KATE CONGER: Yeah. Basically, Amazon is selling facial recognition software to local police departments, and it's sort of similar in intent to the software being used in Maven, right? It's the idea of looking at a bunch of video footage and classifying it very quickly and very effectively. And I think that the concern with that, again, is that, you know, it's happening without much transparency and that it's going to be deployed very widely here in cities in America, where people won't necessarily know that this facial recognition software is being used on them. And there are also concerns, particularly with facial recognition, around racial bias.
BOB GARFIELD: You go to a protest, you’re scanned by the cameras, even media cameras, and law enforcement can use this technology to figure out who exactly you are.
KATE CONGER: You go to a protest, sure, but maybe you also go to a coffee shop or you have some kind of crime committed against you and you go to law enforcement for help and you end up in a facial recognition database. One of the powers and one of the great dangers with using artificial intelligence in these contexts is that it speeds up analysis so much. Normally, if someone wanted to track you around a city, they might have to go to some pretty serious lengths to do that. They might have to assign a patrol officer to follow you, to use some kind of physical surveillance. And when you're talking about using artificial intelligence, it’s very easy to just track someone without a lot of human effort.
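The scale argument is easy to see in code. Once faces have been reduced to numeric embedding vectors, a step commercial systems automate, comparing one face against an entire enrolled database is a single vectorized operation. The sketch below uses random stand-in vectors and a made-up database size; it is a generic illustration of the arithmetic, not Amazon's Rekognition API.

```python
# Generic illustration of why automated face matching scales so easily.
# The embeddings here are random stand-ins; a real system would compute them
# from photos with a trained model.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(100_000, 128))        # one 128-dim vector per enrolled face
database /= np.linalg.norm(database, axis=1, keepdims=True)

query = rng.normal(size=128)                      # embedding of a face seen on camera
query /= np.linalg.norm(query)

# Cosine similarity of the query against every enrolled face, in one step.
scores = database @ query
best = int(np.argmax(scores))
print(f"closest match: record {best}, similarity {scores[best]:.3f}")
```

Checking one face against a hundred thousand enrolled records this way takes milliseconds, which is the contrast Conger draws with assigning an officer to physically follow someone.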
BOB GARFIELD: All right, that's Rekognition, Amazon's product. There is a bidding process going on now for something called the Joint Enterprise Defense Infrastructure, JEDI. What's that?
KATE CONGER: So JEDI is a major cloud contract that the Pentagon is putting up for bid. They want to contract with a single cloud services provider for the entire Department of Defense. That would include things like email. That would also include things like AI. There is a petition going around from the Tech Workers Coalition asking Amazon, IBM, and Google not to compete for this contract. The concerns there are somewhat similar to the employee concerns around Maven but, you know, JEDI is a much broader contract. Part of this contract is whether or not the Pentagon has reliable email. The argument to say, we don't want to insert AI into the kill chain process, is a little bit clearer than to say, you know, we don't want the military to have access to a good email service.
BOB GARFIELD: The tech workers at Google, their statement, in part, reads, “We believe that tech companies should not be in the business of war and that we as tech workers must adopt binding ethical standards for the use of AI that will let us build the world we believe in. Google should break its contract with the Department of Defense.” Another petition, launched by the Tech Workers Coalition, which represents workers from across the industry, said, quote, “We represent a growing network of tech workers who commit to never just follow orders but to hold ourselves, each other and the industry accountable.”
You never saw something like that from the employees of General Electric or General Dynamics. [LAUGHS] What is it about the tech business that has invited this much, let’s say, moral scrutiny?
KATE CONGER: I think that the tech industry right now is going through a really intense phase of reckoning. Over the last year or so, you know, every major Silicon Valley company that I cover has gone through some kind of awful scandal. And so, I think a lot of employees are starting to question, what are we creating here, what are we making here? When you create this platform you think, oh, look at this amazing product I made, it's going to be so useful for people, it's going to fundamentally make the world a better place, right? And then the Internet Research Agency comes along and runs all of these fake political ads and does all of this work to disrupt democracy and you're like, oh, well, I've sort of made it possible for something like that to happen.
There is this really strong current of concern right now, of trying to think more ethically, think more carefully about the platforms that we're creating and how they're going to be used and abused in the future.
BOB GARFIELD: It seems to me that underlying this entire conversation is the idea that we are, indeed, gradually sliding down a slippery slope to a soulless totalitarian dystopia through these tech tools. Please explain to me why I shouldn't be completely in despair.
KATE CONGER: [PAUSE] I mean, I wish I could. [LAUGHS] I think that there is a lot of concern within the industry right now about where we’re going. Every time I talk to people who are experts in the field of AI, they keep telling me, this is coming much, much faster than you think. They're saying five years, ten years. Like, this is not far down the road at all, to have a society where AI is really dominant. And so, there's a lot of frantic and urgent conversation happening right now in the academic community and in the technology community about what kind of rules and policies do we want to set up around the use of AI because if we don't make those decisions now, you know, the clock is running out.
You know, there are conversations going on right now, today, about whether or not AI should have rights once it becomes sentient, which sounds very crazy to laypeople like me. It's like, why, what, how can we talk about machine learning having rights the way you would think about a person having rights? But, you know, there are people in this industry who feel like this is coming very quickly and that we need to be making these decisions today.
BOB GARFIELD: Kate, thank you very much.
KATE CONGER: Thank you.
BOB GARFIELD: Kate Conger is a senior reporter at Gizmodo where she covers technology and policy.
[MUSIC UP & UNDER]