The Role of AI in the Russia-Ukraine War
![](https://media.wnyc.org/i/800/0/l/85/2022/03/RussiaUkraineWar_3Y71W2l.jpg)
(Efrem Lukatsky / AP Photo)
Brian Lehrer: When Russia invaded Ukraine, one of the questions it raised was whether this war would prove to be a testing ground for artificial intelligence, for better or for worse. Beyond deepfake technology and Russia's misinformation assault, AI and machine learning can be found in weaponry and intelligence on both sides. Russia has flaunted, for example, a powerful drone that, one of my next guests writes, can identify targets using artificial intelligence.
Meanwhile, Ukraine has put controversial facial recognition software to use. We'll spend a few minutes now digging a bit deeper into the role of AI in this conflict and its implications beyond it. With me now: Will Knight, senior writer for Wired covering artificial intelligence, and Gregory Allen, director of the AI Governance Project at the Center for Strategic and International Studies. Hi, Will. Hi, Gregory. Welcome to WNYC.
Will Knight: Hi, there.
Gregory Allen: Hi. Great to be with you.
Brian Lehrer: Will, I want to start with one of your recent articles, called "As Russia Plots Its Next Move, an AI Listens to the Chatter." As I understand it, Russian soldiers in Ukraine used an unencrypted channel, and though you write it isn't clear whether Ukrainian troops intercepted the communication, somehow the radio transmission was captured and then transcribed, translated, and analyzed using several artificial intelligence algorithms developed by an American company called Primer. I guess my question is, since the sum of those radio communications equates to vast amounts of data, does that mean these models and algorithms need to be refined and reworked basically on the fly, on a war footing?
Will Knight: Absolutely, yes. One of the things we're seeing with the situation in Ukraine is, as you say, technologies that have come out of the private sector, artificial intelligence algorithms, being used to comb through a lot of open source information. Sometimes this is videos and photographs posted to social media. In this case, it was the Russians, for whatever reason, using unencrypted communications. You're exactly right. As Primer will tell you, they're trying to develop and market this technology to intelligence agencies, but the idea would be that the military would update those algorithms on the fly, have them recognize different elements within conversations, and be able to comb through a morass of information that a normal intelligence officer wouldn't be able to handle.
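[Editor's note: Primer's production system is proprietary, so what follows is only a minimal sketch of the kind of pipeline Will describes, speech-to-text, then machine translation, then entity extraction, built from open-source components. The model choices and the audio filename are illustrative assumptions, not a description of Primer's actual stack.]

```python
# A minimal sketch of an intercept-analysis pipeline: transcribe captured
# Russian-language audio, translate it to English, then pull out the names,
# places, and organizations an analyst would search for. All model choices
# and the audio file are illustrative assumptions.
from transformers import pipeline

# 1. Speech-to-text on the captured radio audio (hypothetical file).
transcriber = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/wav2vec2-large-xlsr-53-russian",
)
russian_text = transcriber("intercepted_transmission.wav")["text"]

# 2. Machine translation from Russian to English.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-en")
english_text = translator(russian_text)[0]["translation_text"]

# 3. Named-entity recognition to surface people, places, and organizations,
# so an analyst can triage a morass of chatter instead of reading all of it.
ner = pipeline("ner", aggregation_strategy="simple")
for entity in ner(english_text):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))
```

Updating such a system "on the fly" would then mean fine-tuning these models on newly collected transmissions, for example to teach the entity tagger new unit names or equipment terms as they appear in the conflict.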
Brian Lehrer: For people whom even my question left in the dust, for whom this is all techno gibberish, what are the implications of this, either for war or for peace?
Will Knight: Oh, well, that's a great question. I think one of the big implications is that we are, in some sense, heading towards a brave new world where a lot of new technologies are going to shake up the way wars are fought. We're starting to see glimpses of that, but I think there are bigger shifts happening. Whether it's these AI-enabled drones or facial recognition, there are a lot of technologies that can be developed much more quickly, that are going to be more accessible, and that are going to change the nature of weapons technology. They may also change some of the dynamics of which countries have the most military power.
Brian Lehrer: On the other side, Gregory, Reuters reported that Ukraine has employed facial recognition software developed by the company Clearview. What is Clearview AI and how is Ukraine using its technology? Facial recognition software is controversial almost everywhere, but Ukraine, I guess, has to do what it can to defend against Russia, whose offensive capabilities vastly outmatch its own. What would you say about facial recognition, what it is and isn't in war, or the risks and responsibilities of using it even in a combat situation?
Gregory Allen: Well, I think Clearview, to begin, is a provider of AI-enabled services, specifically services related to facial recognition. If you use their platform and you provide imagery, whether through a device that they provide or through just a regular internet connection, they can provide you their best guess, and it's an AI-generated guess, as to the actual identity of the face that you provide them.
The Ukrainian Armed Forces have been using services provided by Clearview, most notably, as you mentioned, per some great reporting by Reuters and others, to establish the identities of Russian soldiers, whether captured, in transit, or even deceased, by taking a picture of the individual's corpse. In the case of some of the corpses, Ukraine has been seeking to use this as part of an information operation, in order to pierce Russia's propaganda and control of its own media ecosystem by notifying the family members of the deceased Russian soldier about that death, because this is one of the pieces of information most closely guarded by the Russian military: the rates of casualties it is experiencing and who is dying.
That's one of the things that is perhaps most remarkable about this war: the degree to which, even in a really closed information ecosystem like Russia's, especially after the beginning of the war, there's still this opportunity for communication to occur, not just between individuals but between governments and the citizens of other countries. Ukraine is taking advantage of AI's ability to identify people who don't provide identifying credentials and to link that identity to social media accounts or other ways of getting in touch with people back in Russia.
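[Editor's note: Clearview's system is proprietary, but the core mechanic Gregory describes, turning a face into a numerical embedding and matching it against a database of known faces, can be sketched with the open-source face_recognition library. The filenames and the 0.6 threshold below are illustrative, and Clearview's real index spans billions of scraped images rather than a single reference photo.]

```python
# A minimal sketch of embedding-based face identification, the core idea
# behind services like Clearview's. Filenames are placeholders; each photo
# is assumed to contain one detectable face.
import face_recognition

# Encode the unknown face (e.g., from a photo supplied by a user) into a
# 128-dimensional embedding.
unknown_image = face_recognition.load_image_file("unknown_face.jpg")
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# Encode a reference photo whose identity is already known. At scale this
# would be a pre-built index of many embeddings, not one image.
known_image = face_recognition.load_image_file("known_face.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Smaller distance between embeddings means a likelier match; 0.6 is the
# library's conventional threshold, not a universal standard.
distance = face_recognition.face_distance([known_encoding], unknown_encoding)[0]
print("match" if distance < 0.6 else "no match", f"(distance={distance:.2f})")
```

The risks Brian raises next follow directly from this design: the output is a distance score, not a certainty, and the threshold that separates "match" from "no match" is a judgment call.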
Brian Lehrer: I guess the opportunities for Ukraine are clear from that. What about the risks? When facial recognition is controversial in this country, it's frequently because it's imperfect. In many cases it doesn't recognize darker-toned faces as well as lighter-toned faces. People who might not have committed a crime can become suspects in a crime through the way facial recognition technology is deployed, things like that.
In war, I could imagine there could be life-and-death consequences for the innocent as well. Even without that, we have instances, which of course you both know about, where drone strikes, sometimes by the United States, are believed to be against military targets and wind up killing a lot of innocent civilians. What about the risks of facial recognition in war, in these or other respects, Gregory?
Gregory Allen: Well, in any case, the risks of adopting a technology depend upon the application you're going to use it for. The risks of Netflix adopting AI to generate recommendations for what movie you should watch next are considerably lower stakes than the risks of adopting AI for safety-critical or use-of-force applications. I am not aware of the Ukrainian military using facial recognition specifically in support of life-and-death uses such as targeting. They are obviously interested in using it as part of their domestic surveillance and security networks, because they want to understand whether or not Russia is seeking to place espionage agents or other types of infiltrators in the country.
Those are real risks, I would say, in terms of the adoption of this technology. The accuracy also depends upon the circumstances of its implementation. Is the photo of the individual very close up and very well lit, or very far away in a very dark situation? Obviously, Ukraine is weighing the other side of the risk equation, which is that their cities are falling and there are civilian populations being directly targeted by the military. None of this is especially easy to make decisions on, and obviously the Ukrainian side is in life-and-death circumstances as well.
Brian Lehrer: Listeners, we can take a few questions, if anybody has them, about AI, artificial intelligence, in Russia's war in Ukraine and beyond, for Will Knight, senior writer for Wired covering artificial intelligence, and Gregory Allen, director of the AI Governance Project at the Center for Strategic and International Studies, the policy research nonprofit. 212-433-WNYC, 433-9692, or tweet @BrianLehrer. Will, you reported that images on Twitter and Telegram coming from the invasion appear to show a lethal Russian drone in Ukraine, a weapon that utilizes artificial intelligence. Is this different from the drones many of our listeners have heard about being used in warfare for a fair number of years now?
Will Knight: Yes, it is different in the sense that these are much lower-cost drones that can incorporate newer technologies such as computer vision capabilities. To reiterate what Greg was saying, it isn't clear that this drone is using that to target individuals or even to target specific vehicles. However, the manufacturers boast that it can do that. This is the same kind of technology that's in your iPhone, which, if you search for a particular object, will identify it. These algorithms that are fed a lot of examples and can then recognize things are finding their way into these much lower-cost drones.
They are fundamentally different. One of the things we're seeing here is that Ukraine is able to punch above its weight using much cheaper technology, including some of these drones. The reality is that the manufacturers are all working on projects that boast more and more artificial intelligence. The question is, when would people start using more of those capabilities to, for example, allow a drone to identify a particular target? Again, I should stress that one of the real questions is how reliable that is. I know that a lot of people in military circles are particularly concerned about that.
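[Editor's note: No drone maker's onboard software is public, but the commodity computer vision Will describes, an algorithm fed many labeled examples that then recognizes objects in camera frames, can be sketched with an off-the-shelf torchvision detector. The image filename and the confidence cutoff are illustrative assumptions.]

```python
# A minimal sketch of commodity object recognition of the kind now cheap
# enough to run on small drones. This is an off-the-shelf detector, not any
# manufacturer's actual targeting software.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

# "drone_frame.jpg" stands in for a single frame from an onboard camera.
frame = read_image("drone_frame.jpg")
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Report what the model believes it sees. The reliability of exactly these
# confidence scores is the concern raised about any targeting use.
categories = weights.meta["categories"]
for label, score in zip(detections["labels"], detections["scores"]):
    if float(score) > 0.8:
        print(categories[int(label)], f"{float(score):.2f}")
```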
Brian Lehrer: Are we talking about autonomous weapons systems that can be deployed with humans totally out of the equation? When we've talked about drones in the past being used by the US in Afghanistan, for example, there were at least drone operators back in the United States. Some of them, as we've discussed on this show and as other media reports have noted, have suffered tremendous amounts of stress and anxiety, sitting in front of a computer screen somewhere in America deciding who's going to live and who dies based on what they're seeing; then they launch the strike from here and, half a world away, it lands on some people, hopefully the right people or the right targets. Are we getting even beyond that, where there's no human involved?
Will Knight: That's somewhat unclear. In the case of the US, absolutely not. The US has said that it will always keep a person in the loop, as it were. It's unclear elsewhere, but it seems that some companies are developing technology with that capability. That does obviously raise a lot of questions. Maybe it would take the stress away from the person operating it, but these algorithms are not entirely reliable and they can often work in opaque ways. Whether that is a good idea is very, very questionable.
One of the issues that we are going to come up against, I believe, is that if you have a lot of drones operating in a swarm at very high speeds, it becomes more difficult for a person to be part of that loop. We're getting into some issues that are going to be new. It's important to remember that there's a lot of autonomy already in military systems, so it's not entirely new, but these systems are fundamentally different, for sure.
Brian Lehrer: Gregory, for you at the Center for Strategic and International Studies, do there need to be new laws of war developed or written that take into account the most modern technologies, like some of the ones we've been talking about, to protect the innocent?
Gregory Allen: Sure. Here I should mention that I was previously the director of strategy and policy at the US Department of Defense's Joint Artificial Intelligence Center. These are the types of issues that I was wrestling with every day in that role. Here, I think the United States actually comes out pretty well in this story, which is to say that the United States first adopted a policy specifically related to autonomy in weapons systems over a decade ago. Autonomy as a capability depends on what function you are automating. Autopilots have been a part of aircraft, to some greater or lesser extent, for well over half a century.
What's really novel, and what is identified in US policy, is the selection and engagement of targets without further intervention by a human operator. That's really where we see the novel capability. Actually, the US does have autonomous weapons systems in its inventory, but they are generally exclusively defensive, think missile defense types of applications. They are not the sort of thing where, "Hey, in this crowded urban environment, are we going to have an autonomous capability that could actually be taking human lives?"
There are militaries around the world that are headed in that direction, and Russia is among them. For example, there's an Israeli weapons manufacturer that makes a drone that can loiter, that is, stay airborne for a long period of time without guidance by a human operator. Then, if an enemy turns on a certain type of radar system that is in its library, and only adversaries use these types of radar systems, it will actually home in on and destroy the source of that radar transmission.
That capability is an autonomous weapons system, but it is not using machine learning, one of the subcategories of AI that's been responsible for most of the progress in facial recognition and all of these other sorts of interesting capabilities. I think the second aspect of this story that is really new is something that Will mentioned briefly: the fact that commercial capabilities are so relevant to so many different types of military applications, in a way where the cost and complexity of fielding useful military capabilities has gone down.
In the wars in Iraq and Afghanistan, one of the things that happened over the decades of fighting there was the rise of commercial drones. Even though these are remotely piloted, because they were so cheap and so capable, they gave actors like insurgents in Afghanistan access to a capability that would normally be out of their financial reach and beyond their engineering sophistication: airborne reconnaissance. The Taliban was never in a position to invent its own drones, but the fact that it could just buy them on the commercial market and have them be relevant to military capabilities is a really interesting transition.
The point about the use of artificial intelligence to intercept, automatically transcribe, and automatically translate Russian military communications is also an example of commercial artificial intelligence capabilities really expanding the scope and capability of military actors who wouldn't normally have that. The NSA in the United States has had that type of capability, but it does so with thousands and thousands of trained analysts and translators. AI is making a less perfect version of that capability available to a much broader range of actors.
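[Editor's note: Gregory's earlier distinction, autonomy without machine learning, can be made concrete. The loitering munition he describes matches detected radar emissions against a hand-built library of known signatures; no learned model is involved. The following toy sketch uses entirely hypothetical signature names and numbers.]

```python
# A toy sketch of rule-based autonomy: match a detected radar emission
# against a hand-curated library of signatures. No machine learning is
# involved; every name and number here is hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadarSignature:
    name: str
    frequency_ghz: float   # emitter's center frequency
    pulse_width_us: float  # pulse width in microseconds

# Hand-built library of emitter types assumed to be used only by adversaries.
THREAT_LIBRARY = [
    RadarSignature("hypothetical-SAM-radar-A", 5.6, 1.2),
    RadarSignature("hypothetical-SAM-radar-B", 9.4, 0.4),
]

def match_emission(freq_ghz: float, pulse_us: float,
                   tolerance: float = 0.05) -> Optional[RadarSignature]:
    """Return the library entry a detected emission matches, if any."""
    for sig in THREAT_LIBRARY:
        if (abs(freq_ghz - sig.frequency_ghz) / sig.frequency_ghz <= tolerance
                and abs(pulse_us - sig.pulse_width_us) / sig.pulse_width_us <= tolerance):
            return sig
    return None

# While loitering, each detected emission is checked against the library.
hit = match_emission(freq_ghz=5.62, pulse_us=1.21)
if hit is not None:
    print(f"Emission matches {hit.name}; engagement logic would trigger here.")
```

By contrast, the machine-learning systems discussed elsewhere in this conversation learn their "library" statistically from examples, which is part of what makes them both more flexible and harder to audit.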
Brian Lehrer: All right. Bottom line, Will, does AI benefit Russia or benefit Ukraine more in the current conflict?
Will Knight: I think the evidence suggests that it's benefiting Ukraine much more, because just as a lot of hardware has been given by the US, a lot of AI technology has been given by the US and its allies to Ukraine, and it certainly seems to be benefiting Ukraine so far.
Brian Lehrer: Will Knight, senior writer for Wired covering artificial intelligence, and Gregory Allen, director of the AI Governance Project at the Center for Strategic and International Studies. Thank you both for the conversation.
Will Knight: Thank you.
Gregory Allen: Thank you.
Copyright © 2022 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.