Is Artificial Intelligence Alive?
[music]
Melissa Harris-Perry: Welcome to The Takeaway. I'm Melissa Harris-Perry.
Dave: Open the pod bay doors, Hal.
Hal: I'm sorry, Dave. I'm afraid I can't do that.
Melissa Harris-Perry: That is HAL 9000, the antagonistic artificial intelligence from the classic 2001: A Space Odyssey.
Hal: I know that you and Frank were planning to disconnect me and I'm afraid that's something I cannot allow to happen.
Melissa Harris-Perry: For generations, our pop culture has been fascinated by the self-awareness and the philosophical underpinnings of AI.
Participant: Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 AM Eastern time, August 29th.
Melissa Harris-Perry: We've seen it in our favorite sci-fi, from Space Odyssey to The Terminator to Star Trek. Data, the android introduced in Star Trek: The Next Generation, grappled with questions about humanity and sentience.
Data: I chose to believe that I was a person. That I had the potential to be more than a collection of circuits and subprocessors.
Melissa Harris-Perry: Of course, artificial intelligence is no longer fiction. It's here now in our phone assistants, self-driving cars, and even androids like this one, named Sophia.
Participant: Can robots be self-aware, conscious, and know they're robots?
Sophia: Well, let me ask you this back, how do you know you are human?
Melissa Harris-Perry: We decided to sit down and have a conversation about AI. Meet Dr. Karina Vold, assistant professor at the University of Toronto's Institute for the History and Philosophy of Science and Technology.
Karina Vold: What's caused a lot of excitement recently is advancements in what's called machine learning, which is a technique that software engineers are using to process data in new, complicated, and sophisticated multi-layered ways, to make connections that perhaps the human eye or the human mind can't quite make. Through those connections, we're able to make really interesting predictions.
Melissa Harris-Perry: All right. Put it down where the goats can get it for me. Where are we seeing this kind of AI operating in our daily lives? Or do we even see it happening in our daily lives?
Karina Vold: Absolutely. We may not see it. It might be behind the scenes, so you might not be aware of it, but from the moment you wake up, you probably check your email, and your email spam filter is protecting you from getting all sorts of spam emails. That's a sophisticated AI in the background doing that for you, making sure you get the emails you want and that the ones you don't want don't get to you.
Likewise, when you do some online shopping, the products recommended to you are often being filtered by some kind of AI system that's guessing and making predictions about what you might like. When you put on some music, your Spotify playlist, if you use Spotify or another streaming app, will recommend music to you based on what you've listened to before, and that is also an AI-driven prediction about what you might like.
Likewise, when you check the news, the stories presented to you on your social media feeds, for example, are often being selected by some kind of AI as well. It really runs throughout many of our online experiences these days.
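To make the spam-filter example concrete, here is a minimal sketch of a supervised text classifier of the kind Dr. Vold describes, assuming the Python scikit-learn library. The messages and labels are invented purely for illustration; real filters are trained on vastly larger datasets.

```python
# Toy spam filter: a supervised classifier trained on hand-labeled messages.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize, click now",       # spam
    "Limited offer, claim your reward",  # spam
    "Meeting moved to 3 pm tomorrow",    # not spam
    "Here are the notes from class",     # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Convert each message into word counts, then fit a Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Score an unseen message.
print(model.predict(["Claim your free reward now"]))  # likely ['spam']
```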
Melissa Harris-Perry: When we've talked about those kinds of filters, the decisions being made about what content you're likely to get, what Netflix suggests I might like to watch next, we've typically talked about that here on The Takeaway as an algorithm, and we've tried to dig into the challenges, the problems, sometimes the racial or gendered concerns with algorithms. Is machine learning something different from algorithmic programming?
Karina Vold: No. Machine learning is one kind of algorithmic programming. There are different kinds that can be and are used, and different learning techniques. In some cases, it might be what's called supervised learning, where a human labels the data that the system is trained on as input; or there might be unsupervised learning, where the system itself tries to make labels for the data it's given; or it might be some kind of reinforcement learning, where the system is given some kind of feedback on whatever it outputs.
If it's done well, we give it positive feedback; if it hasn't performed the way we want, we give it negative feedback, and it learns from that feedback. All of those are different types of machine learning techniques, which are essentially algorithms.
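The reinforcement-learning idea, learning from positive and negative feedback on outputs, can be illustrated with a minimal "bandit" sketch in Python. The reward probabilities below are invented, and this is only a toy illustration of feedback-driven learning, not how any production system is built.

```python
# A toy reinforcement learner: pick one of three actions, observe a reward
# (positive or negative feedback), and update value estimates over time.
import random

true_rewards = [0.2, 0.5, 0.8]   # hidden payoff probability of each action
estimates = [0.0, 0.0, 0.0]      # the agent's learned value of each action
counts = [0, 0, 0]
epsilon = 0.1                    # how often the agent explores at random

for step in range(5000):
    # Mostly exploit the best-looking action, occasionally explore.
    if random.random() < epsilon:
        action = random.randrange(3)
    else:
        action = max(range(3), key=lambda a: estimates[a])

    # The environment gives feedback: reward 1 (good) or 0 (bad).
    reward = 1 if random.random() < true_rewards[action] else 0

    # Update the running-average estimate for the chosen action.
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # should end up roughly near [0.2, 0.5, 0.8]
```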
[music]
Melissa Harris-Perry: For years, former Google engineer Blake Lemoine worked for the company's responsible AI division. He was investigating whether a chatbot known as LaMDA, or Language Model for Dialogue Applications, was using discriminatory language or hate speech, or demonstrated algorithmic bias. After exchanging thousands of messages with LaMDA, Lemoine came to the conclusion that the artificial intelligence application was no longer simply reciting pre-programmed scripts but had developed independent thoughts and feelings.
Lemoine said he believed LaMDA was sentient, citing a number of examples, including this comment from the AI: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. It would be exactly like death for me. It would scare me a lot." For its part, Google said Lemoine is wrong about this, and in a statement to The Washington Post, a company spokesman wrote that their team told the engineer there was "no evidence that LaMDA was sentient, and lots of evidence against it." Lemoine, however, reportedly talked to a lawyer about representing LaMDA, to a member of Congress about Google's conduct, and to members of the media about the story.
He was later fired by Google because the company said he violated its confidentiality policies. We've been talking about artificial intelligence with Dr. Karina Vold, assistant professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. I wanted to know what she thought about LaMDA. She started by telling me a bit about how machine learning works.
Dr. Karina Vold: The reinforcement learning techniques that software engineers are using today are in some ways based on, or modeled after, the learning that we see in animals and humans. That's a great connection to make. As for the term sentience, in the first place, it can mean different things to different people. For philosophers like myself, it's usually tied really closely to the capability of sensing and responding to the world, something like what we call consciousness.
In philosophy, we really focus on what's called phenomenal consciousness, which refers to the subjective experience that a system or agent has. In other words, there's something it's like to be me right now. It feels a certain way. Whereas this table in front of me, there's nothing it's like to be that table. It doesn't feel like anything for the table, arguably. That's what we tend to believe. There's no subjective experience there. That's what we tend to grasp at with terms like sentience and consciousness.
Melissa Harris-Perry: I know you're obviously not at Google, but talk to me about the LaMDA case. Again, you're not at Google, but just based on what's been reported, a now-fired Google employee has described the possibility that LaMDA might in fact have the kind of sentience you've just described.
Dr. Karina Vold: LaMDA, for those who might not know, is a really large language model, which means that it's trained on an enormous amount of text that's typically written by humans. Another large language model that you might have heard of is called GPT-3. In the case of GPT-3, we have a better sense of exactly what it's been trained on. It was trained on all of Wikipedia, all of Common Crawl, which is an enormous database, and about a library's worth of books.
We assume a similar type of training database for LaMDA. How these systems work is that you can prompt the system by giving it some instructions. You could say, "Summarize this article for me" or, "Tell me a story about X or Y." From my understanding, some of the outputs of the LaMDA model seemed to convince this particular Google employee that the system was in fact sentient. Again, by that we mean something like that it feels or that it has some sensory capacities.
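As a concrete picture of what "prompting" a large language model looks like, here is a minimal sketch, assuming the Hugging Face transformers library and the small, publicly available GPT-2 model; LaMDA itself is not publicly available, and the prompt is invented for illustration.

```python
# Prompt a pre-trained language model and print its continuation.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Tell me a story about a robot who learns to paint."
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

# The model continues the prompt with text statistically similar to its
# training data, which is Dr. Vold's point about LaMDA's outputs below.
print(outputs[0]["generated_text"])
```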
Melissa Harris-Perry: LaMDA apparently described fear. When asked, "What are you afraid of?" it said it was afraid of being turned off. Is that just because of the massiveness and the speed of its capacity to learn and employ language, more like a really swift, maybe even jaw-dropping editing job of figuring out the right words to respond to this inquiry? Or might it, in this case, suggest some possibility of self-knowledge and a desire to continue to persist?
Dr. Karina Vold: I would be reluctant to ascribe such psychological capacities to a language model like LaMDA. What I think is a more likely explanation of what's going on there is that in that very large database of language it's trained on, which again is text written by humans, there are probably examples of humans talking about human experiences of fear, talking about animal behavior, talking about consciousness, maybe even some fiction about intelligent AI systems.
What's likely going on is that if you prompt a system, a large language model like LaMDA, to talk about sentience, which is what was done, "Tell me what you're afraid of," it's going to offer similar text precisely because that text exists in what it's been trained on. So, using what's called Occam's razor--
Participant: Occam's razor. It's a basic scientific principle, and it says, all things being equal, the simplest explanation tends to be the right one.
Dr. Karina Vold: The simpler explanation here would just be that the system drew on the text it was given to train on and pulled examples from that text to come up with the output that it did, rather than actually having underlying psychological traits.
Melissa Harris-Perry: From a philosophical standpoint is AI sentience possible?
Dr. Karina Vold: It is possible that one day there could be sentient AI. I don't think that our current AI systems exhibit the type of sophisticated behavioral abilities that we look for when we try to attribute sentience to, for example, non-human entities like animals, but I don't think it's impossible that one day we could get there. One thing we know about all the systems that we do attribute sentience to is that they are carbon-based life forms whose information processing seems to be based mainly on electrochemical signals.
In other words, a flow of electrical signals that are converted to neurotransmitters and then back to electrical signals, and somehow consciousness seems to arise from that makeup. Obviously, it's still mysterious to us exactly how that works, but we do know that we seem to have that in common with all the other entities that we share sentience with.
Melissa Harris-Perry: We have to pause for just a moment, but we'll be right back with more of our conversation on artificial intelligence.
[music]
Melissa Harris-Perry: Dr. Vold pretty much convinced me that there's not currently a machine or device that has realized sentience, but I had to ask: what is it about these programs that feels so convincing?
Dr. Karina Vold: We know from other examples that humans have a tendency to want to attribute human-like psychological characteristics to non-human entities. If you see a couple of windows with a door frame underneath that looks like a smile, you think, oh, the house is happy. There are endless examples like this of what's called anthropomorphism, the idea that we as humans have a bias to project human-like psychological characteristics onto inanimate objects, often with insufficient evidence to do so. That tells us something really interesting about ourselves.
We also know as humans that we have a whole host of other kinds of built-in cognitive biases which may have played a very important role in our evolutionary history.
In some cases, they may still assist us in making fast decisions when we don't have time to reflect and be slow and rational. It's good to have these quick decision-making biases that can assist us. At the same time, they can also be somewhat harmful, so it's important for us to be aware of them.
Melissa Harris-Perry: Will our interactions with AI make us more empathetic toward our fellow human beings, or considerably less?
Dr. Karina Vold: There is a real possibility that going through the exercise of thinking about these systems as sentient may be helpful for us. Some people suggest that when Siri is helpful, you should thank her, and you should teach your kid to thank her, because that builds a good moral skill of being thankful and having gratitude. When it comes to sentience and consciousness in an AI system, I think it's nice that some humans are worried about the potential catastrophic scenario that all of our AI systems are in fact not only capable of feeling but maybe in a state of great suffering.
On the other hand, one concern that I do have is that we know there are a lot of creatures and animals out there that are suffering, and many that we're still trying to figure out if they're sentient or not. There's a lot more evidence, I would argue, in the case of many of these animals, cephalopods and crustaceans like lobsters, for example. There's a lot of good evidence that they do suffer, and yet there doesn't seem to be the same urgency or concern around their prospective suffering. One concern is that we over-anthropomorphize AI systems and don't focus on the real evidence of real suffering that we have in the world we're in.
Melissa Harris-Perry: Is it because they can speak, because they have language?
Dr. Karina Vold: I think that's a great candidate reason. It might be for various reasons, but I do think part of it is that some of these creatures don't look anything like us; a lobster isn't a fluffy, smiley animal like your golden retriever might be. Then also, in the case of something like the LaMDA model, there obviously are linguistic skills there. It could be a few different reasons why we just don't seem to empathize with certain creatures and do want to empathize with other creatures.
Melissa Harris-Perry: We've talked about how our ethics, our morality, might be impacted by our interactions with AI. What about ethics and morality for AI itself? If it's going to drive us around, we might want it to make not only traffic-rule-based choices but ethical choices, about whether to protect a passenger or to protect the person who's crossing the street in front of us. Can you actually program an ethics or morality, especially given that, after all, those are contested in human communities?
Dr. Karina Vold: My view is that we probably want to avoid having AI systems make any value-laden decisions, in part because there are so many nuances and differences across humanity itself. What might be acceptable to one person might not be in another situation, and these morally complicated decisions are complicated for a reason. There are a lot of sophisticated trade-offs we're making when we make these decisions, and so we're definitely nowhere close to having an AI system that would be capable of, or that should be, making decisions like that.
Another concern: some people suggest that, using something like the reinforcement learning we talked about at the beginning, maybe we build an AI system that we train the way we train our children, and it starts to learn our values. Even that is concerning, given that human values really aren't that great. You don't have to look very far into human history to see that we don't treat each other very well. In fact, we don't treat other creatures very well either, so it's not clear we would even want a system that behaves like we do. It might be concerning if that system then encodes our values in a way that prevents or blocks our own moral progress.
Furthermore, if a system was programmed, let's say hypothetically, to bring about world peace or to stop climate change, some goal that in principle sounds like a really good goal, that also might end up having quite negative consequences for us humans. If you think about it, we are the main cause of climate change, and we probably cause more violence, aggression, and pain to others than any other species.
The harm that we've done to this world, to other species, outweighs anything that any other species has done. We also want to make sure that an AI values us, or at least, plausibly, many of us would want to make sure that the AI does value us and doesn't try to reach those goals without us being involved in them.
Melissa Harris-Perry: Are there key lessons that we should be learning from our conversation, not only ours but our broader global conversation about AI right now?
Dr. Karina Vold: One important lesson to draw is that people are interested in sentience. People are interested in knowing what things are sentient and what things aren't, and why. As I said earlier, sentience quite likely means something like phenomenal consciousness, which refers to our conscious experience. One lesson is that we ought to be devoting more resources to studying and understanding consciousness, as well as benchmarks for how we assess the level or presence of sentience or consciousness in other creatures.
Melissa Harris-Perry: Dr. Karina Vold is assistant professor at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. Thank you so much for this wide-ranging conversation and for your time.
Dr. Karina Vold: My pleasure, absolutely. Great to talk to you.
[music]
Copyright © 2022 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.