The Creator of ChatGPT on the Rise of Artificial Intelligence
[music]
David Remnick: Every technological revolution has frightened people, particularly people who've got something to lose. When Gutenberg began printing with movable type, religious and political authorities wondered how to confront a population that had new access to information and arguments to challenge their authority. It's not surprising that artificial intelligence is now causing grave concerns because it will affect every one of us.
Speaker 2: Perhaps the biggest nightmare is the looming new industrial revolution, the displacement of millions of workers, the loss of huge numbers of jobs. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment.
David Remnick: What is surprising is that some of the very same people who have been racing to develop AI now seem deeply alarmed at how far they've come. In March, not long after ChatGPT began captivating and terrifying us all at once, over a thousand technology experts signed an open letter calling for a six-month moratorium on certain AI research, and some of those experts say that unchecked AI could be as dangerous to our collective future as nuclear weaponry or pandemics. We're going to talk today about AI. How could it change the world and how concerned should we be? I'll start with Sam Altman, the CEO of OpenAI, the company that's been releasing ever more sophisticated versions of ChatGPT.
Years ago, when the internet was in its earliest stages, we were surrounded, or at least I felt surrounded, by a sense of internet euphoria.
Anyone who raised doubts about it was considered a Luddite, or ignorant, or a charmingly fearful person past his sell-by date. Now, with the rise of AI, we're hearing alarm from many quarters. What I want to try to accomplish here is to have a rational discussion that at once gives a factual picture of where we are, where you think we're going, and at the same time airs out the concerns. Let's just start with the most basic thing. You've been working on AI for nearly a decade. How did you get into it and what were your expectations?
Sam Altman: This was what I wanted to work on from when I was a little kid. I was a very nerdy kid. I was very into sci-fi. I never dreamed I'd actually get to work on it, but then I went to school and studied AI. One of the memorable things I was told was that the only surefire way to have a bad career in AI is to work on neural networks. We have all these ideas, but this is the one that we've proven doesn't work. In 2012, a paper was put out, and one of its authors was my co-founder, Ilya Sutskever. It described a neural network that performed extremely well in a competition to categorize images. That was amazing to me, given that I had assumed this thing wasn't going to work. After that, a company called DeepMind did something with beating the world champion at Go.
At the end of 2015, we started OpenAI. One of our first big projects was playing this computer game called Dota 2. I got to watch that neural network, that effort, that system grow up. No one had hand-coded in any tricks; it had an algorithm that could learn, and it got better with scale. It took us a while to discover this current paradigm of these large language models, but the fundamental insight and the fundamental algorithms were all there right from the beginning of the company.
David Remnick: GPT suddenly appeared on the scene, and you have talked a lot about its potential, and at the same time, let's put it this way, you freaked a lot of people out. What do you see as its potential and do you understand why people are unnerved about it?
Sam Altman: First of all, even the parts I don't agree with, about what people are freaked out about, I empathize with. To the degree we are successfully able to create a computer that can one day learn and perform new tasks like a human, even if you don't believe in any of the sci-fi stories, you could still be freaked out about the level of change that this is going to bring society and the compressed timeframe in which that's going to happen.
David Remnick: Well, let's slow down for a second. What does this imply in the much broader sense about what change is coming down the road in concrete terms?
Sam Altman: I think it means that we all are going to have much more powerful tools that significantly increase what a person is capable of doing, but also raise the bar on what a person needs to do to be a productive member of society and contribute, because these tools will do-- Eventually, they will augment us so powerfully that they'll change what one person or one small group of people can do.
David Remnick: A lot of writers I know have naturally gotten very interested in ChatGPT, and they somehow think it's going to eliminate them. I have to admit, I've used your latest version of ChatGPT to try to emulate my writing, and without being over-proud about it, it didn't succeed. What came out was an encyclopedia entry with nouns that were subjects I was interested in. Tell me where ChatGPT is in its development now. Should I basically pack it in in a couple of weeks, when ChatGPT is that much better?
Sam Altman: We get excited about where things are, but we also try always to talk about the limitations and where things aren't. Maybe a future version of GPT will replace bad writers, but it's very hard for me, looking at it now. Every time I talk to someone like you, they say, "This is really not it."
David Remnick: You think we're being defensive?
Sam Altman: No, no, no. I think you're right. I think in the sweep of emotion about ChatGPT and this new world, it is so easy to say the writing is on the wall. There's going to be no role for humans. This thing is going to take over. I don't think that's going to be right. I don't think we are facing the total destruction of all human jobs in the very short term. I think it's difficult and important to balance that with the fact that some jobs are going to be totally replaced by this in the very short term.
David Remnick: What jobs do you think will get eliminated pretty quickly in your view?
Sam Altman: I think a lot of customer service jobs, a lot of data entry jobs get eliminated pretty quickly, so this is maybe useful. The thing that you do right now where you go on some website and you're trying to return something and you chat with somebody sitting on the other side of a chat window and they send you a label and blah, blah, blah. That job, I think, gets eliminated. Also, the one where you call and talk to someone. That takes a little longer, but I think that job gets eliminated too. I don't think that most people won't work. I think for a bunch of reasons that would be unfulfilling to a lot of people. Some people won't work for sure. I think there are people in the world who don't want to work and get fulfillment in other ways, and that shouldn't be stigmatized either.
I think many people, let's say, want to create, want to do something that makes them feel useful, want to somehow contribute back to society. There will be new jobs or things that people think of as jobs that we today wouldn't think of as jobs in the same way that maybe what you do or what I do wouldn't have seemed like a job to somebody that was doing an actual hard physical job to survive. As the world gets richer and as we make technological progress, standards change, and what we consider work and necessity and a whole bunch of other things change too. I think that's going to happen again with AI.
David Remnick: I realize that some of this draws on your essay that was published a couple of years ago, "Moore's Law for Everything." You suggest economic policies like a universal basic income, and taxes on land and capital rather than on labor, and all of those things have proven impossibly difficult to pass even in the most modified form. How would they become popular in the future?
Sam Altman: I think this stuff is really difficult, but, A, that doesn't mean we shouldn't try. The way things that are outside of the Overton window eventually happen is more and more people talking about them over time. B, when the ground is shaking, I think, is when you can make radical policy progress. I agree with you. Today, we still can't do this, but if AI stays on the trajectory that it might, perhaps in a few years these don't seem so radical, and if we have massive GDP growth at a time where we have a lot of turmoil in the job market, maybe all this stuff is possible. The more time upfront we have for people to be studying ideas like this and contributing new ones, I think, the better.

I believe we have a real opportunity to shape that. If you take a good that has been super expensive and limited and important, and make it easy to access and extremely cheap, I believe that is mostly an equalizing force in the world. We're seeing that with ChatGPT. One of the things that we tried to design into this, and that I think is an exciting part of this particular technological revolution, is that anyone can use it. Kids can use it, old people can use it, people who don't have familiarity with technology can use it. You can have a very cheap mobile device that doesn't have much power and still get as much benefit out of this as someone with the best computing system in the world. My dream is that we figure out a way to let the governance of these systems, the benefits they generate, and the access to them be equally spread across every person on earth.
David Remnick: This is the New Yorker Radio Hour. I'm talking today with Sam Altman, the CEO of OpenAI, which developed ChatGPT and GPT-4. Sam, talk to me about artificial general intelligence, which seems to be a step even past what we've been talking about.
Sam Altman: I think it's a very blurry line. I think artificial general intelligence means, to people, very powerful artificial intelligence. It's shorthand for that. My personal definition is systems that can really dramatically impact the rate at which humans make scientific progress, or that society makes scientific progress. Other people use a definition like systems that can do half of the current economically valuable human work. Others use a definition like a system that can learn new things on its own.
David Remnick: That latter point is the thing that creates anxiety, isn't it? That it's a system that can operate beyond the bounds of human influence?
Sam Altman: Well, there's two versions of that. There's one that causes a lot of anxiety, even to me, and there's one that doesn't. The one that doesn't, and the one that I think is going to happen is not where an AI is off, writing its own code and changing its architecture and things like that, but that if you ask an AI a question that it doesn't know the answer to, it can go do what a human would do, say, "Hey, I don't know that. I'm going to go read books. I'm going to go call smart people. I'm going to go have some conversations. I'm going to think harder. I'm going to have some new knowledge stored in my neural network." That feels fine to me. Definitely, the one where it's off, writing its own code and changing its architecture, very scary to me.
David Remnick: AI systems have already demonstrated skills that their creators didn't expect or prepare for. Learning languages they weren't programmed to learn, figuring out how to code, for example. The worry is that AI could break free from its human overseers and wreak havoc of one kind or another.
Sam Altman: The fundamental place that I find myself getting tripped up in thinking about this and that I've noticed, and others, too, is this a tool or is this a creature? I think it's so--
David Remnick: That's fair.
Sam Altman: [crosstalk] to project creature-ness onto this, because it has language and because that feels so anthropomorphic. What this system is, is a system that takes in some text, does some complicated statistics on it, and puts out some more text. Amazing emergent behavior can happen from that, as we've seen, that can significantly influence a person's thinking, and we need a lot of constraints on that. I don't believe we're on a path to build a creature here. Now, humans can really misuse the tool in very big ways. I worry a lot about that, much more than I currently worry about the sci-fi-esque stuff of this thing waking up and losing control.
David Remnick: Sam, you've had quite a few conversations lately with lawmakers. You testified in front of a Senate subcommittee, and that was widely reported. Before that, you had a private meeting at the White House. Tell me who was there and what was the conversation about?
Sam Altman: It was a number of people from the administration, led by Vice President Harris, and then the CEOs of four tech and AI companies. The conversation was about, as we head into this technological revolution, what can the companies do to help ensure that it's a good change and help reassure people that we're going to get the things right that we're able to get right and that we need to in the short term? Then what can the government do? What are the kinds of policy ideas that might make sense as this technology develops? One area in particular that I am worried about in the short term is provenance of generated content. We've got an election next year. Already, image generation is incredibly good. Audio generation is getting very good. Video generation will take a little bit longer but will get good too.
I'm confident that we as a society, with enough time, can adapt to that. When Photoshop came out, people were really tricked for a little while, and pretty quickly, people learned to be skeptical of images, and people would say, "Oh, that's photoshopped, or that's doctored, or whatever." I'm confident we can do it again. We also have a different playing field now. There's Twitter and these Telegram groups and wherever else this stuff spreads. There's a lot of regulation that could work. There are technical efforts, like watermarking images or shipping detectors, that could work in addition to just requiring people to disclose generated content. Then there's education of the public about, "You've got to watch out for this."
David Remnick: Ultimately, who do you think were the most powerful people in the room, the people on the government side or the people heading the tech companies?
Sam Altman: That's an interesting question. I think the government certainly is more powerful here, even in the medium term, but the government does take a little bit longer to get things done. I think it's important that the companies independently do the right thing in the very short term.
David Remnick: You understand that, again, years ago, tech, and the cover of Wired-- There was a euphoria attached to technology that the past several years--
Sam Altman: Doesn't feel like it this time around, does it?
David Remnick: No, it doesn't feel that way at all. Not because I relish it, but the public images of places like Facebook and Google are not what they were, and neither, I think, is trust in those companies to get things right. When we hear about a conversation at the White House between the vice president and her colleagues and the heads of tech companies, we intensely want to know what is going on, what the conversation is like, and what it's leading toward. Who's in charge? It would be really good to know the details of that.
Sam Altman: The right answer here very clearly is for the government to be in charge, and not just our government. I think this is one of these places where-- I realize how naive this sounds and how difficult it's going to be. We need international cooperation. The example that I've been using recently is, I think we will need something like the IAEA that we had for nuclear for this, and who's going to--
David Remnick: Which controls atomic weapons, obviously, and atomic energy.
Sam Altman: I think that's so difficult to do. It requires international cooperation between superpowers that don't get along so well right now. That's what I think the right long-term solution is, given how powerful this technology can grow. I'm actually optimistic that it's technically possible to do. I think the way this technology works, the number of GPUs that are required, the small number of people that can make them, and the controls that could be imposed on them to say nothing of the energy requirements for these systems, it is possible to internationally regulate this. I think the government has got to lead the way here. I think we need serious regulation from the government setting the rules. I think it's good for the tech companies to provide input, say where we think the technology is going, what might work technically and what won't, but the government, and really the people of the world have got to decide.
David Remnick: Sam Altman, thank you very much.
Sam Altman: Thank you.
[music]
David Remnick: Sam Altman is the CEO of OpenAI, which created ChatGPT.
[music]
[00:18:55] [END OF AUDIO]
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.