What ChatGPT Says About Our Future with AI
[Music]
Alison Stewart: We're going to chat about ChatGPT, the artificial intelligence tool that some claim will change the way we do everything. ChatGPT can create content with a few words, ideas, and directions typed into one of those skinny search boxes. For example, the producer of this segment, just for research purposes, of course, asked ChatGPT itself to set up this conversation. Zack, the flesh-and-blood human, typed this into the software, "Write a radio talk show introduction around the topic of ChatGPT and artificial intelligence with guest New York Times technology columnist, Kevin Roose."
Here's what our newest producer, ChatGPT, spit back, "Welcome to the show. I'm your host, and today, we're talking about the latest developments in artificial intelligence and language models. Joining us today is Kevin Roose, technology columnist for the New York Times, and an expert on the topic of AI and its impact on society. Kevin has recently written several articles on the use of ChatGPT, a language model developed by OpenAI, in education and on social media. We'll be discussing the potential benefits and challenges of using ChatGPT, as well as the broader implications of AI on our world.
Stay tuned for an interesting and informative conversation with Kevin Roose." ChatGPT gets the basics, but in my humble opinion, lacks a little nuance and context. That's where producers come in. A producer and I would tell you more about the interview to set it up. For example, identifying that one of the major concerns with this technology is that people will use it to cheat in school and claim the software's work as their own. We'd also set up the fact that there are intellectual property concerns, given that AI learns to do what it does from actual human speech and writing, with no credit given to the creators of the original texts it learns from.
We'd introduce our guest, Kevin Roose from the New York Times, with a little bit more fanfare, frankly. He is a tech columnist whose column is called The Shift. He's got a book called Futureproof, and he hosts a podcast called Hard Fork, which recently aired an episode called Can ChatGPT Make This Podcast? We'll talk to him about that, about his writings, and about artificial intelligence in the classroom and how it's being used in deeply weird ways in our culture. Welcome, Kevin.
Kevin Roose: Thanks for having me.
Alison Stewart: Listeners, we want--
Kevin Roose: Thank you for both of those introductions, by the way. Both the ChatGPT one and the human one. I appreciate them both.
Alison Stewart: Listeners, we want to bring you in on the conversation. What are your questions or concerns about these AI tools? Give us a call, 212-433-WNYC, 212-433-9692. If you've ever tried ChatGPT yourself and you want to report to us on that experience, we'd love to hear. Maybe it's been helpful to you in your work or maybe it's just been fun to see what it spits out in response to your prompts. Tell us how you feel about this technology after having tried it. 212-433-9692, 212-433-WNYC. Social media's available as well @allofitwnyc. Kevin, for the uninitiated, what is ChatGPT?
Kevin Roose: ChatGPT is what's called a large language model. It was developed by OpenAI, which is a company here in San Francisco where I live. It is basically a chatbot that takes simple text prompts and responds in very realistic ways. It's capable of doing a wide variety of things. You can tell it to write you a poem or even complete a piece of computer code. It can do that instantaneously. It's free and available to use. It's really different from the chatbots that we've seen before.
People have probably interacted with things like customer service chatbots when you're talking with an airline or something. Those chatbots are usually very limited. They can't answer many different types of questions, and frankly, they're usually not very good. ChatGPT by contrast is very versatile. It can do lots of different kinds of things and it can do them with a shocking degree of competence a lot of the time. Some of the first people I saw talking about ChatGPT were professors who were giving ChatGPT their tests and exams and reporting back that they were not only getting passing grades but in some cases, getting A's.
Alison Stewart: Although we're going to talk mainly about ChatGPT, there are a lot of other ways to use AI. There's DALL·E 2. On Instagram, somebody posted a list of 10 AI writing tools that you could use as well to help you along the way. What is it about ChatGPT that seems to be attracting all this hype?
Kevin Roose: It's a great question, and it's especially confusing because the technology that powers ChatGPT, which is something called GPT-3.5, is not actually new. It's been around in various forms for several years now. I think what changed things just a few weeks ago is that OpenAI built a very simple, easy-to-use chat interface, and you didn't need to know how to code or build your own program to access it. You could just go in, sign up for an account and start typing, and it would start typing back. I think for a lot of people, the ease and the simplicity of the interface was what drew them in.
Then people just started experimenting and giving it all kinds of different prompts and seeing the results, some of which were bizarre and off-putting, but some were quite good.
Alison Stewart: What might we see this used for if we really think about it as just a tool?
Kevin Roose: The possibilities are really endless. Some of what I've seen people using it for, students using it as a personalized tutor saying to ChatGPT, "Explain the Doppler effect to me at a high school reading level." I've seen actually doctors who are using it to write letters to insurance companies detailing why patients should get their care. I've seen companies using it for copywriting and marketing. I've even seen people creating games with it. Of course, then there's this whole capability that it has of writing code. If you want to build an app to do something, you don't necessarily even need to learn how to code anymore. You can just ask ChatGPT to build it for you. In some cases, it'll actually work.
Alison Stewart: Let's take a few calls. Mitch is calling from Paramus. Hi Mitch, thanks for calling All Of It.
Mitch: Hi, thanks for having me, and this is a great conversation. I'm using it for two things. Well, I was tempted to use it for one thing, and I'm definitely using it for another. I'm a freelance crypto journalist, and when I first messed around with ChatGPT, I was very tempted to use it to try to write an article that I'm working on about Twitter and the crypto influencers on Twitter. It actually didn't write a half-bad, very basic 1,500-word article about that. I just thought I would get fired if I actually used it. I don't know if that's okay, but what I am using it for is I'm writing a novel with it.
I have tried to write a novel using Scrivener, which is a typical writer's software, and it's pretty good. What ChatGPT does is it doesn't forget. Whatever we've talked about seven days ago that I don't remember as I'm planning and brainstorming the novel, ChatGPT remembers everything. All the ideas, and it keeps helping me collate all these ideas, all these memories, all these thoughts. I'm actually having a conversation with this thing and we're writing a book together. I feel like if we sell it and we get rich, I'm going to owe ChatGPT a royalty, because we're doing it together. It is working. It's remarkable. That's about all I can say about it.
Alison Stewart: Mitch, thank you for calling in. Mitch's language was really interesting, Kevin. He was anthropomorphizing ChatGPT, saying, "I'm going to have to share royalties with it," him, it, they. [chuckles] I'm curious about the way humans interact with something that seems humanish.
Kevin Roose: That's been really fascinating to watch. I've been using ChatGPT for weeks now for all kinds of things. Mostly just testing it out, but also I've started using it to help me think through things in my personal life, conversations I need to have with people. Maybe arguments I need to have. Using it as practice, saying "You simulate the role of my friend who I need to say this thing to, and I'm going to type to you and I want you to type me back." Using it as a sparring partner or a confidant. It's really quite good at that if you can learn how to use it that way.
It's not perfect. We should spend some time talking about the drawbacks. It is interesting how easily humans start trusting these systems when they start giving us good answers.
Alison Stewart: Let's talk to Emily from Westchester. Hi Emily. Thanks for calling All Of It. You're on the air with Kevin Roose, technology columnist for the New York Times.
Emily: Hi, my question is, as a parent with a child going into high school, how concerned do I need to be? How accessible is this technology for the average high school freshman? Is it something that parents have to start putting parent controls on so that their child doesn't start accessing it?
Alison Stewart: You bring up the elephant in the ChatGPT room: kids using it. You've thought about this. Your column was called "Don't Ban ChatGPT in Schools. Teach With It," and the subhead read, "OpenAI's new chatbot is raising fears of cheating on homework, but its potential as an educational tool outweighs its risks." Kevin, what are the real-world fears that you've heard from teachers and from parents about ChatGPT?
Kevin Roose: The big one is obviously cheating. There's no reason that students at high school or any level of education really can't use it that way. If the essay assignment in class is to write a five-paragraph essay about symbolism in Huckleberry Finn or something like that, ChatGPT can do that kind of thing really well, and it's a lot easier than writing it yourself. Students are already using it to do that. I've spoken to some who have said they've used it to do all their homework for them. There are two ways that we could respond as parents and teachers and principals.
You could ban it, and say, "No one's allowed to use this," which I think is probably unlikely to work, just because kids are smart and they can find ways around parental controls. This type of technology is not going away. There are going to be many more chatbots, like this. The second way that we can react to it is by inviting it into the educational process, treating it like calculators, for example, as just something that we allow students to use that we teach them how to use and that we don't always allow them to use. There will still be in-class assignments and in-class exams to evaluate how much students are actually learning.
I think that second way is much more reasonable and actually much more helpful to students because these things are not going away, they're going to be part of our society for a long time. Students are going to need to know how they work and what their drawbacks and limitations are, but also how they can use them to become more efficient and to work alongside them in the workplace someday. I think we should be not trying to ban this technology, but trying to figure out how to use it safely and effectively.
Alison Stewart: Can I follow up for a question that Emily put forth? Can parents do anything about it at this point?
Kevin Roose: Sure. There are parental controls that one could put on ChatGPT. You could block it from your kids' devices or things like that. As a former kid whose parents tried to do things like versions of that in my day, I can tell you that nothing is going to make your kid want to use ChatGPT more than parents and teachers banning them from using it.
Alison Stewart: Let's talk to Steven from Highland Park, who is a teacher. Hi, Steven, thanks for calling in.
Steven: Hi. I'm a math instructor and whenever I teach my students about word problems, I always make this point about, I can have a computer solve equations but looking at a word problem and reading it and translating it into equations, that's a very human activity. I can't really tell a computer how to do that. It's hard to teach. Am I wrong now? Could I actually scan a word problem into a computer and they can look at it and solve it now?
Alison Stewart: What do you think, Kevin?
Kevin Roose: You should try it. Try it. The interesting thing about ChatGPT is normally we think of AI systems as being very good at computational logic because they're computers after all but these AI systems, these large language models are actually weak at computational and logic things. Sometimes, if you ask it to do say a simple math problem, it'll get that math problem wrong because of the way that the systems are built. All they are doing is basically predicting the next word in a sequence. They are not actually doing any computation.
I've heard math teachers saying that they actually feel less threatened by this than some of their colleagues in say English because it's better at writing essays than solving math problems for now. I should say that could change with the next version of this. I wouldn't count on those word problems being ChatGPT proof forever, but I would just try it out and see how good it is at responding to them. Sometimes it makes mistakes and maybe that's an indication that you can continue giving these kinds of problems for now.
Alison Stewart: There's the case that everybody keeps citing, where ChatGPT was asked whether the number 47 is larger than the number 64 and it said yes. Then it was asked to count up from 64 to 47, and it started counting, and just kept counting and counting and counting.
[laughter]
I don't know if you can explain what went wrong there or what does that tell us about ChatGPT that that happened?
Kevin Roose: As I said, ChatGPT, when you ask it to do a computation, what it is not doing is what a calculator does. It's not going in and saying, "Multiply this number by this number." Instead, it's basically taking all of these examples of texts that have been used to train it, which basically imagine billions of web pages getting thrown into a blender and then producing this model that you can then use to predict the next word in the sequence. It's like a super-powered version of the autocomplete feature on your iPhone, for example.
Because of that quality, when it's confronted with something like a math problem, it tries to complete that math problem in the same way that it would complete a sentence in an essay about Huckleberry Finn. It's not doing the computation that you might expect from an AI model. It's doing something very different and that produces sometimes some very bizarre answers.
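[To make the "predicting the next word in a sequence" idea concrete, here is a toy sketch, nothing like OpenAI's actual model, which uses billions of parameters rather than simple word-pair counts. It counts which word follows which in a tiny made-up corpus, then "completes" a prompt by always picking the most frequent next word. It can continue a sentence plausibly, but, like Kevin describes, nothing in it is doing arithmetic.]

```python
# Toy next-word predictor: a bigram model over a tiny corpus.
# Illustrative only; real large language models work at vastly larger scale.
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each word to a Counter of the words seen immediately after it."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def complete(follows: dict, prompt: str, n_words: int = 3) -> str:
    """Greedily extend the prompt, one most-likely next word at a time."""
    out = prompt.lower().split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # never saw this word followed by anything
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat . the cat ate . the dog sat on the rug ."
model = train_bigrams(corpus)
print(complete(model, "the cat", n_words=3))  # → "the cat sat on the"
```

Ask this model to count from 64 to 47 and it can only echo whatever word sequences it has seen, which is, in miniature, why a pure next-word predictor gets math wrong.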
Alison Stewart: My guest is Kevin Roose, technology columnist for the New York Times. He's the author of Futureproof and hosts the podcast Hard Fork. We are talking about ChatGPT. Listeners, if you have questions or concerns about these AI tools, we'd love to hear them. 212-433 WNYC 212-433-9692. If you've ever used ChatGPT yourself, we'd love to hear about your experience. 212-433 WNYC. You can also reach out to us on social media @allofitwnyc. We're going to get a couple of calls lined up and then we'll talk a little bit more about ChatGPT with Kevin Roose after a quick break. Stay with us.
[music]
You're listening to All Of It. I'm Alison Stewart, with me is Kevin Roose, technology columnist for the New York Times. We are talking about ChatGPT, as well as some other AI. You're part of the conversation as well. Let's take a few calls. Sunny is calling from New Mexico. Hi, Sunny.
Sunny: Good morning. Thank you for taking my call. My concern is, when you go to a grocery store and you read the ingredient label, they'll state if there are bio-engineered food products in this food, and so I think it should say, "This written document has been artificial intelligence engineered." I think that is most pertinent, especially in developing advertisements to target certain consumers.
Alison Stewart: Sunny, thanks for calling in. We've had a couple of calls asking whether it can be digitally watermarked in some way. Is there some way that we can identify that something has been created by AI?
Kevin Roose: Yes, there are some companies that are working on tools to do things like watermarking AI-generated text, for example, so that teachers or whoever could see this text was produced by an algorithm. They're not going to be perfect. I just want to set people's expectations. There are definitely going to be ways around that. I do think it's a good idea just in general to disclose when something that you're-- to not pass AI-generated work off as human work. I think that's just a good business practice and an ethical practice. I don't expect that everyone's going to follow that just like people who make art or do graphic design don't always put the name of the app that they use.
It doesn't always say, "Created with Photoshop," or whatever. I don't expect that people will be saying all the time that this was created by ChatGPT or any of these other apps.
Alison Stewart: Stefan is calling in from Linden, New Jersey. Hi, Stefan. Thanks for calling All Of It.
Stefan: Hi. How are you?
Alison Stewart: Doing well.
Stefan: I do have a question because one of the callers mentioned that ChatGPT remembers everything. I'm not a teacher but let's say I'm a teacher and I want to check if the essay is from ChatGPT or from my student. Can I input that essay to Chat and ask, "Did you write it?"
Alison Stewart: Very good question. I like that one.
Kevin Roose: Good question. I don't think you can, no. The reason is because while it does remember everything that's been typed to it, it remembers that for specific users. For example, if I am logged in and I have a conversation with it asking for, I don't know, gardening advice, I could come back a week later and have that same conversation about gardening advice but obviously, for privacy reasons, you wouldn't want everyone else to be able to log in and say, "What did Kevin talk with you about last week about gardening?" And have it spit out a transcript of our conversation.
No, right now, ChatGPT does not allow you to ask whether something that it's written was written by ChatGPT or not.
Alison Stewart: Is there something that can detect it? Do educators have at their disposal any tools to detect if an essay has been written by ChatGPT?
Kevin Roose: There are a couple of tools that have been created in the last few weeks that some teachers I've talked to are very hopeful about. There's something called GPTZero, which was created by a Princeton student, which claims to be able to spot AI-generated text. It's not perfect, and it's relatively easy to fool. You can just take something that was written by ChatGPT, put it back into ChatGPT or another AI engine, and say, "Paraphrase this using slightly different words," and then all of a sudden, it doesn't trigger the alarm.
It's not a perfect system and I think it's going to be very hard to detect this AI-generated content in the future. Which is why I think our strategy as a society should not be to try to pin our hopes on being able to detect this stuff, but instead should be trying to integrate it safely and ethically into our lives and our work.
Alison Stewart: This is from Twitter. I think it's an interesting comment. If it's free, then you are the product. Who is benefiting from the data harvested from ChatGPT?
Kevin Roose: Great question. It is free for the user. Right now, what we know and what we've been hearing is that it's actually costing OpenAI quite a bit of money to operate this service. There have been some estimates that every time you type something into ChatGPT and it responds, it costs them between one and three cents just in the processing power needed to run the computers on the back end. They're losing quite a bit of money on this right now, and it appears that money is going to come in part from Microsoft, which has invested a billion dollars in OpenAI already and is reportedly looking to invest a lot more than that.
Right now, this is what they're calling a research demonstration that is probably not making any money and doesn't have ads on it or anything like that. In the future, there are all kinds of ways this could be monetized. You could charge people for access, you could sell subscriptions. You could also have ads on it, if you type into ChatGPT, "What's the best restaurant in my town?" Some restaurants could pay to be the thing that it says back in response. That's just a hypothetical, that's not actually happening that I know of right now but these will be monetized in the future. I imagine that some will be ad-supported, but others may be subscriptions.
Alison Stewart: Let's take Mike online because I think Mike is going to get to something that a lot of people are thinking about. Hi, Mike, thanks for calling in.
Mike: Hi, how are you doing? This may not be a good connection between the two of us. Microsoft made a statement that they're laying off a lot of people, but they're not laying off computer coders, they're laying off people in marketing, and so on and so forth. Is there an implication that they don't think that this particular application that you're describing is ever going to produce computer code? If that's not a good question, could you think of an answer to this just because it could provide me with something interesting?
Alison Stewart: [chuckles] I think there's two things in that question and I like the way you ended that one, Mike, thanks a lot. The two interesting things in that question, which he was getting at is replacing people's jobs and computer code, I'll let you take the computer code, and then I'll follow up with the replacement of human beings as a concern.
Kevin Roose: The computer code issue is real. Right now computer programmers that I've talked to aren't worried about being replaced because this is just making them faster. It's not a failsafe replacement for programmers, you still need programmers to figure out what to ask the chatbot and to check the output and make sure it's right and make sure it all works. You still need human programmers for now, at a company like Microsoft, but I do think programmers are not immune. This idea that programmers will be the last people to be replaced by AI, I don't think that's true and I think we're seeing how that could actually happen.
Alison Stewart: Yesterday, I was on ChatGPT, and I saw on Twitter that it was at capacity at one point. That freaked out a lot of people. Why do you think it's grown so quickly?
Kevin Roose: It's at capacity a lot of the time I try. They're really having trouble keeping up with demand, because it is so incredible and people are just amazed, and people are relying on it for their jobs. This thing has only been around since late November, and already millions of people have signed up to use it. I've been a tech columnist for almost 15 years and I've never seen a consumer product take off like this.
Alison Stewart: Should Google be looking over its shoulder?
Kevin Roose: They are. Google has reportedly declared an internal emergency over this, called a "code red." They are worried that ChatGPT, because of how good some of its responses are, could actually eat into their search business, that people, instead of going to Google to look something up, might just ask ChatGPT. Google has plenty of its own AI tools and researchers, and right now they're scrambling to catch up.
Alison Stewart: Someone wants to know if ChatGPT is only in English.
Kevin Roose: I don't believe so. I believe it can do multiple languages. I haven't tried it in any other languages, but I assume others have.
Alison Stewart: I understand that you've prepared to do a little reading of a ChatGPT response that went viral, Kevin. Can you tell us what the prompt was and then read us the product?
Kevin Roose: Sure. This wasn't my prompt, but I saw it on Twitter and put it in an article, and people seemed to really like it. The prompt that someone gave ChatGPT was, "Write a biblical verse in the style of the King James Bible explaining how to remove a peanut butter sandwich from a VCR." ChatGPT thought for a second or two and then wrote the following back. I won't read the whole thing because it's very long, I'll just read the first couple of sentences.
It came to pass that a man was troubled by a peanut butter sandwich, for it had been placed within his VCR and he knew not how to remove it. He cried out to the Lord saying, "Oh, Lord, how can I remove this sandwich from my VCR for it is stuck fast and will not budge?" The Lord spoke unto him saying, "Fear not my child, for I shall guide thy hand and show thee the way." Then it proceeds to give instructions. It's pretty incredible.
Alison Stewart: Anything that you think our listeners should know about ChatGPT as they start to think about it, maybe you want to use it? What's one thing you'd like to leave them with?
Kevin Roose: I would say just go in and try it out. This may not stay free forever and as you said, it's at capacity a lot of the time but I really think that the best way to learn about these systems is not by listening to people like me talk about them or read newspaper articles about them. It's actually just to go in and start using them yourself. I would encourage people to check it out and to keep tabs on it because this is going to be a story that evolves and it changes over time.
Alison Stewart: Oh, I think it's really interesting to hear you talk about it myself, and I enjoy your writing quite a bit. Kevin Roose, technology columnist for the New York Times. Check out his podcast, Hard Fork. Thanks a lot for helping us out, Kevin.
Kevin Roose: Thanks for having me.
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.