How AI Like ChatGPT is Changing Education
Janae Pierre: You're listening to The Takeaway. I'm Janae Pierre, in for Melissa Harris-Perry.
[music]
What if we lived in a world where students writing essays became obsolete? That's the question facing instructors around the world amid the fast rise of ChatGPT.
Susan D'Agostino: ChatGPT, fundamentally, is a piece of software, but it's a piece of software that's unlike what we've seen before. It interacts with people in a conversational way. It writes poems, sonnets, essays, term papers, even programming code. It's a convincing debate partner in an unlimited number of subjects.
My name is Susan D'Agostino, and I'm the technology and innovation reporter at Inside Higher Ed.
Janae: The writing ability of ChatGPT is uncharted territory for many educators. Recently, the AI software passed law exams at the University of Minnesota, where instructors blindly graded submissions from ChatGPT alongside those of human law students. ChatGPT performed at the level of a C+ law student. This may be just the beginning of a whole new AI-assisted education future. Google is reportedly getting ready to launch a rival platform to ChatGPT.
Should educators be fearful of this new dawn of AI-assisted classwork? Citing concerns around cheating and plagiarism, some school districts, including New York City's public schools, have banned the software, but when Susan sat down with Melissa Harris-Perry, she said that education's relationship with AI-powered technology like ChatGPT is a lot more complicated.
Susan: There's a mix right now of some fear, coupled with uncertainty and certainly also excitement. It can produce plausible college-level essays that earn passing grades and receive feedback on par with essays written by humans. But it's not only about fear at this point; there's also real excitement.
Melissa Harris-Perry: Okay, it can write a passing essay. How do we know it can write a passing essay? Did it take a test?
Susan: [chuckles] Right now, the software is being offered free by OpenAI. People are going online and saying, hey, can you write an essay about this topic, and it actually produces original prose. It's not necessarily plagiarizing from anywhere. It's drawing on the vast amount of information it learned from the internet, putting it together, and producing the requested result, whether that's an essay, a term paper, or a short paragraph.
There have been faculty who have looked at these papers and have had trouble distinguishing between those that the machine wrote and those that their students would write. I've even spoken with faculty who have said, "It writes as well as I do."
Melissa: It is not plagiarizing. That is fascinating. I mean, the number of times I've needed to, at the start of a semester, engage with my students around the question of what constitutes plagiarism, what is fair use, how to cite sources. I guess I'm wondering how we would characterize what it means to ask AI software to help you with your paper.
Susan: You're hitting on the question that faculty are reckoning with right now. There's a really vast gray area between acceptable and unacceptable uses of this evolving tech. Many of us have been using some version of artificial intelligence suggestions when we use spell check and grammar check, or those little suggested replies in our emails, like, "Hey, thanks so much. Got the document."
Most people would say that's fine to use spell check or grammar check. Most people, on the other end, would say it's not okay to ask ChatGPT to produce an entire essay, and then submit it wholesale for credit without acknowledging that contribution. In between, there's some kind of collaboration that potentially can happen now between humans and machines, and faculty are trying to figure out how to guide their students in how to do this.
Melissa: As much as I'd love to believe that the world-- Obviously, you write for Inside Higher Ed. Our worlds revolve around what is happening in college classrooms in so many ways. I guess, even beyond that, there is this broader question to me of, is it just a matter of citing ChatGPT, of indicating that there is another intellectual producer here?
Susan: Yes. Many faculty are asking their students to acknowledge the contribution of ChatGPT in the event that they used it. Some are using it not in assessments, in what we all know as tests or exams, but to spark creativity. Maybe there are some students for whom writing is a struggle and who face a blank page with fear or dread. In those cases, some faculty are saying, "Hey, this can start the writing process for you." With that little nudge, then maybe the student is off to the races.
It's not just about citing that you used ChatGPT, but there also is often the question of, how much did ChatGPT contribute? It's typically citing, but also explaining the extent to which it was used.
Melissa: Such an interesting point. I suffer from enough writer's block to understand the dramatic difference between having a starting point versus facing that blank page. That blank page is brutal. Will it change what we think of as competence, as intellect? Not today or tomorrow, but in 10 years, will the capacity to build on this kind of technology, this kind of AI, be the thing that we are most looking for in a classroom and potentially in professional settings, or will there still be some value for those who can face the blank page and come up with that nugget, that kernel completely by themselves?
Susan: Melissa, you're getting to the heart of this; that this is not just about is a student cheating or is a student not cheating. This is very much about our humanness. Faculty are in a position right now to help steer this uncertain path that we're facing. I've spoken with some who firmly believe that they should be crafting assignments for their students that guide students in surpassing the AI.
In order to do that, some are saying, okay, start with what the AI, ChatGPT or other AI writing tools, gives you, stand on its shoulders, and go farther than it. There's another contingent of faculty who say that's a fool's errand. Don't do that. Don't have your students start with the AI. That lends far too much agency to the software; it imbues it with human attributes that, they argue, it doesn't have.
The problem I'm seeing when I talk with many faculty, and I've been talking with many at colleges across the country and even beyond, is that they're innovating in real-time. I have immense respect for what they're doing right now because there's very little guidance. The analogy I'm thinking about is that when the pandemic began, and there was the transition to emergency remote teaching, faculty had to innovate in real-time then as well. For faculty who had maybe spent decades teaching in-person, in the classroom, all of a sudden, their classes were online, and they had to figure it out with very little guidance.
Now, there's a lot of guidance even for faculty who may be in-person, but want to have some kind of hybrid component or use digital course materials. There's a lot more advice out there. Right now, though, the advice on AI is coming from different sources, and there's no consensus. Faculty are just working with their students, in their classrooms, on the ground, in real time, trying to figure it out. I have enormous respect for them, because it's definitely a moment to pay attention to. ChatGPT is a big deal.
Melissa: It's interesting, as you bring me back to spring of 2020, when we were all just like, "Good luck. That'll work out."
[laughter]
Susan: Yes, exactly. I think in the future-- I spoke with someone recently at the Association for Computing Machinery who was saying that faculty can fully expect, within a year or two, there probably will be more of a consensus about how to proceed, and there probably will be guides that are generally accepted, but right now that's all being worked out. My sense is that faculty who are trying to work with and not against the technology are doing a great service to their students.
It's not that these tools only exist in academic settings. They're going to be omnipresent in future workplaces. They're going to be embedded in search engines. Microsoft has just invested in OpenAI and expects to put the technology into its word-processing software and sell the product to other companies. There's going to be what Paul Fyfe at North Carolina State University calls synthetic disinformation in the wild: AI writing tools creating disinformation and spreading it very rapidly. Helping students refine their awareness of the very subtle cues of artificial prose means those students are going to be better equipped to face those challenges, not only in the workplace, but in society.
Melissa: As a final question, since you come out of a math background, I'm wondering: is this form of linguistic AI simply the calculator, the humanities and social science calculator, that math teachers, K-12 and higher ed, had to deal with when technology really intervened in at least the simple computational aspects of what was being taught in classrooms?
Susan: I think this is bigger than the calculator. Writing can be persuasive. It's a form of communication that is different from what we do in math. At its base, a computation is either correct or incorrect. That's the case in applied math, but even in pure math, where we write theoretical proofs, they are either true or they're not. There's no gray area in a proof. If something is called a proof, it is true.
An op-ed, for example, in a newspaper may have many truths, but it may also have spin, it may have some truths and some falsehoods. There's much more nuance when we think about human communication and language than when we think about the computations that a calculator can do. I think that the analogy works as a starting place, but I think that it's not complete.
Melissa: Susan D'Agostino is technology and innovation reporter for Inside Higher Ed. Susan, thanks so much for taking the time with us today.
Susan: Thank you so much, Melissa, and thanks for the work that you're doing.
[music]
Janae: We have to pause for a moment, but we'll be right back with more on how ChatGPT is influencing life outside the classroom after the break. This is The Takeaway.
[break]
You're listening to The Takeaway. I'm Janae Pierre, sitting in for Melissa Harris-Perry.
Automated Voice: Shall we play a game?
Janae: We've been hearing about how new innovations in artificial intelligence like ChatGPT are transforming learning inside classrooms, but some researchers are focused on how AI can transform learning outside the classroom, with the goal of informing policy changes and fostering more equitable outcomes for students.
Melissa Harris-Perry spoke with Nabeel Gillani last week. He's an Assistant Professor of Design and Data Analysis at Northeastern University, and Director of the Plural Connections Group.
Nabeel Gillani: A lot of the focus in how we use AI in education is on the classroom, looking at things like content mastery and test prep, but we know that a lot of children's educational and life outcomes are shaped by what happens even before they get to the classroom, or by what's happening in their broader schools and communities.
Our group is really interested in exploring some of those applications. How can we use AI to help foster more equitable network connections, in particular between students and mentors, and between parents and other parents? Really, because we know from research that a lot of those cross-cutting network connections can have a dramatic impact on the trajectory and quality of the lives of children and their families.
One of the ways we've been exploring that is by looking at this issue of school segregation. We continue to have very racially and demographically segregated schools across the US, and there are a lot of reasons for that. One of the reasons we've seen is how certain school districts determine student assignment policies, that is, which students are assigned to which schools. One of the ways they do that is in how they draw attendance boundaries, or basically, how they determine which neighborhoods feed to which schools.
That's been an interesting application of AI tools. We can use AI to simulate different kinds of assignment policies, different kinds of attendance boundaries, to see which ones might foster more diverse and integrated schools, while also balancing and taking into account the constraints and preferences that parents might have around travel times or other factors.
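To make the idea concrete, here is a minimal sketch of the kind of simulation Gillani describes, with invented neighborhoods, demographic counts, travel times, and candidate boundary policies; the Plural Connections Group's actual tools are far more sophisticated.

```python
# A toy illustration (not the actual Plural Connections Group tooling) of
# comparing two candidate attendance-boundary policies on demographic
# balance and average travel time. All data below are invented.

neighborhoods = {
    "north": {"group_a": 80, "group_b": 20, "travel": {"school_1": 10, "school_2": 25}},
    "south": {"group_a": 15, "group_b": 85, "travel": {"school_1": 30, "school_2": 10}},
    "east":  {"group_a": 50, "group_b": 50, "travel": {"school_1": 15, "school_2": 15}},
    "west":  {"group_a": 70, "group_b": 30, "travel": {"school_1": 20, "school_2": 20}},
}

def evaluate(policy):
    """Score a policy (neighborhood -> school) on imbalance and travel time."""
    schools = {}
    total_travel = total_students = 0
    for hood, school in policy.items():
        data = neighborhoods[hood]
        counts = schools.setdefault(school, {"group_a": 0, "group_b": 0})
        counts["group_a"] += data["group_a"]
        counts["group_b"] += data["group_b"]
        n = data["group_a"] + data["group_b"]
        total_travel += n * data["travel"][school]
        total_students += n
    # Imbalance: mean absolute gap between each school's group-A share and 50%.
    imbalance = sum(
        abs(c["group_a"] / (c["group_a"] + c["group_b"]) - 0.5) for c in schools.values()
    ) / len(schools)
    return imbalance, total_travel / total_students

status_quo = {"north": "school_1", "west": "school_1", "south": "school_2", "east": "school_2"}
redrawn = {"north": "school_1", "south": "school_1", "east": "school_2", "west": "school_2"}

for name, policy in [("status quo", status_quo), ("redrawn", redrawn)]:
    imbalance, travel = evaluate(policy)
    print(f"{name}: imbalance={imbalance:.2f}, avg travel={travel:.1f} min")
```

Running the sketch shows the trade-off he alludes to: the redrawn boundaries are more demographically balanced but carry a longer average travel time.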
Melissa: How might AI be connected to this notion of more integrated, less segregated schooling?
Nabeel: We are working with a couple of districts to apply some of the tools I mentioned to help support boundary planning efforts, with this objective of fostering more diverse and integrated schools. I think one application is just helping districts simulate different policies. When I say simulate, I mean try out different policies quickly in a sandbox, see what the impact of those policies might be on the demographic makeup of schools and on other outcomes we care about, and use it almost as a policy planning tool before you roll out policies to the general public.
I think another application that's really interesting and critical, of course, is that you can't just make policies top-down. You don't want to just take the boundaries an AI is suggesting, implement those, and call it done. You really want to create pathways for community members, particularly those parents and community members who might not regularly participate in the feedback process, to be able to opine on and shape how the AI is making these decisions, and have the AI really take what they're saying seriously and into account.
Something we're working on with some districts, too, is how to create these collaborative human-plus-AI systems, where the AI does what it's good at, which is looking through a bunch of different configurations and a bunch of data quickly, then surfaces those as proposals to parents, who have a chance to give feedback, and then the AI takes that feedback into consideration as it iterates and creates new policies.
Something else we're doing is starting to use tools like GPT-3, the precursor to ChatGPT, and some of these other language models, to try to help districts make sense of open-ended feedback. Parents submit hundreds of written responses to surveys, and it's just time-consuming to read through all of those. If we can use some of these technologies to help do topic modeling and identify clusters and themes, that might at least support some of that work and make it more time-efficient.
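As a rough illustration of that workflow, here is a minimal sketch of clustering open-ended survey responses into themes. The interview mentions language models like GPT-3; as a simpler, self-contained stand-in, this sketch uses TF-IDF vectors and k-means from scikit-learn, and every response below is invented.

```python
# A minimal sketch of grouping open-ended parent feedback into themes so a
# human reviewer can skim each cluster instead of reading every response.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "The bus ride to the new school would be too long for my kids",
    "Travel time matters most; an hour on the bus is unacceptable",
    "I want my children in a more diverse school community",
    "Integration is important so students learn from different backgrounds",
    "Please keep siblings assigned to the same school",
    "Splitting siblings across schools would be a hardship for our family",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(vectors)

# Print each discovered theme with its member responses.
for cluster in sorted(set(labels)):
    print(f"\nTheme {cluster}:")
    for text, label in zip(responses, labels):
        if label == cluster:
            print(f"  - {text}")
```

In practice, a language model can go further and label each cluster with a readable summary, but the human-in-the-loop caveat that follows still applies.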
I think that's one of the goals, but then, of course, you also want to make sure humans continue to be in the loop. Because at the end of the day, these are social problems, these are human problems. We want to make sure that the technology is really supporting that work, but frankly, it's not good enough, and I don't think it will ever be good enough, to replace some of these very political, very social, and very human tasks.
Melissa: So much of what educators experience as the point of what we're doing in classrooms is developing frameworks and scaffolding for learning, not just filling in content: do you now know this? I'm wondering about the ways that AI may or may not move us more toward that structure-building or more toward fluency over content.
Nabeel: That's a great question. I think if our model for learning is really acquiring specific skills, then I think we'll probably continue to design AIs accordingly. Every AI has what's called an objective function. The objective function is really just the goal that it's given; the AI needs to optimize for something.
Oftentimes, when we design AI in education, the objective function is: help the student get a higher score on the test, or help them demonstrate that they know how to do X type of math problem better than they did before. To the extent that we see that as learning, I think we'll continue to design objective functions in AIs that try to optimize for that. If we see learning as a social process, as something that requires scaffolding, whether from a teacher or a mentor or a collection of advisors, whoever that is, then I think that can start to shift the paradigm in how we actually design these systems to support education.
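As a toy sketch of that point: the same optimizer recommends different interventions depending on what the objective function rewards. The interventions, the predicted effects, and the weights below are entirely invented for illustration.

```python
# A toy illustration of how the choice of objective function shapes what an
# educational AI optimizes for. All interventions and numbers are invented.

# Each candidate intervention: predicted test-score gain and predicted gain
# in mentor/peer connections formed.
interventions = {
    "drill_practice_app":  {"score_gain": 8.0, "connection_gain": 0.0},
    "peer_tutoring_match": {"score_gain": 4.0, "connection_gain": 5.0},
    "mentor_introduction": {"score_gain": 1.0, "connection_gain": 8.0},
}

def mastery_objective(effect):
    """Content-mastery framing: optimize test scores only."""
    return effect["score_gain"]

def social_objective(effect):
    """Social-learning framing: value scaffolding and connections too."""
    return 0.4 * effect["score_gain"] + 0.6 * effect["connection_gain"]

for name, objective in [("mastery", mastery_objective), ("social", social_objective)]:
    best = max(interventions, key=lambda k: objective(interventions[k]))
    print(f"Under the {name} objective, the AI recommends: {best}")
```

Under the mastery objective, the optimizer picks the drill app; under the social objective, it picks the mentor introduction. That is the paradigm shift Gillani describes.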
I do think that's part of the reason we see so much of the AI focused on content mastery. Which, by the way, there's no replacement for knowing things. I think that's really critical; it's the foundation, having those skills. But I think it is a limited view of what learning is and what the purpose of education is. There are tremendously talented people working on AI in education, and I would love to see some of that talent and brain power go toward thinking about some of these more social, scaffolding-type issues.
Melissa: Do you have a dreamscape, a vision of what AI could do?
Nabeel: Maybe it's a little cliché, but I think it's really to unlock human creativity, to unlock human potential, and, again, from my viewpoint, really to bring people together. The reason we're so interested in this issue of segregation is because it's not only harmful in education, it's harmful to society. We can't build a compassionate country, a compassionate world, if we're, from our earliest stages, not connected to, not learning from, not exposed to one another.
It doesn't seem like a problem that AI alone is suited to solve, and I certainly don't think AI is going to solve it on its own, but I think to the extent we can deploy these tools to bring us closer together, they can really help us learn to relish the tension that comes from engaging with people and ideas that are different from us. I know these are all aspirational, but I do think there are ways that these emerging technologies can do that. I guess my hope is to see more explorations on that horizon.
Melissa: Nabeel Gillani is an Assistant Professor of Design and Data Analysis at Northeastern University, and Director of the Plural Connections Group. Nabeel, thanks so much for being here today.
Nabeel: Thank you so much for having me.
[music]
Copyright © 2023 New York Public Radio. All rights reserved. Visit our website terms of use at www.wnyc.org for further information.
New York Public Radio transcripts are created on a rush deadline, often by contractors. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of New York Public Radio’s programming is the audio record.