Is artificial intelligence just another tool in the pursuit of education, like a calculator? Or will the ease of an immediate answer spit out by ChatGPT leave the future workforce of America unable to solve problems without technology?
On this episode of “Deseret Voices,” New York Magazine’s James Walsh shares his thoughts on AI’s impact on academia.
At a time when AI can make a diploma seem worthless, Walsh ponders how teachers can navigate the new issues they face and how students can find meaning in their studies.
Subscribe to “Deseret Voices” on YouTube, Apple Podcasts or Spotify.
Note: Transcript edited by Steven Watkins
Jane Clayson Johnson: AI is rewriting the rules of education in real time.
This week on Deseret Voices, I’m joined by James Walsh, senior writer at New York Magazine.
His article, “Everyone Is Cheating Their Way Through College,” has sparked a nationwide conversation about the growing role of generative AI in academics.
Students are turning to AI to write essays, complete assignments, and even bypass the reading altogether.
Professors are scrambling to keep up—trying to detect AI-generated work and maintain fairness in the classroom.
So what does education look like when AI can do the thinking for us? And where, exactly, is the line between a helpful tool … and an academic shortcut?
James, it’s so great to have you. Thanks for being here.
James Walsh: Oh, thanks so much for having me.
JCJ: You write that ChatGPT has unraveled the entire academic project. Tell us what you mean by that.
JW: ChatGPT has made cheating so easy. It’s so easy to take an assignment, put it into ChatGPT or another LLM, then submit that assignment and pass it off as your own. Or short of that, you know, use an LLM to come up with ideas or outline an essay. And so what that has done, if you really zoom out, is force professors and administrators to ask the question, why are we assigning this work? If it’s so easily hackable, what are we asking these students to get out of it? Do we assign essays simply to teach them to write? What is the value of that? And when you ask that question, you very quickly can ask yourself, why are we here? You know, why are students here? And why am I here in front of this classroom, teaching them?
JCJ: And so from your reporting, give us a sense of the scale of this. Is this state schools? Is this the Ivies? Is it community college? Is it all of it? Everyone’s grappling with this?
JW: Oh, everyone’s grappling with it. And it’s not just higher education. It’s also high school; even some, you know, middle school teachers talked to me about it. But my reporting really focused on college.
JCJ: And so how are college students using AI today?
JW: In the classroom, they’re using it to fully complete assignments in some cases, fully write essays for them. But there are also, of course, very productive use cases. You know, having AI analyze data in the lab, or you know, just bounce ideas off it.
JCJ: But what your piece focuses on, and what the real concern is, is students who are using AI to, for example, write an entire essay in an hour, right? I mean, it’s this sort of “cheating” element that is so concerning.
JW: Right. It is the greatest cheating tool that college campuses have ever had to face, because it’s so accessible to everyone. And it’s just very good at doing a lot of the things we ask college students to do. So, students will use it to produce essays, outline essays, come up with, you know, the dreaded topic sentence, or thesis, of essays. And that’s where it gets kind of tricky, because many students have convinced themselves that using it for, you know, say, coming up with a topic sentence is not necessarily cheating, because they can then build an essay off that.
JCJ: So give us a sense of what the students said to you, because I thought from your piece they were remarkably candid about their use of AI. How did they view it?
JW: There are certainly students who understand it as a cheating tool and won’t use it, or who use it knowing that they’re cheating. But many students will rationalize it by saying, this is a tool that I will have for the rest of my life, so I might as well be using it now. And, you know, it will serve me later on, in fact, because my fluency in AI is being built right now.
JCJ: Give us an example of a student that you interviewed.
JW: You know, one student, Wendy, had an essay that was due. She woke up the morning it was due, sat in front of her computer and basically engineered the essay using ChatGPT. She said, can you please create an essay that answers this very specific question (which she copied and pasted from her professor into ChatGPT)? Can you please create a thesis for each paragraph, and then some bullet points? She was physically writing the essay herself, you know, word for word, but the thinking, you know, she was sort of outsourcing.
She surprised me because she really understood that there was something lost in doing this. She said, you know, “I miss writing the way I did in high school.” But because all the students around her were using it, and this amazing tool is available now, she felt as if, you know, it was necessary for her to use it.
One student told me, you know, I asked him whether he really understood the value of a handwritten, physical note. And he said, “Yes, of course I do. And that’s why I do send them occasionally,” you know, and then he added, “after I draft the letter using AI.” And so there was this kind of fundamental difference, where he understood the value of the physical note. But I, of course, was talking about sitting down and writing a note, you know, not using AI.
JCJ: So professors and administrators, I mean they’re calling this a crisis, right? Tell us how they’re reacting to all this.
JW: Many professors I spoke to are really overwhelmed. You know, the introduction of this tool so quickly has turned them into kind of, you know, cops, in a sense. Not only do they have to teach the material, come up with these assignments and grade them, now they have to be on the lookout for essays that are written by AI. And they are overwhelmed because so many of them seem to be. And there’s not much they can do because, you know, it’s very hard, one, to prove beyond a shadow of a doubt that something was written by AI. So leveling that accusation is quite serious. And so you have these professors just wondering, what am I doing here? You know, why are we even pretending these students are fulfilling their obligations as students?
JCJ: Right. And you write that professors may think they’re good at detecting AI work, but actually, studies have shown they are not. One study used fake student profiles to slip AI-generated work into the system. And professors failed to flag it 97% of the time.
JW: Exactly. And AI is only getting better. So far, it’s only gotten better at sounding more human.
JCJ: In the piece, you talk about how you actually fed a student essay that you knew was AI-generated into an AI detector. It came back with a score of 11.74, which was low, even though you knew this was an AI-generated piece. Then you put text from the book of Genesis into the same AI detector, and that came back with a score of 98%, which means the program thought it was AI-generated. So, I mean, there are real accuracy issues here.
JW: These detectors are not all that reliable, and I keep on coming back to the idea that, OK, if I sit down and write this essay based on ideas generated by AI, what did I get out of that essay? And is that cheating? You know, part of an essay is reading the text and sitting and thinking about what you’ve read and then coming up with ideas that feel original to you because they’re coming out of your own brain. And one way that students are using AI, you know, you don’t have to actually even read the text because it can summarize text for you. So I may be writing my own original essay, but that doesn’t mean I did the actual reading.
And there are all these sorts of tricks that students can use to hide the hand of AI in their essays.
JCJ: Yeah.
JW: There’s one where, you know, if a student has AI produce an essay in English, they can put it into a translation device and then to another language and then go back to English, and that will somehow protect it from being flagged as AI-generated.
There are, you know, now customizable AIs that will, you know, really work hard to put it in your voice if you upload other essays. So we’re not going to police our way out of this problem.
That being said, I think there is, sort of, an acceptance among students, a resignation, that all students are doing this.
JCJ: Oh there is, an acceptance or resignation to it?
JW: Many, many students, certainly. After the story came out, I was surprised by how many students e-mailed me and got in touch to say thank you for writing about this because I am so frustrated with my classmates at school.
There is pressure, right? I understand that as a college student, there was, you know, pressure to produce and you’re going to be graded against your peers. So you want to have that edge.
JCJ: Speaking of grades, you talked in the piece about how it kind of skews the grading system, because a student who’s getting a good grade on an AI-generated paper, I mean, they’re matched up against someone who may not have used AI but whose paper maybe wasn’t as good. So, I mean, it just throws the whole thing into chaos.
JW: Exactly. For one teaching assistant I spoke to, that was part of the frustration and, in fact, why he actually decided to leave teaching altogether. Because he would be looking at two essays, and one was pretty good, was a decent essay, but he could tell that it was AI-generated.
The other one, I think, he described as, you know, basically difficult to read. It was so obvious that the student hadn’t used AI, and he just had no idea how to treat those two papers side by side.
JCJ: Yeah. It’s a real conflict for these professors. Yeah.
JW: Sure.
JCJ: And you write about professors even trying to AI-proof their assignments. Tell us how they’re doing that in some cases.
JW: AI-proofing an assignment really means, fundamentally, asking, you know, what the value of this assignment is, like we talked about, and why we’re assigning it. Some professors have switched to oral exams, with some success.
Of course, they understand that something is lost and that the student who does well writing a paper maybe isn’t the student who does well sitting in front of them.
And an oral exam takes quite a bit of a professor’s time, especially for larger classes, but it’s a chance to connect and, you know, make sure students understand the material.
JCJ: So, oral exams. Some are going back to Blue Book essay writing. You know, doing something the old-fashioned way, what we used to do. Yeah.
JW: Yeah. Of course. Blue Book sales are apparently on the rise again. I’m glad for those Blue Book makers.
JCJ: You write about professors sticking strange phrases in small text in between paragraphs of an essay prompt, phrases that, theoretically, would cause ChatGPT to insert a non sequitur into an essay. You know, you write about one professor putting the word “broccoli” into an assignment, or mentioning Finland, or mentioning Dua Lipa, right? Just to kind of throw off the calculus.
JW: Right. That was kind of one of my favorites. And it’s something that a number of professors are doing, and it’s really, you know, designed to catch the laziest of lazy students, because should a student turn in an assignment that then mentions broccoli or Dua Lipa, it basically means that not only did this student not write their essay, but also they didn’t read the essay that they turned in. So, that might be a viable, you know, way to really catch the worst offenders. But it doesn’t necessarily address the core issue.
JCJ: So I’m just really curious about this idea that these students, many of them that you spoke with, did not consider this cheating. I mean, the honesty and this shameless self-awareness of that is really interesting to me and kind of at the heart of this story.
JW: We’re told, and people are telling students and everybody, that AI is a tool. You know, I heard a lot of comparisons to calculators, for example.
It’s a tool that you’re going to be using for the rest of your life.
It’s something that you need to understand as you enter the workforce because AI is going to change how we work. So, I think the justification that I heard a lot from students was, why shouldn’t we be using this? You know, that is kind of their attitude. And so then it becomes the role of mentors and teachers and parents to explain that there is a value. The reason you’re paying for this education is not just to prepare you for the workforce. It’s also to broaden your horizons and, you know, beef up the way you think and the way you think critically about the world around you.
JCJ: Talk about the intellectual decay you speak about in the piece. Some early research that you cite shows memory problems, problem-solving issues, and creativity suffering when students offload these cognitive chores to chatbots, never mind critical thinking skills.
JW: On the question of how AI shapes our brains or changes the way we think, we don’t really have a large corpus of research to depend on right now. That being said, it’s pretty simple to understand that the less thinking you do, the less, you know, your brain works.
That just sort of seems logical. So, this idea of cognitive offloading is something that is really freaking a lot of people out. You know, the idea that we have a generation of students, and literacy is already kind of plummeting.
JCJ: Yeah.
JW: And so, what we lose from that is critical thinking, the ability to sort fact from fiction, just the ability to interpret the world around us. And that is something that’s really scary, that we could have an entire class of college graduates leaving university without, you know, the facility to think. And that, you know, brings us back to the start of the conversation, this existential crisis: the actual, literal value of your diploma might change, given how many students are depending on AI to get through college.
JCJ: So it would be devalued.
JW: Yes. That is something people are very nervous about. Yes.
JCJ: Well, it’s interesting because you talk in the piece about how OpenAI has not been shy about marketing to college students. You write that recently, OpenAI made ChatGPT Plus, which is normally $20 a month, free during finals. So these AI companies, they’re playing into this in some way. Are they not?
JW: Oh, yeah. I think they are, I know they are, very much aware of what’s happening. And, you know, you can look at the statistics in terms of visits to a site like ChatGPT, which invariably, you know, go down in the summer because school is out. That’s how frequently students are using it. And it’s, you know, the top visited site on many college campuses.
So, you know, everybody is aware that students are using it and how they’re using it. And the AI industry has not been shy about partnering with universities across the country, sort of selling, in many ways, the idea of AI the way it sold coding, you know, 10 years ago. There’s the sense that we need to train the next generation of the U.S. labor force to work in AI because it’s so critical to the economy and to defense. So those partnerships are really strong on university campuses, and that certainly has an influence on, you know, some administrators and how they think about AI on campus.
JCJ: Is there evidence that university honor codes matter? I mean, do students who have access to or an understanding of their university’s honor code change the way they use AI, or do we know?
JW: The honor code is interesting. It was sort of hard to parse.
I actually did go and pull the number of violations happening at universities across the country.
On a case-by-case basis, you know, luckily, you could go to a certain university, and many of them would have a kind of end-of-year report. And there was such a spike of cheating cases during COVID that it’s hard to understand where that ends and where AI begins. You know, before AI, COVID was really the golden age of cheating on university campuses.
So that, in a way, kind of, I think, prepared students, or at least, you know, raised their tolerance for cheating.
JCJ: What do you see as the most urgent question for universities to grapple with moving forward? And maybe for students to understand?
JW: Where do we draw the line? At what point does AI infringe on an education? And that’s a really big question. So I think that is what every professor needs to be asking.
Why am I assigning this, and to what extent should a student be permitted to consult AI? Because it’s not going away. It’s kind of the same question, to be honest, that, you know, I had to ask myself, that anybody who went to college had to ask: college is very expensive, so what am I paying for? You know, is it to have more free time on the weekends or at night? Am I paying to learn how to hack and cut corners, or am I paying to, you know, engage with the material that’s in front of me?
It’s a really hard question to ask, especially when you’re 18 or 19 and your brain hasn’t fully formed.
JCJ: Some really interesting questions and important answers that we’ve heard from you today. James Walsh, thank you so much.
JW: Thank you for having me.