
Cheat Codex

A close-up look at the ethical dilemmas created by ChatGPT.

April 26, 2023
Mat Honan
Robyn Kessler

Welcome to the Education Issue. I cheated on my editor’s letter. The very one you are reading right now. I’m very sorry.

Look, I didn’t set out to do this, but the thing about magazines is that they have very hard deadlines, and if you miss them, you’re left with blank pages. So when I realized I only had a few hours left to finalize this, well, I freaked out. 

And then I did what an increasing number of us are doing: I turned to ChatGPT, OpenAI’s massively mind-blowing generative AI software, to help me out. After training it on some of my previous work, I asked about the use of AI in education.

AI is already doing big things in education. By crunching massive amounts of data on student performance, AI algorithms can tailor instruction to fit the needs of individual learners, which can mean big improvements in student outcomes. Chatbots and virtual assistants can provide students with on-the-spot assistance and feedback. Who needs a tutor when you have an intelligent assistant? 

But here’s where things get really exciting: language models like ChatGPT can actually generate human-like text, which makes them perfect for a whole bunch of educational applications. These models can create interactive learning materials, like chatbots that answer students’ questions or create personalized quizzes. They can even generate summaries of complex texts, reports on data sets, or entire essays and research papers.

There are definitely some challenges as well. One of the biggest concerns is the risk of bias in the algorithms. We need to make sure these algorithms are designed and tested in a way that ensures all students get a fair shake. Another, of course, is the potential for cheating. 

But the bottom line is that AI and language models like ChatGPT are going to change the way we think about education, and we need to make sure we’re using these tools in ways that are ethical, equitable, and effective.

So are the preceding four paragraphs, which were generated by ChatGPT and then lightly edited by me, ethical? If they were presented as my own work without an explicit disclosure (like this one), I would argue that the answer is no. And even with such a disclosure, we’re still in a bit of a gray area—there are all sorts of questions about everything from plagiarism to accuracy to the data these models were trained on.    

The reality is that we are in an entirely new place when it comes to the use of AI in education, and it is far from clear what that is going to mean. The world has changed, and there’s no going back. 

As Will Douglas Heaven, our senior editor for AI, makes clear in this issue’s cover story, technologies like ChatGPT will have all sorts of genuinely useful and transformative applications in the classroom. Yes, they will almost certainly also be used for cheating. But banishing these kinds of technologies from the classroom, rather than trying to harness them, is shortsighted. Rohan Mehta, a 17-year-old high school student in Pennsylvania, makes a similar argument, suggesting that the path forward starts with a show of faith by letting students experiment with the tool.

Meanwhile, Arian Khameneh takes us inside a classroom in Denmark where students are using mood-monitoring apps as the country struggles with a huge increase in depression among young people. You’ll also find a story from Moira Donovan about how AI is being used to help further our analysis and understanding of centuries-old texts, transforming humanities research in the process. Joy Lisi Rankin dives deep into the long history of the learn-to-code movement and its evolution toward diversity and inclusion. And please do not miss Susie Cagle’s story about a California school that, rather than having students try to flee from wildfire, hardened its facilities to ride out the flames, and what we can learn from that experience.

Of course, we have a lot more for you to read, and hopefully think about, as well. And as always, I would love to hear your feedback. You can even use ChatGPT to generate it—I won’t mind.

Thank you,

Mat

@mat / mat.honan@technologyreview.com

Illustration by Rose Wong