Why Totalitarian Responses to Students’ Generative AI Use Are Not the Answer
And How to Teach Them Ethical AI Use in Writing Classrooms
Disclaimer: If you’re one of those people who hate reading the discursive narratives that come before the actual recipes on food blogs, here’s the open resource I’m sharing this week: a training on possible use cases of generative AI in a college first-year writing classroom. I created it last weekend instead of enjoying the beautiful Boise spring weather like a normal person. I used my own syllabus policy for the training, but you could easily modify this to fit your own policy. The Google Doc includes links to a PowerPoint and an h5p version of the presentation, which I will also add to Write What Matters. Enjoy!
I took a personal day on Monday, April 8, 2024, knowing full well I would pay for it (I had 60 persuasive arguments and 25 capstone project proposals waiting to be graded on Tuesday morning). But there was an event I couldn’t miss.
Like millions of other Americans, I wanted to see the solar eclipse in totality. I first saw a total solar eclipse in 2017 in Ontario, Oregon, about 50 miles from my home, and it was life-changing. Sure, I get the science and all that (my oldest kid is an astrophysics Ph.D. student at Boulder), but this was science magic. The sudden stillness and chilled air, the 360-degree sunrise, the rush of confused birds. The sun’s corona, lacy ribbons of superheated gas, curling and twisting around the moon’s black face. The brilliant burst of the diamond ring as the sun entered and exited totality.
(This is a picture taken by my professional photographer friend Crystal Ivie, who was also in Lebanon, Indiana, for the eclipse.)
For my second total eclipse, in Lebanon, Indiana, I got all these things and something more: I experienced the rare spiritual beauty of an emergent community in a crowd of strangers.
The 2017 total eclipse was not well attended by Boise folks. Many figured that since we were getting a 99% eclipse here, the drive didn’t matter. Those who have seen totality know that 99% is not 100%. Not even close. For these two groups, there’s no sense of shared experience when talking about eclipses.
This time, my husband and I landed at a Shell station in Lebanon because the two-lane interstate between Chicago, where we were staying, and Indianapolis, where we originally planned to go, was packed with cars whose occupants had the same brilliant idea we did. We pulled off at the Shell when the eclipse began, heading into the convenience store to purchase snacks.
Five minutes before totality, every available inch of pavement at that gas station was occupied by a vehicle, parked legally or not (nobody cared, and nobody was going anywhere). There were families and couples from Chicago, a young eclipse chaser from the West Coast with a serious-looking camera who excitedly shared his photos of the 2017 eclipse, and a grandpa with a telescope who let us all take turns viewing the moon’s progress. If we had extra eclipse glasses, we shared them with those who didn’t have any.
In that moment when the sun disappeared, all of us cried out in unified awe and wonder: Republicans and Democrats, young and old, Black and white, religious and non.
I don’t know where we find opportunities to create community like this anymore. Certainly not in academic departments, and certainly not around the topic of generative AI use in college courses.
I came home from this magical event to the latest iteration of an ongoing drama around ChatGPT taking place in our philosophy department. I’m an English professor, but I was the philosophy department chair for a couple of years, and I still teach ethics occasionally, so I’m on the group’s email list. Anyway, I tried to stay out of it. I really did. But when one professor declared his fervent desire to “punish” cheaters who used any sort of AI, I finally weighed in. “Punished is such an interesting word choice,” I wrote.
I do not feel the need to “punish” any of my students for anything.
For the record, I very much enjoy doing my own work. I don’t use AI to write these posts (or for the training I created) because I enjoy that kind of labor. “The work is play for mortal stakes,” as Robert Frost says.
But I also think generative AI is a useful tool for template (aka “boring”) writing. I’ve been putting it through several use cases, both in the classroom and in my personal life, since November 2022. My very first prompt checked whether ChatGPT could write our Philosophy 103 signature assignment essay applying two ethical theories (e.g., utilitarianism and deontology) to a specific ethical problem (e.g., eating meat). Since it could do this fairly well, even back then, I strongly recommended that the department create a new and different assessment that was more ChatGPT-proof.
When I do use generative AI, I cite and acknowledge. For example, I didn’t really feel like translating my CV into narrative form when I was up for promotion, so I had Google Bard (now Gemini) do it for me, and I linked to the prompt and the chat in my cover letter explaining exactly what I had done. Sure, I could have added verbs myself. But the assignment felt like busy work to me.
I’ve already written about how we need to rethink assessment in the age of generative artificial intelligence. If we give students busy work, I can completely understand how they might not value it or the labor that goes into it. Also, like many first-year writing instructors, I have been teaching template writing for many years now. I used to use the excellent Graff and Birkenstein primer They Say/I Say before I switched to an open educational resource.
But my question for my philosophy colleague is this: How are students supposed to know they are cheating when they use generative AI tools?
I mean, there was literally a Super Bowl ad for “your everyday Copilot.” When I open Google Docs, there’s a magic wand icon that says “help me write.”
I personally cannot adopt the mindset that students want to cheat. My students are all terrified that they will be accused of plagiarism. I think students want guidance from us. What are these tools? How can they be useful? When should their use be avoided? Who can students go to for help if they aren’t sure? And what are the ethical concerns around these tools? That’s a question I wish more of us had asked when social media rolled out in the mid-2000s.
So I decided to create a training designed for my own students, openly license it, and share it with you in the hopes that it will spark some conversations. Some of you (or your colleagues) may not have used chatbots in these ways. This tutorial will show you how generative AI can be used for both good and evil throughout the stages of the writing process.
We are teachers. These tools are new. They rolled out without warning and without a thorough consideration of their ethical consequences. But they are here, and students need us to teach them about these tools.
We also have a unique opportunity to impress on our students the gravity of the moral problems associated with generative AI development. We can teach them about bias, privacy, consent, and disinformation (climate change!!!).
Our students want us to be leaders on AI. They do not want us to punish them for trying to learn.
If you made it this far and didn’t grab the resources at the beginning of the essay, here, at last, is the recipe. Please reach out if you have questions or feedback—it’s definitely a work in progress, like everything I share. I have really appreciated the online emergent communities that have supported and helped me as I find my way through this strange new world.
This Google Doc includes links to a PowerPoint and an h5p version of a presentation designed to help students apply a “middle of the road” generative AI syllabus policy to several possible use cases in drafting a research paper for a first-year writing course. You can easily modify it to fit your own course and policy.
Bon Appétit! And if you didn’t see the American eclipse, here are some future travel opportunities from the New York Times. (Do it!)