Until recently, I worked in a Philadelphia-area charter school with a colleague who credits his service in the Army for his capacity to regulate his emotions. As my direct manager — he was the principal, I was an assistant principal — he coached me on this and other leadership skills, such as interpersonal communication, and we had countless conversations about not making the work personal. Though I try to remain open to feedback and continuous growth, this was, and still is, challenging for me. I’m the type who has to pare down my exclamation points when I edit an email.
This principal sent our staff a holiday greeting email inviting us to “take this time to rest, recharge, and cherish the moment,” reminding us that “holidays are a time for joy, laughter, and togetherness,” and thanking us for being a “beacon of strength for our community.” A few days later, he announced via email the promotion of a colleague who “has been invaluable to our team and the overall success of our organization” and expressed his confidence that she “will continue to excel and make significant contributions in this capacity.”
At first, I felt moved by the sentiment and emotional energy that I assumed he spent crafting such a poignant and inspired note. But it also didn’t feel right — the way I can tell when a student didn’t do their own work. And so I sent these messages through GPTZero, an AI detection tool, to confirm what I already knew: He didn’t write these emails. (These are not the real quotes; I just asked ChatGPT to generate emails and pulled a few lines that sounded cliché — seriously.)
No one else seemed to notice the subtle shift in his style, and I resisted the urge to tell everyone I knew, to protect him from the shame he’d certainly feel if we discovered his secret, all while wondering: Why does this feel wrong to me? Did he not care about authenticity? Or did it not matter how we felt, because it got the job done? I took it personally. And who was he to coach and evaluate me on interpersonal communication while he himself was using AI for this same purpose?
To me, this is clearly wrong: as wrong as plagiarizing an essay or not citing your sources, and just as deceitful. For others — who, to be clear, also value honesty — AI-assisted writing that helps us both draft and establish the desired tone is simply the latest tool available to under-resourced professionals everywhere.
At larger, better-heeled institutions, people like my former colleague have a staff, a lawyer, or a PR firm to write these kinds of emails. To them, my sense that it’s shameful not to write your own emails is misguided, since many people already don’t, and that’s been the case since time immemorial, when a boss would tell his secretary to take down some notes and draft a letter on his behalf.
So, why pick on an administrator at a public school? After all, an editor and copy editor are helping me smooth over this essay. Is that also dishonest to you, the reader?
My real worry is that this ethical and pedagogical problem is already presenting itself in schools, where I’ve spent most of my career, and it’s going to affect the education that students receive. This is yet another seismic technological shift that is upending how schools function, and we are already seeing the impact on academic integrity; there is much more to come. We see the impact of cellphones in school. We see the impact of social media on our kids’ mental health. We can’t just shut it all down or ignore it — the impacts are immediate and pervasive, and we problem-solve as we go.
Students use AI to generate whole essays and pass them off as their own, and we’re barely keeping up with technology as it changes, let alone staying a step ahead of the kids. At this time last year, I found 10 student essays that looked the same, and no one could cite the source. The best I could do was lecture them on academic integrity, call homes, and have them rewrite the essay. I could’ve given them an F, but that’s hard to come back from — these were seniors in the third quarter.
My best guess was that the essays were AI-generated, but I couldn’t prove it; that was my first experience with detecting AI. As the technology develops, we have more tools for detection. Just a few months later, I demonstrated to a group of high school students how we can detect AI, reminding them of our plagiarism policy and that there would be consequences.
But that threat is, in itself, a half-truth. The consequences are just not there; we’re not set up for them. A large public school system cannot simply fail every student who tries to get away with an AI-generated essay. Teachers do everything they can to help students pass, including re-takes and extended deadlines. Teachers don’t like to accuse students of plagiarism, which forces uncomfortable and sometimes confrontational conversations. You can’t have everyone failing, and grades are high stakes, especially for high school students applying to colleges.
In some systems, there is pressure to have a percentage of students pass your class, and if you don’t meet that, you’re viewed as the problem, i.e., you didn’t teach it well enough. Universities may be better able to stick to their plagiarism policies — including expulsion — but at this scale, would they want to kick everyone out?
When students struggle with academics or well-being, we teach. And when they struggle with how to ethically use AI? We need to teach.
AI is here to stay. Our system is being put to the test, and the kids won’t stop just because we tell them to. It challenges our integrity and their ability to learn to write, so we need to get really clear on how to use it, define acceptable use, and see it as a tool to transform how we work.
Public schools need to address the problem head-on, just as we did in the pandemic. This is a pivotal shift, one we can handle. We went from in-person instruction to virtual to hybrid and back. One of the results of this is 1:1 tech, now in most school systems. Kids have computers in their hands, and they don’t always use them for good. Today, I picked a student’s smashed laptop out of the trash. I walk into classes and see students watching YouTube videos during instruction. I see students staring at their screens most of the day rather than talking with peers.
On the other hand, I also saw a graphic design class using iPads to develop ideas for improving our facilities. We could throw it all away — put the computers back in carts, restrict their use, try to stop it — or we could offer solutions to teach them to use it well.
We could start by embracing it, by giving an assignment and telling students to use AI to generate the best possible product. They’ll learn it by doing it, and so will we. We can talk about what makes it great. Students can score it on a rubric and consider ways to improve it. Then we can look at how we can replicate it ourselves and use it as a model. And, of course, we need to give teachers the green light to do this deeper learning outside of the curriculum.
AI also has the potential to reduce teachers’ workload. Lesson planning, creating exemplars, developing a rubric, scoring responses, analyzing data — all things AI can help with, like a personal assistant, if we take the time to figure out how. It can help us write an email in a tone that we know is necessary but that we struggle to produce ourselves. Maybe the colleague I mentioned earlier is just ahead of the game.
We need to help our children understand that they will be the builders and content creators in the future. It’s important they learn how to write and create, and we need to build their agency to choose how and when to use AI, to be able to spot it and critique it, and to opt in or out when it’s appropriate. Especially at lower-achieving schools, typically urban schools filled with students of color, where there already is an opportunity gap — we cannot let this pass us by.
The most resourced and innovative schools and educators will be curious and eager to try it out for themselves. Private or suburban schools with resources might put an AI team together to pilot this work. Students who are not struggling to meet grade-level proficiency might have more autonomy to design their own learning, and be given the space and trust to experiment with AI. If higher-achieving students are accelerating their learning through critical use and application of AI, and we’re restricting or even denying its use at under-resourced schools, we’re only widening the achievement gap.
My initial emotional response is valid — AI is inauthentic. I had to move beyond my own biases to find the courage to face it. Change is scary. In my role as a school leader, I recognize that I’ve been caught up in all the other day-to-day needs that demand my time. Researching and experimenting with AI feels too time-consuming to take on. But the more I read about it, the more I realize I don’t know. The more I play with it, the more questions I have.
The longer we wait for new wide-scale pedagogical practices that come from the top, the further and deeper AI technology will progress, and I don’t want my kids or me scrambling to catch up. At the ground level, we need to start where we are and be nimble. Whether we like it or not, we need teachers everywhere — who are already on the front lines of inequality — to do what we’ve always done: make the best of what we have, share best practices and resources, and send kids out into the world prepared for whatever new era we’re entering.
Marissa Biondi is a career educator and school administrator who is passionate about equity and educational reform. She has worked in urban public and charter schools in NYC, Camden, and Philadelphia, and loves a philosophical debate about why we do what we do. She’s a workhorse and motivator who drives systemic change with vision and emotional intelligence. She earned her bachelor’s degree in English writing from the University of Pittsburgh and her master’s degree in TESOL from CUNY Lehman College. As someone who doesn’t use social media, she’s excited to use her voice by writing personally and, for the first time, publicly. This piece originally ran in Root Quarterly’s Spring 2024 issue. Subscribe to Root Quarterly here.