Jelly Beans for Grapes: How AI Can Erode Students’ Creativity

Let me try to communicate what it feels like to be an English teacher in 2025. Reading an AI-generated text is like eating a jelly bean when you’ve been told to expect a grape. Not bad, but not… real.

The artificial taste is only part of the insult. There is also the gaslighting. Stanford professor Jessica Riskin describes AI-generated essays as “flat, featureless… the literary equivalent of fluorescent lighting.” At its best, reading student papers can feel like sitting in the sun of human thought and expression. But then two clicks and you find yourself in a windowless, fluorescent-lit room eating dollar-store jelly beans.


There is nothing new about students trying to get one over on their teachers — there are probably cuneiform tablets about it — but when students use AI to generate what Shannon Vallor, philosopher of technology at the University of Edinburgh, calls a “truth-shaped word collage,” they are not only gaslighting the people trying to teach them; they are gaslighting themselves. In the words of Tulane professor Stan Oklobdzija, asking a computer to write an essay for you is the equivalent of “going to the gym and having robots lift the weights for you.”

Just as lifting weights is both the training and the proof of training, writing is both a learning experience and the evidence of learning. Most of the learning we do in school is mental strengthening: thinking, imagining, reasoning, evaluating, judging. AI removes this work and leaves a student unable to do the mental lifting that is the proof of an education.

Research supports the reality of this problem. A recent study at the MIT Media Lab found that the use of AI tools diminishes the kind of neural connectivity associated with learning, warning that “while LLMs (large language models) offer immediate convenience, [these] findings highlight potential cognitive costs.”

In this way, AI poses an existential threat to education, and we must take that threat seriously.

Human v. Humanoid

Why are we fascinated by these tools? Is it a matter of shiny-ball chasing, or does the fascination with AI reveal something older, deeper and potentially more worrisome about human nature? In her book The AI Mirror, Vallor uses the myth of Narcissus to suggest that the seeming “humanity” of computer-generated text is a hallucination of our own minds, a mirror onto which we project our fears and dreams.

Jacques Offenbach’s 1881 opera, “The Tales of Hoffmann,” is another metaphor for our contemporary situation. In Act I, the foolish and lovesick Hoffmann falls in love with an automaton named Olympia. Exploring the connection to our current love affair with AI, New York Times critic Jason Farago observed that in a recent production at the Met, soprano Erin Morley emphasized Olympia’s artificiality by adding “some extra-high notes — almost nonhumanly high — absent from Offenbach’s score.” I remember this moment, and the electric charge that shot through the audience. Morley was playing the 19th-century version of artificial intelligence, but the choice to imagine notes beyond those written in the score was supremely human — the kind of bold, human intelligence that I fear might be slipping from my students’ writing.

Hoffmann doesn’t fall in love with the automaton Olympia, or even perceive her as anything more than an animated doll, until he puts on a pair of rose-colored glasses touted by the optician Coppelius as “eyes that show you what you want to see.” Hoffmann and the doll waltz across the stage while the clear-eyed onlookers gape and laugh. When his glasses fall off, Hoffmann finally sees Olympia for what she is: “A mere machine! A painted doll!”

… A fraud.

So here we are: stuck between AI dreams and classroom realities.

Approach With Caution

Are we being sold deceptive glasses? Do we already have them on? The hype around AI is hard to overstate. This summer, a provision of the vast budget bill that would have prohibited states from passing laws regulating AI almost cleared Congress before being stripped out at the last minute. Meanwhile, companies like Oracle, SoftBank and OpenAI are projected to invest $3 trillion in AI over the next three years. In the first half of this year, AI-related investment contributed more to real GDP growth than consumer spending did. These are reality-distorting numbers.

While the greatness and promise of AI remain, and may always remain, in the future, the corporate prophecies can be both enticing and foreboding. Sam Altman, CEO of OpenAI, the company behind ChatGPT, estimates that AI will eliminate up to 70 percent of current jobs. “Writing a paper the old-fashioned way is not going to be the thing,” Altman told the Harvard Gazette. “Using the tool to best discover and express, to communicate ideas, I think that’s where things are going to go in the future.”

Teachers who are more invested in the power of thinking and writing than they are in the financial success of AI companies might disagree.

So if we take the glasses off for a moment, what can we do? Let’s start with what is within our control. As teachers and curriculum leaders, we need to be wary of how we assess. The lure of AI is great, and although some students will resist it, many (or most!) will not. A college student recently told The New Yorker that “everyone he knew used ChatGPT in some fashion.” This is in line with what teachers have heard from candid students.

Adjusting for this reality will mean embracing alternative assessment options, such as in-class assignments, oral presentations and ungraded projects that emphasize learning. These assessments would take more class time but might be necessary if we want to know how students use their minds and not their computers.

Next, we need to critically question the intrusion of AI into our classrooms and schools. We must resist the hype. It is difficult to oppose school leadership that has fully embraced the lofty promises of AI, but one place to start the conversation is with a question Emily M. Bender and Alex Hanna ask in their 2025 book The AI Con: “Are these systems being described as human?” Asking this question is a rational way to clear our vision of what these tools can and can’t do. Computers are not, and cannot be, intelligent. They cannot imagine, dream or create. They are not and never will be human.

Pen, Paper, Poetry

In June, as we approached the end of a poetry unit that contained too many fluorescent poems, I told my class to close their laptops. I handed out lined paper and said that from now on we would be writing our poems by hand, in class, and only in class. Some guilty shifting in chairs, a cloudy groan, but soon students were searching their minds for words, for rhyming words, and for words that might precede rhymes. I told a student to go through the alphabet and speak the words aloud to find the matching sounds: booed, cooed, dude, food, good, hood, etc.

“But good doesn’t rhyme with food…”

“Not perfectly,” I replied, “but it’s a slant rhyme, perfectly acceptable.”

Rather than writing four or five forms of poetry, we had time only for three, but these were their poems, their voices. A student looked up from the page, and then looked down and wrote, and scratched out, and wrote again. I could feel the sparks of imagination spread through the room, mental pathways being crafted, synapses snapping, networks forming.

It felt good. It felt human, like your sense of taste returning after a brief illness.

No longer fluorescent and artificial, it felt real.
