Not too long ago, I had the pleasure of working as an instructor for a graduate information architecture course. Because the course was asynchronous and remote, most of the class interaction took place online through written postings. On occasion, I experienced something that I suspect has become common among teachers: the feeling that some of the responses my students had written were AI-generated. The famous telltale signs were there: contrastive parallelism, three-item lists, and of course, the em-dash (now referred to by some people as the “ChatGPT dash”).
Now, the course actually had a policy for AI: students were to cite AI-generated content as if it were any other source. But the suspicious content I saw lacked any such citation of ChatGPT or Claude or whatever. I pasted the content into a tool that purported to detect whether text was human-generated, and it declared that large chunks of it came from AI. But how could I be sure? I couldn’t. None of the hallmarks of AI writing are things humans couldn’t have created; AI draws its knowledge from human-written sources, after all. (I myself am a fan of the em-dash, as you have already seen in this post.)
Ultimately, I never said anything about the suspicious postings. I gave my students the benefit of the doubt. Not every educator takes this approach, however—and not every accusation is warranted, either. I have started to see students voicing their frustrations online at having been falsely accused of using AI to write their assignments. The AI age, among its many disruptions, has brought with it a culture of paranoia targeting any young person who can genuinely write well.
“You Aren’t Capable of This”
Much thought has been given to the question of how AI writing can sound more human. I’m more interested in the reverse: How can human writing sound less like AI?
I am an adult who is out of my schooling years. I guess I have “permission” to write well. It is conceivable that I, in fact, know what an Oxford comma is. But what if I were a precocious schoolchild who really, truly did write better than average? How could I convince my teachers that my words came from… me?
It’s got to hurt to be accused of using AI; the accusation goes deeper than saying, “You were lazy and took the easy way out.” It’s like being told, “I don’t believe you’re capable of this.” But how can you prove them wrong?
The most obvious method to prove that your writing is genuine is to write in front of others, in a controlled, proctored environment. Eyes on you at all times, no screens allowed, cavity searched for devices that can vibrate in Morse code. Maybe that will have to be what teachers resort to (minus that last part, hopefully).
The Take-Home Essay is Cooked
When it comes to take-home essays, what can you do as an instructor who wants to prevent students from using AI? Honestly—nothing, as far as I can tell.
There are a handful of artifacts that student writers can offer as evidence, such as keystroke logs or a recorded sequence of drafts. Some teachers now require students to submit not only their finalized papers but also recordings of their document revision history. (A paragraph that suddenly appears fully formed out of nowhere reeks of copy-paste.) Still, even in these settings, the content could have been generated off-screen first, then copied over one keystroke at a time. Conceivably, one could even train an AI agent to mimic the start-stop rhythm of human authorship, complete with incremental sentence evolution, typos, rewrites, and hesitation edits.
To avoid being falsely accused of having used AI, some students wonder if they ought to introduce deliberate imperfections into their writing. After all, one of the hallmarks of AI-generated text is its cleanliness:
Because large language models work by predicting the next word in a sentence, they are more likely to use common words like “the,” “it,” or “is” instead of wonky, rare words. This is exactly the kind of text that automated detector systems are good at picking up, Ippolito and a team of researchers at Google found in research they published in 2019.
But Ippolito’s study also showed something interesting: the human participants tended to think this kind of “clean” text looked better and contained fewer mistakes, and thus that it must have been written by a person.
In reality, human-written text is riddled with typos and is incredibly variable, incorporating different styles and slang, while “language models very, very rarely make typos. They’re much better at generating perfect texts,” Ippolito says.
“A typo in the text is actually a really good indicator that it was human written,” she adds.
Source: https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/
This kind of strategy has been employed for years by students trying to pull one over on their teachers. Vivek Ramaswamy wrote about it in one of his books:
It reminded me of a kid in my high school French class who used to cheat by stealing the answer key to every test… except he would always intentionally write the wrong answer to just one question each time, as a decoy to prevent him from getting caught.
p. 197, Woke, Inc.
But “getting caught” in this case would be… what, getting caught at actually being a good writer? That’s twisted.
Short of outright typos, another strategy is to simply dumb down one’s grammatical complexity. One poster on Hacker News believes that the em-dash should now be avoided, for example:
The em dash is now a GPT-ism and is not advisable unless you want people to think your writing is the output of a LLM.
What AI Can’t Write
I can think of a couple of other ways to signal that a piece of writing was not generated by AI. The first is to write about things that AI refuses to opine on.
For example, I once asked DeepSeek, a Chinese LLM, to tell me why Tiananmen Square is famous in the West. It started to type a response, and then its answer suddenly vanished, replaced by a blunt message that it was unable to complete my request. Therefore, if I turn in a thoughtful essay about certain events that happened at that spot in the spring of 1989, my teachers could at least be sure that I didn’t use a Chinese LLM to generate my paper. Of course, I may have used an American LLM instead. So how about I write about something taboo in the West? Even then, the American models will probably prove less prone to filter themselves than expected. One way or another, anyone can conceivably find a way to get an LLM to generate text on any topic from any possible position, no matter how ideological, contrarian, polemic, or flawed it may be.
The other way one could signal that their words could not have come from AI is to write about things that are too new to be part of an AI’s knowledge base. If I write a review of a movie immediately after stepping out of its world premiere, readers can tell that only someone who was actually there could have described key details of the film. AI would only be able to produce something generic that might sound true but wouldn’t be a genuine review. This is hardly a sustainable strategy, though, since LLMs have gotten better at scouring the web and incorporating recent social media posts into their responses.
Curation is Writing
I don’t think that anyone should feel pressured into deliberately making their own writing worse, just to give the impression that their words didn’t come from AI. If you know how to write well, be proud of that. Use the em-dash to your heart’s content. The burden of proof lies with the accuser, not the accused.
Plus, even if you did use AI to tighten up your writing, that may not be the worst thing in the world. Of course, it raises questions of what makes writing “pure.” What makes writing “yours”? If you had an editor, or used spellcheck, or looked up a synonym in a thesaurus, does that deny you true authorship? There is undoubtedly room for debate on this point, but I would say that as long as the core ideas, structure, and argument came from you, you can argue that it’s yours.
If an LLM generates a choice of sentences for you and you decide to keep some and ignore others, that’s curation, and curation is a form of authorship. But letting AI do the bulk of the writing, where you delete only a fraction of its sentences but keep the majority—that doesn’t feel like true writing to me. Maybe you disagree. The line where authorship ends is debatable.
Writing is Thinking
I’m reminded of what I wrote in my Looking Forward to 2026 post: the real value of writing is the process of refining and clarifying your thinking. The article that emerges at the end is the artifact, but there is purpose in the journey of getting there. To truly write something yourself is to simultaneously prepare yourself to talk about it, defend it, debate it, and deepen it. That’s why there is value in writing even if no one reads what you wrote.
Doctoral students don’t need to worry about being accused of writing with AI as much as undergraduates do, because unlike the undergrads, they must give an oral thesis defense at the end of their schooling. They stand in front of others, in the flesh, and prove that the information lives in their brains. That’s the best way to respond to anyone who accuses you of having used AI to write: calmly tell them that you are ready to talk through and defend the content of your writing at any moment.
The proliferation of AI feels like a sort of catastrophe in education, but there were previous “catastrophes” that the institution managed to move past: calculators, search engines, Wikipedia, online essay mills. To overcome this latest disruption, educators may have no choice but to resort to more face-to-face, in-person methods for evaluating student understanding. They will need to check understanding in ways other than reading a paper.
The true question teachers need to ask of their students is not, “Did they use AI to write this?” but rather, “Did they understand what they wrote?” This is something that can only really be proven in person. If you have truly written what you say you’ve written—if you have wrestled through that process—you are more than prepared to demonstrate that understanding.


