According to Mashable, a recent Reddit discussion has surfaced six new linguistic patterns that users believe are dead giveaways for AI-generated text, moving beyond the infamous "ChatGPT dash" (the em dash). The list includes phrases like "and honestly?" and "no fluff," as well as structural habits like overusing short, fragmented sentences and the "it's not X, it's Y" contrast format. Redditors also point to excessive signposting with words like "firstly" and "secondly," along with generic, hollow engagement prompts like "I'm curious what others think." The report notes that while real humans use these techniques too, their concentrated and formulaic appearance in a single piece of text is now seen as a major red flag. This crowdsourced detection effort comes as AI companies continuously update their models, making old tells obsolete.
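The key word there is "concentrated": any one of these tells is innocent on its own, but several per paragraph is the signal. Here's a minimal sketch of that idea as a phrase-density heuristic. The regex list and the `tell_density` function are my own illustrative assumptions, not a real detector from the thread, and this absolutely should not be treated as a reliable classifier:

```python
import re

# Toy heuristic inspired by the tells listed above. Illustrative only:
# real humans use all of these patterns too, and models change faster
# than any phrase list.
TELLS = [
    r"\band honestly\?",                # hollow intensifier
    r"\bno fluff\b",                    # filler disavowing filler
    r"\bit'?s not \w+[^.]*, it'?s\b",   # "it's not X, it's Y" contrast frame
    r"\bfirstly\b|\bsecondly\b",        # heavy signposting
    r"i'?m curious what others think",  # generic engagement bait
    r"\u2014",                          # the infamous em dash
]

def tell_density(text: str) -> float:
    """Count flagged patterns per 100 words. A high density is a red
    flag, not proof of anything."""
    words = max(len(text.split()), 1)
    hits = sum(len(re.findall(p, text, flags=re.IGNORECASE)) for p in TELLS)
    return 100.0 * hits / words

sample = ("And honestly? It's not a setback, it's a setup. "
          "No fluff. I'm curious what others think.")
print(f"{tell_density(sample):.1f} tells per 100 words")  # prints "25.0 tells per 100 words"
```

The density framing matters more than the specific patterns: one "firstly" in a long post means nothing, while four tells in sixteen words (as in the sample) is exactly the concentrated, formulaic clustering Redditors describe.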
The AI Uncanny Valley of Prose
Here’s the thing about these “tells.” They’re not really mistakes. They’re actually the AI trying too hard to sound human. It’s learned from our writing that using an em dash adds flow, that signposting creates structure, and that asking questions fosters engagement. So it applies these tools with the subtlety of a sledgehammer. The result is this weird, performative prose that feels like it’s following a checklist for “Authentic Human Communication.” It’s the literary equivalent of an alien wearing a human skin suit—it gets the general shape right, but the mannerisms are just off. And honestly? It’s getting easier to spot.
Why AI Can’t Stop Itself
So why does this keep happening? Basically, large language models are statistical pattern machines. They see that phrases like "and honestly?" or structures like short, punchy sentences are correlated with persuasive or emotionally charged text in their training data. To maximize the perceived "quality" or "human-ness" of its output, the model leans into these patterns. It's optimizing for a score, not for genuine understanding. As one analysis points out, the em dash surge coincided with AI going mainstream because the models identified it as a high-value punctuation mark. Now that we've collectively flagged that, the AI will over-optimize for the next thing, and the cycle continues. It's an endless game of whack-a-mole between human pattern recognition and artificial pattern replication.
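That "optimizing for a score" failure mode can be shown in miniature. The sketch below is my own toy illustration, not how a real LLM works: a bigram model trained on a cliché-heavy corpus, decoded greedily so the highest-scoring continuation wins every time. The corpus and generator are invented for the example:

```python
from collections import Counter, defaultdict

# Toy illustration of pattern lock-in: whatever continuation scores
# highest in training gets reproduced, deterministically, every time.
corpus = (
    "it's not a setback it's a setup . "
    "it's not a flaw it's a feature . "
    "it's not a bug it's a feature . "
).split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def greedy_generate(start: str, steps: int) -> list[str]:
    """Always pick the single most frequent next word: no sampling,
    no temperature, so the top-scoring pattern dominates."""
    out = [start]
    for _ in range(steps):
        best = counts[out[-1]].most_common(1)
        if not best:
            break
        out.append(best[0][0])
    return out

print(" ".join(greedy_generate("it's", 6)))
```

Because "feature" follows "a" more often than any alternative, the generator converges on one contrast-frame cliché and then loops back into it. Real models mitigate this with sampling and temperature, but the underlying pull toward high-frequency patterns is the same shape.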
The Real Victim Is Trust
This has a chilling effect that goes beyond cringey LinkedIn posts. When every other piece of content uses the same hollow, formulaic hooks, it erodes trust in all communication. Is that heartfelt reflection from a colleague genuine, or is it a bot using "contrast framing" to seem deep? The overuse of these techniques by AI pollutes the well for everyone, even academic writers who might just love a good em dash. The most damning clue highlighted by Redditors isn't a phrase—it's the lack of engagement. An account that says "I'm curious what others think" and then never responds is probably not a person. That's the ultimate tell: the absence of a human on the other end. It makes you want to scream into the void.
An Arms Race With No Winner
What's the endgame here? We're locked in an arms race: humans look for patterns, AI learns to hide them, and then we find new ones. The original Reddit thread is a fascinating snapshot of this collective detective work. Users are already sharing prompts to force AI to avoid these tics, but as one commenter notes, it rarely helps for long. The core issue is that AI isn't communicating; it's assembling. It's pulling from a vast dataset of human expression, but without intent, experience, or a point of view. So it will always be a step behind, mimicking the shell of good writing without the substance. For now, if you see a cascade of "It's not a setback. It's a setup."-style sentences, you can be pretty sure you're not dealing with a literary genius. You're probably dealing with a very eager-to-please robot.
