How do we humans fit into the loop of computers crafting content for other computers?
Who remembers Welcome to Night Vale? The podcast aired a radio hour from the fictional town of Night Vale, where ghastly news items were reported in monotone. The stream felt unending, the information conveyed so matter-of-factly. It was so easy to listen to that I used it as a way to fall asleep. But once you got past the radio-hour familiarities—the chimes, the broadcaster voice, the sounds of the studio—the substance ranged from eerie reports to awful accidents.
But none of it sounded human enough to warrant fear. The only thing that made it familiar was the archetypal way the town's news was reported; it was an upside-down town, for aliens perhaps, that copied a familiar human style of communication—Americana, to narrow it further—and ended up sounding like a horrific NPR.
When I read back what AI tools produce for me, I can’t help but think of Night Vale. ChatGPT, followed by many successors, adopted a user-friendly, neutral, yet confident voice for itself, then masqueraded as each of us when asked to write on our behalf.
What we’re seeing in the mirror is an amplified version of what we wrote or prompted, somewhat mangled and a bit overenthusiastic (or declarative). Maybe even more a fan of the em dash than we are. But overall, we’re beginning to recognize patterns. The em dash is a hot topic of conversation precisely because of how prolific it has become amongst AI authors.
We’re all noticing the arc toward homogeneity; this has become a big topic of conversation for design, as screens are pumped out by Cursor and Stitch. This is paralleled in writing, no matter the topic or medium, whether writing this very blog post or copy for a push notification.
This production carries a sameness that is harder to pin down than AI slop. It’s not clumsy, nor a hallucination. It’s not ostentatiously fake. It’s more refined and more neutral, yet homogeneous in style, and also in thought. Even when we begin with an innovative idea or a new hot take, it gets lost in the stream when it’s picked up and recycled immediately, and it compounds on itself. It’s becoming background noise, like Welcome to Night Vale.
I’ve begun referring to it as inference noise. When human review narrows or becomes non-existent, or when models are looped together without enough friction, each AI intervention adds a layer of confident paraphrasing.
The original intent gets laundered through enough inference that what emerges may be technically coherent but meaningfully adrift from the original message. It’s the modern-day telephone game, but the static is invisible because the output is fluent enough. You never get back to the source to find out the original phrase was, “AI is prone to hallucinations,” not “AI likes daydreaming in its free time.”
There are some easy patterns you can pick up: em dashes, declarative sentences, groups of three. But as we refine models and provide personal preferences (I told Claude I prefer a comma), the punctuation signifiers will grow less intense. Perhaps we’ll continue to recognize the tone, or the homogeneity itself, because it stays in the same confident, calm register until it stops sounding like anyone other than AI. That’s why it’s harder to catch; it doesn’t sound exactly wrong—it’s slop’s quieter, older sibling. And it's likely the more insidious problem, as we lose intentionality and gain brain rot.
Look at LinkedIn. Many thought pieces are well-structured, properly hedged, and hit all the expected beats. A lot of them may have been prompted into existence in under a minute. We read them, engage with them, let them shape (often subconsciously) our distributed consensus; we comment and maybe write our own spin-offs. Are we building a shared understanding off of one another, or are we just on standby while models interpret other models?
We used to keep an appraising eye on what was written by a human versus a computer. Think of early ChatGPT use in college admissions essays and cases of plagiarism. The differentiation is becoming harder to spot, and as the line blurs, the conversation has shifted accordingly. Now, as we move rapidly into co-creation territory, we’ll want to discern what counts as epistemic agency—the difference between having processed information and having genuinely constructed your own knowledge. Models are very good at the former. The latter is still owned by humans, and more valuable than ever.
Not just out of a moralistic pride in human-made work, but more pragmatically, because the ability to recognize subtle inaccuracies or losses in meaning—to feel that gap between nearly right and actually right—depends on having built that knowledge yourself and understanding what it looks like to convey it correctly.
Inference noise is inevitable, but it will strengthen our critical-thinking muscles if we choose to seek it out and recognize new patterns. Resisting it will require knowing our own signals—exactly what we want to convey—clearly enough to recognize when we’re no longer hearing it.

