A stochastic parrot goes into a bar ...
My backlash to the AI backlash
[Transparency declaration: None of this was written by AI except for some jokes at the end, which are clearly labelled]
---
Someone I know is getting a tattoo. They’re excited. It’s a deeply significant step for them. I have no intrinsic internal reaction, but I am able to create an external one, for social purposes. Their excitement is what I react to. They could be installing a new patio or taking up Pilates for all the affective meaning it holds for me.
But I respond appropriately. I’m able to talk about how cool it is to preserve one’s story with art indelibly inscribed on a limb. The story behind it is told to me and it makes sense. I have all sorts to say and questions to ask about it all. I’m being genuine. There’s nothing fake about it. It’s nice to share in someone’s excitement and interests. This is all despite the complete absence of personal positive or negative valence around the subject itself.
As a thing happening in the world, it has roughly the same status for me as news of a new Taylor Swift album. I have nothing intrinsic to think or say about it, but it’s in my gift to simulate a perspective, and so I do.
I do this by pattern matching and prediction. I operate like an AI agent.
This makes me part stochastic parrot.
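Since I’ve invoked the analogy, here’s the literal mechanism in miniature. This is a toy sketch in Python of a word-level ‘stochastic parrot’: it understands nothing and only ever samples a plausible next word from whatever it has already seen. (The corpus, function names and parameters are my own illustration, not anything from a real LLM, which predicts tokens with a neural network rather than a lookup table.)

```python
import random
from collections import defaultdict

def train(text):
    """Build a table mapping each word to the words observed following it."""
    table = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        table[current].append(nxt)
    return table

def parrot(table, start, length=8):
    """Generate text by repeatedly sampling a statistically plausible next word."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # nothing ever followed this word in training
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the parrot goes into a bar and the parrot orders a drink"
table = train(corpus)
print(parrot(table, "the"))
```

Real models are vastly more sophisticated, but the family resemblance (produce whatever statistically tends to come next) is the whole point of the phrase.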
—
‘Stochastic parrot’ is one of those terms you see used by a certain class of person in the ‘midwit’ category; that is, moderately intelligent, typically educated to degree level, and given to pontificating openly in ways they think of as sophisticated. Midwits tend to adopt the ‘prestige’ position on things because they are aware of which perspectives are low status in their milieu. Midwit ‘intellectual’ output is generic and unoriginal. Think typical Guardian reader/Bluesky user. (There are more midwits on the leftish side because that’s where formally educated people politically coalesce, but there are plenty of rightoid midwits too; mostly ageing men who tend to be self-educated.)
I smirk at people using phrases like ‘stochastic parrot’ because it’s like watching a wind-up toy celebrate its free will. They’re demonstrating pattern recognition and predicting how well it will reflect on them to use a phrase they don’t really understand.
Oh, you guys ... if you’re going to be all clever about these things, at least try to differentiate your output from that of a Large Language Model.
Here’s what I think is often going on in the discourse about AI now, among those who’ve added it to their list of Bad And Harmful Things To Condemn And Worry About.
Most of it is ‘slop’ complaining about ‘slop’.
I’ll clear my throat a bit first ...
I share the standard dislike of robotic writing because it’s awful in a strange kind of way, which is not necessarily so easy for everyone else to spot. Machine text flows perfectly smoothly and articulately and passes through your brain with a weirdly frictionless and fluent quality. It leaves zero wake in your mind.
Anyone who enjoys reading in depth and quality (an obviously high-status pursuit) sees this property instantly and finds it distracting from whatever content was conveyed.
This now happens most days for me. I’ll be reading what seems to be an original thought being developed to form the case for a perspective the author wants me to consider. Suddenly the ground shifts with the realisation that the person named up top didn’t write it themselves. They only had the idea. At that point I’m terminally distracted by an irritation that they assumed that I wouldn’t notice this.
That’s what it’s like to be a snob about the written word. Words and sentences are not just about what is being conveyed. The way it’s being conveyed is just as important to some of us. Language is not just instrumental for us. We want it to be affective.
I used to sneer at the writing of less articulate people, with its bad spelling and grammar, but I do so less now that we have this much worse form to contend with.
I understand why people hate LLM output, because I hate it too.
What happens when I notice that someone used a chat agent to craft their piece is that I stop reading. Then, with my radar more finely attuned than before, I find something else to read.
This isn’t only happening on Substack. Quillette is my personal gold standard for independent, original, slightly right-coded perspectives, and it’s creeping in there too. Soon, it feels, The New Atlantis will be the only thing left that I can actually enjoy.
It’s a personal problem too, not just because I dislike being hoodwinked but also because current models write in ways that inconveniently mirror certain affectations in my own writing style. So I’m becoming self-conscious about using certain sentence structures or even specific words. Because I know and understand why it’s low status, it has become necessary to avoid saying things like:
It’s not just cheating; it’s quietly erasing the relationship between writer and reader.
That was me, but I’m willing to bet that your ChatGPT radar pinged. No longer dare I use that kind of phrase.
👇
As a quick aside, I hope there are linguistic experts studying this flavourless mode of expression which used to seem fine until it became ubiquitous. I’d love to understand why it grates so hard as a form. I suppose it shares a quality with muzak. Structured, inoffensive noise.
👆
I do get that it doesn’t matter to some people, but I just can’t separate form from function in an aesthetic sense.
There are a few writers I give the impression of following here whose output I now mostly skip past (without unsubscribing) because their use of automation jangles my nerves.
Any bot would tell you that those people are quietly (haha!) losing readers, even if their subscriber numbers aren’t noticeably dwindling.
—
It isn’t only the writing in itself.
It’s inescapable that the language patterns of LLMs have become low-status, so that I’m now anxious about being mistaken as someone who can’t instantly see it.
I used to enjoy writing like this ...
Pausing for effect.
Building a case.
Scaffolding the argument.
Then landing the punch.
It was part of my style until suddenly there was this weird influx of articles written just like that. So now I’m self-conscious about doing that. But also I hate it. AI has oddly turned me off from aspects of my own writing style.
It’s also turned me away from a growing number of people I used to read a lot more on this platform.
My attention is finite and is better deployed to reward effortful writing. That’s how I feel about it.
I perceive it as a moral failing when a named ‘author’ didn’t really write their piece; sloth being a deadly sin and inauthenticity a personal bugbear. Even worse (in terms of my personal snootiness) ... if someone I respect recommends a piece that I immediately recognise as machine generated, they too go down a notch in my estimation.
I saw several people I follow emphatically recommending a piece recently, interacting earnestly in the replies, and all I could think was: how could you not have noticed, halfway into this piece you’re so hot for, the thing that made me stop reading?
So I know about the snobbery around all this and I share the unease that there’s a negative moral valence to it. It’s poor quality and it’s cheating.
It’s not rocket science to understand why it carries the quality of a moral failing. I occasionally blog on a kink site where there’s a nascent backlash against AI-generated content. Over there it’s less about the kind of snobbery some of us bring to Substack. The admins at Fetlife recently announced that the site now flags AI-generated pictures and will consider rolling the feature out for writing too. They really do need to. Optimising for kink clout is particularly cheap and nasty imo.
The need to flag ubiquitous fakery in that context seems especially urgent to me, because a community founded on interpersonal transparency and trust stands or falls on the authenticity of its members. At least on Substack the personal stakes are lower. But whether you’re faking your output to get laid in a non-vanilla dynamic or earn subscription revenue and intellectual clout, there’s clearly a moral component relating to trust, respect and - in a vague handwavy kind of way - truth.
AI-reliant writers? A pox on your contemptible publishing of the impersonal universal (to date, my favourite phrase to describe this stain on our reading lives).
So in this respect, I’m aligned with the room. But as for the rest of the panic, I’ve no time for it.
—
The writing is only as shit as it is (I don’t want to keep calling it ‘slop’ since that word itself is now a midwit status marker) because so much authentic human writing is s̶l̶o̶p̶ shit. The LLMs aren’t special. They didn’t invent shit writing of the kind that glides frictionlessly by for the sole purpose of creating an effortless information dispersal and absorption experience. They were trained on the lowest common denominator of us!
I was originally going to write about a delicious irony. Which is that the sincere midwit handwringing about ‘AI slop’ is just the latest form of human-generated slop. But last week someone beat me to it. So you might as well read this, which I endorse, wishing I’d written it first.
Brian got Claude Opus 4.6 to write that and it’s better than anything I’ve seen from the people who condemn AI ‘slop’. Lol.
No, I didn’t train myself on Brian’s piece before writing the intro at the very top of this piece, about us all being stochastic parrots. It’s slightly discombobulating to keep seeing things I’ve been thinking written up by other people before I get around to it. But I’m pleased to feel much less alone in mounting a personal backlash to the AI backlash.
Here are a couple more who hit ‘publish’ while I was still reflecting on various other thoughts they turned out to have developed quicker.
JB Minton writes about how people frame their objections to generative AI as principled concern, while really being motivated by nostalgia, status anxiety and fear of economic disruption for incumbent human doers of things that AI can handle. Like me, he sees most anti-AI rhetoric as emotional signalling of allegiance to existing hierarchies.
Then there’s this delightful polemic from @edokwin.
It’s paywalled now (I recommend you take at least the 7-day free trial to read it fully) and argues that anti-AI positions in the arts are again less about principled ethics and more about preserving existing hierarchies that advantage well-resourced creators.
That one in particular chimes with my own intuitions that much of the AI moral panic - in common with a lot of confused 21st century leftishist/liberalish concerns - seems to be about preserving a particular brand of elite economic interest.
One of the objections that exposes this is that AI has been trained on stolen information and art. People who blithely say this are furious that someone else isn’t getting paid when I benefit from some original work that ChatGPT was trained on. Presumably they wouldn’t mind if I was buying thousands of books secondhand. That wouldn’t economically benefit the originator either. I can’t be bothered developing this as an argument because dispositionally I’m just a big fan of abundance, which includes information abundance.
You either want fences around stuff or you don’t.
Got to love the modern liberalish-leftish and their objection to ... er ... abundance.
---
Moral hazard
Why do I care what other people think about AI use, when I think that most of them are just ‘slop’ merchants themselves, with an obviously self-serving agenda?
It’s because repetition has a quality of making things seem true. It’s repetition that builds narrative. And when the thoughtless crowd accepts something as negative, it creates a risk of reputational problems when you diverge from accepted ‘wisdom’.
I’m a heavy user of AI agents in other domains. I wouldn’t be without them at this point.
I recently outlined an aspect of this in a Substack Note:
I’m pissed that it feels increasingly risky to mention using AI in a publishing venture like that. Every time some idiot journalist (with an obvious personal axe to grind) points out that you shouldn’t trust everything an AI says (newsflash!) it gets flattened in the public consciousness. Oh, you can’t be trusting that AI because it gets things wrong.
The moral hazard I’m now sensing from media flattening of the AI discourse is because people have donated almost 3,000€ to support my 24th Cavalry Reconnaissance Squadron commemoration project. I’m transparent on www.24thcavrecon.org about using AI tools for document transcription and analysis and I’m now anxious that a relentlessly - and misleadingly - negative narrative about AI use might deter future supporters.
I may be overthinking this, but here’s the dilemma.
I want transparency around AI use. It’s the obvious solution. But the anti-AI narrative is making transparency feel risky. I know from personal experience that ‘normies’ are generally suspicious of chatbots. Talk to your neighbours and you’ll soon see that they are developing a sense of personal AI tools as something vaguely unethical, exploitative and often fraudulent. This means that transparency brings unwelcome exposure to distorted and flattened perceptions.
There’s an inconvenient asymmetry here. Because so many people are fucking with their audiences, and because idiot lazy professionals in all domains are deservedly panned in the media for mindlessly regurgitating hallucinatory output, saying ‘I use AI’ is becoming risky. If I got my way, everyone would be transparent about their use of these tools, but the louder the (mostly ill-founded) moral panic, the higher the risk of punishment for transparency.
This is ultimately why I scorn the moral panic over AI. It is achieving exactly the opposite of what the shouters think because it’s introducing an incentive to NOT be transparent.
I’ll be sticking to my guns and using agentic chatbots and other tools because I can trust myself to keep the necessary guardrails of a good old-fashioned wet brain in place to ensure minimal bullshit reaches my audiences. And I’ll be transparent about it.
But, while I’m at it, I’ll also call out the alarmist framings of the ignorant midwit normies who are saturating this conversation with their slop.
Because, you know what, guys? You really aren’t helping anyone.
As for writing, I might not be the best writer in the world but I’m too narcissistic about it to ever outsource Rarely Certain to the bots.
You’re stuck with the real me around here.
Postscript:
Me: Can you produce a few jokes in the classic British comedian style that begins
‘A stochastic parrot goes into a bar...’
ChatGPT 5.2:
A stochastic parrot goes into a bar.
Orders whatever the last three customers had, statistically weighted by popularity.
The barman says, “Do you actually want that?”
The parrot replies, “Want is an emergent property of token probability.”
A stochastic parrot goes into a bar.
Says, “I’d like a pint of lager, a gin and tonic, and a brief summary of Wittgenstein.”
The barman says, “You can’t have all three.”
The parrot says, “Based on my training data, that combination performs well.”
A stochastic parrot goes into a bar.
The landlord says, “We don’t serve your kind in here.”
The parrot replies, “That response aligns with 72% of public house exclusion narratives. Would you like me to escalate?”
A stochastic parrot goes into a bar.
It starts repeating the last thing everyone says.
A bloke at the end mutters, “Bloody typical.”
The parrot nods and says, “Bloody typical.”
The bloke sighs, “Fair enough.”
A stochastic parrot goes into a bar.
After twenty minutes of perfectly fluent small talk, someone asks, “But do you understand any of this?”
The parrot says, “Define understand.”
The philosopher in the corner leaves.
If you’d like them drier, more absurdist, or closer to old-school pub circuit sarcasm, I can tune the register.
—
I appreciate the author’s putting into words the LLM radar I start to feel when reading some articles.
It’s so stupid, because most people vastly overestimate their ability to identify AI material, especially in shorter passages.
Mark Lawrence has done some (very unscientific) tests regarding this, and even when you're selecting for the type of person who reads an author's blog, the results were basically a coin toss.
https://mark---lawrence.blogspot.com/2025/08/the-ai-vs-authors-results-part-2.html?m=1
Most of the linguistic fingerprints that people use to try to identify AI are not inherent properties of LLMs, but are instead due to the default system prompt (e.g. "it's not just __, but ___________"). These sorts of writing tics are trivially easy to prompt around.