[This is a speculative piece. I'm neither competent nor minded to 'prove' what I sense is going on]
AI is vexing people. Quelle surprise.
Like every new technology, AI is an opportunity for many and a threat to some. That's why you always get evangelists and complainers, as people interrogate their internal reactions to new ways of doing things, then project them.
It's only been a few weeks but I'm seeing no reason to update my current intuitions on all the kerfuffle about people using AI to generate 'content'.
Dead Substack Theory is coming ...
In case you didn't know already, a popular band on Spotify is confirmed to be synthetic. The Velvet Sundown was described as a 'provocation' by its creator/s and people have certainly been annoy…
TL;DR - I said that it's up to everyone what they read or listen to, or how they produce things that people like. It's not really any of our business. We all have agency.
But, if anything, I'm rapidly growing tired of the handwringing. This is probably the issue on which I'm furthest out of step with 'the room', given that many people I normally pay a great deal of attention to seem to be really worried about AI in the creative context. I think it's the beginning of a new moral panic, disguised as something else.
The birth of Authenticity Panic
The first album I bought was Queen's A Night At The Opera. The sleeve featured a disclaimer 'no synthesisers'. You could see this in one light as highlighting how Brian May orchestrated layers of guitar parts, so that they often sounded non-guitarly. But equally it worked as an assurance of authenticity. We aren't cheating when we come up with these cool sounds because we're real musicians, not just knob-twiddlers. That's how teenage me interpreted it.
I'm sensing a growing authenticity panic now, with the drumbeat of anti-AI sentiment which seems more about feelings and fears than facts.
Eventually Queen embraced the beauty of tuning and modulating oscillators with precisely zero loss to their musical credibility and that's what will happen here, when everyone adjusts to the new reality of another technological gift to humanity. But it will take a while, because feelings are stubborn and we're in for a tiresome period of people pointing out all the ways that Large Language Models are fallible as a reason to reject them entirely.
It seems absurd to me that new tools lead to moral catastrophising in the way we're seeing at this point. But every technology receives a backlash when those who are skilled in certain analogue practices are suddenly joined by ordinary people achieving pretty good results with the help of new tech. It was the same with digital photography.
I feel safe in predicting that disclaimers such as 'No AI was used in this article' will come to seem as quaint as Queen's 'no synthesisers' assurance.
But at the moment it's hard to resist pointing out that you aren't using AI to form your prose. Only yesterday, on an unrelated site, I saw a good writer accused of using AI when I have seen no signs of it in her output. She just writes with a certain clarity and cadence that AI typically emulates. I also like to write in a certain rhythm and cadence and I'm sure some people will therefore think that I'm either creating or editing my pieces with the assistance of an LLM. The urge to point out that I'm too egotistical to outsource my words like that is strong, such is the clamour to make AI use carry reputational cost.
What's going on at the root of this panic? Some speculations.
I'm thinking that there are a few psychological mechanisms in play and lots of rationalising to justify them.
Top of the list is status anxiety. This is understandable. There's every reason to fear that your cultural or professional value is eroded when supply of your particular talents suddenly expands. Now that some possibly untalented writers are topping the Substack bestseller charts with the help of Claude, Gemini, ChatGPT or whatever, this is hard evidence of AI's effect. It's too bad that people like those outputs and overlook the rest of us, but they just do and that's tough.
I suspect a certain amount of projection is also going on. Hardly any of us gets the paying audience we'd like and that's bound to result in personal dissatisfaction and insecurity. Now we're seeing certain accounts catapulted to prominence I suspect a whole lot of this negative affect is conveniently displaced onto a symbolic safe target. AI provides that target. The Substack struggle is real and often infuriating, so lashing out at machines for helping others slurp up precious attention is a disguised way of distracting from the fact that we just feel under-appreciated.
There's also the tendency sometimes labelled control bias, which seems to lie at the heart of a lot of moral calibration around things outside of our influence. We see a perceived injustice and immediately jump to the assumption that we must control whatever forces are behind it. It's distressing when you find that this is often impossible and probably partly accounts for why liberals in particular suffer from such poor mental health.
Social incentives are also commercial incentives on Substack
You can't escape the incentives that come with connection to others and all communities are rife with these. That's why we see so much virtue signalling in any domain. I've done plenty of it too, especially in my Twitter days when the incentives ended up financial as well as reputational.
Seen least kindly, virtue signalling is just a performance. It's a means of showing that you belong to a group or that your public identity is authentic. Implicit in all the grumbling about AI is a statement that you are uncontaminated by this new thing and therefore belong in the subset of authentic writers.
There's also good old outrage. It's our catnip. Platforms are built on amplifying and encouraging it. It's literally how swathes of the social web work and Substack is no different. Many people have figured out that the outrage potential around highlighting the most stupid manifestations of Wokeism or Trumpism is running out. So they've moved on to 'genocide' in Gaza and the evils of using machines that often get things wrong and produce bloodless text with no sincerity. Anti-AI polemics are doing well right now, so there's a strong incentive to flatten the issue and join in. To read a lot of the rhetoric you'd think the world is divided neatly between 'AI-bro' evangelism and a doom cult prophesying catastrophic sidelining of human creativity. People do this with anything related to our information environment.
The usually excellent blogger Ted Gioia played a blinder in this œuvre with a wild prediction that we are seeing the collapse of the knowledge system. People loved it. It really got them going. That piece had 2,943 likes, 520 comments and 665 shares at the time of writing. But I have my doubts that the knowledge system (whatever that is) is doing anything but expanding in the way that it always does. I think Ted's making a living from reader attention and he knows that outrage and anxiety sell. Everyone who loved his piece enjoyed feeling agitated by it. If we're honest, we all do. I just enjoy it less, since noticing what's going on under the surface of my daily reading pleasures.
Identity and moral tribalism obviously weave into all of this too. Identity typically coalesces around a feeling that we're on 'the right side' of certain issues and AI is a useful bogeyman around which to centre your self-image as a purist of some kind. I play along with it myself, to some extent, by constantly stressing that all my words are my own.
There's also a lot of ego in play, when it comes to choosing our standpoints on politics, ethics and cultural critique. Freddie deBoer occasionally nails this right down when he observes performative leftishism on issues such as Substack's 'Nazi problem'.
flattering the reader into thinking they're part of a vanguard - 'you and I, together, will starve the brownshirts of oxygen… once we've processed your monthly payment.' It's a cynical symbiosis between the writer and the audience, both addicted to the dopamine drip of moral superiority. You're not fighting Nazis, you're buying artisanal outrage.
Freddie deBoer in this searing polemic
Cultural cycles never stop
Everything goes in cycles. When the printing press was invented there was a terrible kerfuffle around how it would erode authority, expose people to ideas they wouldn't (and shouldn't) otherwise have entertained and cause terrible things to happen. Which of course it did. But it also led to massively increased literacy and improved education. When video games reached a certain standard there were dire warnings of players becoming unable to distinguish their fantasy worlds from reality. At worst, callow youths would confuse their local shopping centres for a battlefield and forget that Call of Duty was just a game. It didn't happen. Some people stopped dating girls and developed very good hand-eye coordination. Others found lots of joy in it all. Cycles of tech panic are nothing new. There was even an outcry when the cheaply available novel became popular. Something to do with women in particular having their pretty little heads filled with fantasy and being distracted from real life. Lol.
Neurosis and gatekeeping
When more and more acts or things are pulled into the sphere of moral judgement you can't help but notice neurosis at the heart of it all. There are moral intuitions and there are also moral neuroses, it seems to me. These are where the very existence of a different perspective is upsetting to you. Again, I've had plenty of personal experience here. Being upset in some way that people can't see the world as I do. This has been most alarmingly displayed in American progressive circles, with embroidery, cooking and children's book communities online torn apart by unhinged attacks on a few by clearly neurotic people who discern injustice and impure thought in the minds of others.
Taste surveillance is another impulse that comes from a less noble place than those who seek to influence others presumably recognise in themselves. In the case of AI-generated artistic material, cultural policing of what others produce or consume, under the guise of protecting the commons, sounds nicely high-minded. But it's also the impulse of the busybody, for whom their own way of being is simply not enough to distract them from the pain of living. Gatekeeping is not a noble impulse. In my book it just signals an absence of humility.
I made a much longer list of factors influencing this panic, but you'll either have switched off in disgust by now or nodded along and want the conclusion. So here it is.
If it walks like a cope and quacks like a cope, it's probably a cope
Many of us would like to be conventionally desirable and the depiction of beautiful men or women forces us to confront the fact that we're - at best - mid. Feminism came up with an answer to this frustration by proclaiming the mass depiction of hotness unjust. Because life isn't fair we all have to cope with lots of disappointment, and one of the copes provided by feminism is that all those hot, aesthetically or sexually inspiring faces and bodies in the adverts are harmful to women and girls. They are an injustice. This rhetorical move was so successful that it would be social suicide to suggest, at a proverbial dinner party, that feminist condemnation of depicting typically preferable body types or faces works as a cope for those without such qualities.
There's no doubt that we're being rendered less significant in some way by AI, in several areas. People will soon have many of their romantic and friendly liaisons with probabilistic language algorithms sporting appealing avatars. Lots of work will no longer require human input. And an increasing amount of creative content will never emerge from a wet brain, while still delighting consumers. Insisting that all this is a terrible thing is comforting. That the thing that is incrementally rendering you obsolete in various domains is wrong and unjust, in some moral sense.
None of this is to disparage those who are campaigning against it, with sincere articles about the difference between human and machine life or ideas for badges on your work as a quality assurance of creative purity. It's just that I'm not convinced that it's worth all this attention. If we are to accept that everyone has agency and the ability to make their own choices, then why this intolerance of plurality when it comes to how things are produced? It only makes sense to me as an expression of anxiety over a new kind of competition. This is rational because we're already seeing machine-written top-selling Substacks with earnings that most of us couldn't imagine. But the anxiety around this is now disguised as a moral intervention. That is the cope. Realising that you're being replaced, you could write about how you feel about that and what you might do to accentuate your human originality. You could let others enjoy their machine life without defaulting to moral judgement.
You can't kid a kidder and I only suspect that all of this handwringing over yet another cultural crisis is a cope because it drives me nuts, too, when I see how popular some of this tepid, faux profound output has become - and not just on Substack. But there's also dignity at stake and complaining about currents beyond your control quickly becomes tiresome from the outside. Which is where I am.
Just as I learned to do with obviously neurotic feelings about politics, I trust this explosion of negative feelings about agentic AI less and less. Looking inward instead of outward, to identify what was really going on, was liberating. Permitting an emergent broad scepticism led to improved steady-state wellbeing. Spotting the self-aggrandisement behind believing that my interpretation of the world made me superior was a calming step.
I'm still an elitist nerd in domains like music or literature and I still think that humans are the best in creative fields because intention and meaning are primordial qualities in those worlds. But many people will never make contact with those qualities in writing, music, art and other creative endeavours and it seems, to me, to be undignified to constantly complain about that.
I use AI often and I find it useful. I type quickly, I read quickly, I write for a living. AI takes away the repetitive work and leaves me to do the part that matters: the editing and shaping.
The people most threatened are probably those in the middle range of expertise in any field. Not the experts, not the beginners, but the competent middle tier whose work can be replaced. I'd include myself there, particularly in law. If AI can do it better, then so be it; it's my job to pivot.