~~Artificial Intelligence~~ Intelligent Artifice and Black German Nazis for social justice
Plus a confessional on not caring too much about things that the moral gnostic liberal hegemony demands
[Housekeeping: it seems clunky to continue interspersing stories of WWII research with the usual Rarely Certain themes. A separate section for that work is one potential solution, with an opt-out for those who aren't interested. But right now there is a Facebook page detailing what's surfacing in research into the 24th Cavalry Reconnaissance Squadron of the US 4th Cavalry Group. Go and follow it, if you like that sort of thing]
---
We (as a species) are supposedly creating humanity's ultimate knowledge tool, quaintly known as Artificial Intelligence.
I'm more inclined to call it IA at this point.
Intelligent Artifice.
Well-meaning engineers at Google wanted their most advanced image generation tool to be as fair as possible and to not reinforce cultural stereotypes.
So it started creating images of black German Nazis.
People had fun playing with this, discovering even more silliness.
For example, it refused to generate an image of Isaac Newton, because he wasn't female.
And only white people could be portrayed enjoying fried chicken, because black people eating fried chicken is a stereotype. (Duh - what’s wrong with stereotypes anyway, when they’re often true?)
I'm not really interested in chewing over the rights and wrongs of this. I'm more interested in watching the efforts of a minority of 'do-gooders' to mould society in supposedly fairer ways; an increasingly determined exercise of power.
Those who do want to understand the ins and outs of 'woke' image generation (or 'kind' and 'nice' Large Language Models) can follow the always-useful summaries of developments by Zvi Mowshowitz. If you somehow missed this story, maybe start with this one, which provides the background to this particular contretemps.
Also, thoughtful responses from Ian Leslie here, Freddie deBoer here, Nate Silver here.
What's interesting to me is how people get either angry or defensive about this kind of thing.
The angry people make questionable claims, like Google hates white people and men.
The defensive people make questionable claims, like Google only means well.
I don't buy either of those interpretations. And, although I'm tired of people being angry and overstating the perilous implications of stupid things all the time, more of my scepticism is reserved for the argument that weighting your AI to be manifestly dishonest - let alone ridiculous - is a well-intentioned move.
Apologies for deploying the O name, as everyone does in moments like this, but Orwell did eerily predict precisely this form of sociocultural engineering, while some of the world's most hideous zealots actually practised it.
The argument that this is a well-intentioned attempt to move the dial toward a fairer and more just world only works if you'd also be prepared to acknowledge that entirely ignoring non-Caucasians and women in the outputs of a graphic or text generating machine would also be fine, provided the intention was noble.
Saying that a tech giant 'means well' by weighting its educational output according to an ideologically-coded perspective is just another way of saying that it is trying to skew everyone's outlook toward one that you hold too.
It did the wrong thing for the right reasons.
At this point you can only really move this argument forward by settling the ancient question of whether intentions or consequences matter most. Good luck with that. Read the entertaining Bentham's Bulldog for that kind of philosophical depth.
The dullest consequence of Google's folly is that it nudges us toward a stupid culture war between competing AIs, because some bright sparks will inevitably weight theirs against 'woke bias' and Hitler will start appearing in lists of great philanthropists.
The Iron Law of all this is that everything tends towards tiresome.
I kind of want to be annoyed about it all, but I just feel weary and disengaged, while rolling my eyes at relentless attempts by people to shape the way the rest of us perceive the world. Narcissism, solipsism and craving for influence are all I see.
It's been an interesting manifestation of leftist accelerationism in action, though.
What better way to bring about the dissolution of shared community than by replacing the actual past with one you'd prefer had happened?
---
Getting mad when one's personal model of fairness is compromised out there in the world is one of those things we tend to accept too willingly.
Why do we get mad?
I can only speak for myself, as someone who used to get mad and now doesn't. The rest of this piece will be confessional and speculative on some reasons for that, based on personal experience ...
The correlation between having passionately liberal views and poor mental health seems to be reasonably well established.
My own experience seems to jibe with this.
A big part of why I quit Twitter was an intuition that I was experiencing a cacophony of deeply distressed people and their neuroticism was rubbing off on me.
They saw me slipping and it upset them a lot. One of the most instructive moments was when I said I wished Jacob Blake had just done as he was told, and people went apeshit.
The kinds of 'injustices' out there in the world rarely affected me personally, but until this time my sense was that it was important to 'stand for' or 'stand against' certain things.
What I came to think was that this was a form of narcissistic neuroticism.
The sense I had was that I was among a select group who knew how to think. And that this somehow obliged me to influence the world with my goodness. Otherwise we were all in peril.
My 'values' simply had more valid moral valence than those of others, who thought and felt differently.
It took entering and emerging from a genuinely tough personal time for this to be revealed for what it really was.
Not everyone wants to post public comments and sometimes people email me instead. Now Substack has introduced this button, so you can use that too, to complain or tell me how brilliant Rarely Certain is.
I've often wondered why my shift from left to ... well ... independent and scornful of all ideology ... happened in the way it did.
For a long time I couldn't figure out whether a better sense of subjective wellbeing caused or resulted from jettisoning moral certitude (which is all that ideology is).
Here's what was going on …