I hadn’t anticipated slipping into Scooby-Doo speak for portions of this week’s newsletter, but the numbers suggest I should. Three weeks back, I published a column on ‘ronspiracy rheories,’ and since then, my analytics have dropped across the board. Substack, Instagram and the whole lot have gone dark after trending upward for weeks. For example, Aug. 31’s newsletter had a 92% open rate. Since publishing the ronspiracy rheory piece, open rates have been sub-80%, despite gaining subscribers.
Ironically, people liked the ronspiracy rheory column. But the algorithm? Not so much.
Or perhaps ironically is the wrong word to use; predictably is more fitting. A part of me knew the risks of writing that column. Having made a living off journalism and communications, I know all too well what does and doesn’t anger the algorithmic gods. I was in newspapers in 2018 when Facebook made an algorithm change that all but entirely scrubbed news sites from people’s feeds. One night we were there; the next, we were gone.
Facebook’s change had the “unintended consequence” of scrubbing legitimate news sites from people’s feeds in an attempt to decrease clickbait – a response to the then-brewing Cambridge Analytica scandal. However, in recent years the algorithms have been specifically tweaked to shadow-ban content categorized as promoting ronspiracy rheories. Even though my column refuted ronspiracy rheories, the mere mention of the words was enough to invoke algorithmic wrath.
Yet, despite these attempts to curtail ronspiracy rheories, the subject flourishes online. The issue is so bad that Poland’s foreign minister Radoslaw Sikorski recently called for algorithms to be banned if platforms can’t curb the spread of such misinformation.
“The role of traditional media is not to repeat every conspiracy and every piece of nonsense that you hear but to filter and to grade, this is important,” he said. “If social media that makes billions of dollars every annum from their activities don’t take on the duties that media also have in society then I think we should regulate them, just as we regulated the press, the radio and the TV when they were first invented.”
You might think this would have folks in Silicon Valley saying “ruh-roh,” but it’s unlikely.
It’s a Feature, Not a Bug
While companies have made moves to wrangle their algorithms and placate sentiments like Sikorski’s, hoping to avoid a Facebook-like $5 billion penalty, they’re still driven by capital. If a subject makes money, they’ll never truly ban it, just make it a little harder to find via a shadow ban – and algorithms are big business.
Meta, formerly Facebook, made $134 billion last year; ByteDance, owners of TikTok, made $120 billion; and Alphabet, Google’s parent company, raked in a whopping $307.3 billion.
What’s frightening is that few know how the algorithms behind these companies’ success work – fewer still understand why.
Computer scientist and Silicon Valley godfather Jaron Lanier details in his 2018 book Ten Arguments for Deleting Your Social Media Accounts Right Now how tuning algorithms is more like scrying than science.
“The algorithms are rarely interrogated, least of all by external or independent scientists, in part because it’s hard to understand why they work. They improve automatically, through feedback. One of the secrets of present-day Silicon Valley is that some people seem to be better than others at getting machine learning schemes to work, and no one understands why. The most mechanistic method of manipulating human behavior turns out to be a surprisingly intuitive art. Those who are good at massaging the latest algorithms become stars and earn spectacular salaries,” Lanier wrote.
The way that these algorithms work is also intensely guarded. Lanier notes that while all manner of government secrets from the likes of the NSA to the CIA have leaked, you’ll never find the code for Google’s algorithm out in the wild. Lanier says this is because of just how damning it would be for the general public to learn what makes algorithms tick.
“… if everyone could see how present-day artificial intelligence and other revered cloud programs really worked, they would be alarmed. They’d realize how arbitrary the results can sometimes be. The algorithms are only fractionally, statistically useful, and yet that thinnest thread of utility has built the greatest fortunes of our time,” Lanier said.

Part of this flimsiness is that algorithms are predicated on randomness. To keep things exciting and engaging for people, the algorithm occasionally throws a curve ball to see if it’ll land. If it does, great! If not, no big deal – you’ll just scroll past. However, the human brain picks up on this randomness, and it has similar addictive effects as gambling. Will the next thing you see be something you like? Something you won’t? Maybe it’ll be something altogether new or grotesque? You’ll never know unless you keep scrolling.
“It’s as if your brain, a born pattern finder, can’t resist the challenge. ‘There must be some additional trick to it,’ murmurs your obsessive brain. You keep on pleasing, hoping that a deeper pattern will reveal itself, even though there’s nothing but bottomless randomness,” Lanier said. “The algorithm is trying to capture the perfect parameters for manipulating a brain, while the brain, in order to seek out deeper meaning, is changing in response to the algorithm’s experiments; it’s a cat-and-mouse game based on pure math.”
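The cat-and-mouse dynamic Lanier describes is, mechanically, a variable-ratio reward schedule, the same intermittent reinforcement that powers slot machines. A minimal sketch in Python (the probability and labels here are hypothetical illustrations, not any platform’s actual parameters):

```python
import random

def feed_item(p_hit=0.25):
    """Serve a 'hit' (an engaging post) with fixed probability --
    a variable-ratio schedule, the same one slot machines use."""
    return "hit" if random.random() < p_hit else "filler"

def scroll_session(n=20, seed=None):
    """Simulate one scrolling session. The unpredictable spacing of
    hits is what keeps a pattern-seeking brain hunting for a pattern
    that isn't there."""
    if seed is not None:
        random.seed(seed)
    return [feed_item() for _ in range(n)]

session = scroll_session(seed=0)
print(session.count("hit"), "hits in", len(session), "items")
```

Run it with different seeds and the spacing of the “hits” never settles into a pattern, which is precisely the point: the unpredictability, not the content, is what keeps you scrolling.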
Randomness is likely why it’s so easy to get sucked into ronspiracy rheory rabbit holes despite attempts to suppress them. Outside of an outright ban, they’ll always be there when the algorithm’s wheel eventually lands on their number.
“No one will necessarily ever know why those particular posts had an effect on you, and you will probably not even notice that a particular post made you a little sad, or that you were being manipulated. The effect is subtle, but cumulative,” Lanier said.
The New Basilisk
Roko’s Basilisk is an infamous thought experiment that suggests that in the future, an advanced artificial intelligence could emerge that is so powerful and goal-oriented that it would create a simulation to torture those who did not help bring about its creation. This would incentivize people in the present to work towards the AI’s development, lest they be punished in the future.
The thought experiment created a stir when it was first put forth in 2010, but today, it is seen as little more than a technological repurposing of Pascal’s wager. However, the basilisk of the algorithm is quite real, and we’re all trying to avoid punishment each time we log on – punishment ranging from poor feeds full of crap content to copyright strikes and bans.
The basilisk of the algorithm is one of words. My ronspiracy rheory experience is just one instance of people needing to twist and soften their language to appease the creature. “Unalive” and “seggs” are two common stand-ins for discussing suicide and sex, as the basilisk is bothered by each. This monster has PG sensibilities.
Language growing softer and less direct isn’t a new phenomenon resulting from algorithmic appeasement. George Carlin riffed on the trend years ago. Uncomfortable language makes people uncomfortable, so they find a way to soften it. “Smug, greedy, well-fed white people have invented a language to conceal their sins,” quipped Carlin.
But avoiding the bans and censorship dealt out by the hand of the basilisk has taken this softening to strange new places. In fact, softening isn’t even the right word for what’s happening to words – it’s more like babification.
Unalive is particularly offensive. It’s like DoggoLingo and a technical manual fucked, resulting in the birth of a word equal parts sterile and absurd. It’s the kind of word a character in a British sitcom would have uttered decades ago for laughs but is used today to speak seriously about a serious subject. People have rightly pushed back as it’s crossed from the digital to the real world.

What’s insidious about the basilisk-enforced softening of language is that it’s not a natural progression of society’s sensibilities. The basilisk has a master directing it – advertising.
Advertisers are very much put off by hard subjects and touchy language. They enjoy a clean, inoffensive space in which to hawk their cleaning products and fast food delivery services. Talk of sex might offend the soccer mom in need of a new bottle of detergent, and she’ll log off before discovering Gain has engineered a new fresh scent. So, platforms employ the basilisk.
We’re actually able to see in real time what happens to a platform that kills its basilisk. Self-proclaimed savior of free speech Elon Musk purchased Twitter for $44 billion in 2022. Under his direction, the social media powerhouse has curtailed censorship, allowing hate speech and schizophrenic shitposting to flourish.
The result? Today, Twitter is worth 80% less than what Musk bought it for. The reason is that advertisers have abandoned the platform in droves. CNN reported: “A recent global survey by Kantar found that a net 26% of marketers plan to decrease their spending on X next year, the steepest pullback from any major global ad platform. Just 4% of advertisers said they think X ads provide ‘brand safety’ (certainty that their ads won’t appear near extreme content), compared with 39% at Google.”
No basilisk means no ads, and no ads equals no money.
Of course, this also demonstrates that the basilisk is not an inherently negative thing. Stopping hate speech is a positive, and while the merits of censorship can be debated, no one in their right mind would say that what’s happened to Twitter is good.
The downside is the basilisk is indiscriminate; all it knows is that a word or phrase has been deemed bad and is to be dealt with when detected. It can’t differentiate between someone promoting ronspiracy rheories and someone trying to push back against them, resulting in each getting the shaft.
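This indiscriminateness is easy to demonstrate. A toy filter like the one below (a hypothetical blocklist, not any platform’s real moderation logic) flags the debunker just as readily as the promoter, while a coded spelling slips through untouched:

```python
BANNED = {"conspiracy"}  # hypothetical blocklist entry

def shadow_ban(post: str) -> bool:
    """A naive keyword filter: flags any post containing a banned
    term, regardless of the author's stance toward it."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & BANNED)

promoter = "The moon landing was a conspiracy, wake up!"
debunker = "No, the moon landing was not a conspiracy. Here is the evidence."
evader   = "The moon landing was a ronspiracy, wake up!"

print(shadow_ban(promoter))  # True
print(shadow_ban(debunker))  # True -- the critic is hit too
print(shadow_ban(evader))    # False -- the coded spelling slips through
```

Real systems use classifiers rather than bare word lists, but the underlying failure mode, matching surface forms rather than intent, is the same.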
However, the people who genuinely promote the ideas have the advantage. They can twist and reshape their words while still conveying the meaning and intent that the basilisk would otherwise shut down. In contrast, the critic has to use hard, direct language to effectively tackle the ideas, thus invoking the algorithm’s wrath. It’s a no-win scenario. Twitter is only suffering because of how openly brazen people are now allowed to be on the platform.
The Limits of Our World
The world is a hard and difficult place, and hard language gives us the tools to address it adequately. Our language should soften as a result of softening the world, making it an easier place to exist, not be bent by algorithms tweaked to hack our brains while presenting sterilized, advertiser-friendly spaces.
We see more and more each day the damage these digital spaces, and the individualized worlds their basilisks curate for each of us, do. As Lanier writes, large swaths of our daily experiences are being curated by faraway algorithms.
“Algorithms choose what each person experiences through their devices. This component might be called a feed, a recommendation engine, or personalization,” Lanier said. “The immediate motivation is to deliver stimuli for individualized behavior modification. [The algorithm] makes it harder to understand why others think and act the way they do.”

It’s also beginning to rob us of the words we need to properly address the challenges being created. Philosopher Ludwig Wittgenstein wrote in 1922, “The limits of my language mean the limits of my world.”
An algorithm tuned to support monied interests will never curate a world with the tools to tackle the problems said interests create. And the farther down the algorithmic rabbit hole society falls, the harder it will be to reclaim them.
Once unalive replaces suicide as common lexicon, it’s softened forever. People won’t blow their brains out because they are facing homelessness, but choose to unalive themselves due to financial hardship and housing scarcity.
If one of those scenarios happened to your neighbor, which one paints a clearer picture in your mind, making you want to enact positive change in your community?
Being uncomfortable isn’t bad – it spurs us to create comfort. Language created by algorithmic basilisks only creates comfort for advertisers and corporate boardrooms while we desperately need real-world, everyday solace. That’ll never happen if we lose the ability to address hard reality.
As Lanier writes, “You and they can’t build unmolested commonality unless the phones are put away.”