From corn to eggplants, the flavour of Algospeak
Most of us post on social media platforms, even if our contributions are confined to holiday snaps and amusing videos of our pets.
But that content may never be seen if it touches on controversial subjects. While anyone can comment on pretty much anything online, all platforms feature content moderation systems. Those systems can render content invisible without informing the user who posted it.
Unfortunately, you don’t need to write war-mongering posts or make racist observations to fall foul of content moderation systems. They contain algorithms that detect certain words deemed indicative of problematic posts, including any that may upset advertisers on the various platforms.
Content moderation varies from platform to platform. However, using everyday words such as death, lesbian, corn and cheese pizza could see your posts censored.
This situation has given rise to a new form of language – algospeak.
What is algospeak?
Algospeak, or algorithm speak, is the practice of using code words for vocabulary that could be picked up by moderation systems. The use of emojis to convey meanings other than those originally intended is also common. For instance, writers seeking to evade the dreaded algorithms might use “corn” or a corn emoji 🌽 instead of the word porn. The eggplant emoji (🍆) has proved to be a good substitute for the word *enis.
A brief history of Algospeak
Of course, algospeak is essentially a nonsense language and that’s nothing new. It is thought that nonsense verse first appeared as early as the 8th century. It features a carefully conceived combination of rational and irrational language that tends to appeal to children. Many classic nursery rhymes such as Hey Diddle Diddle, first published in 1765, feature nonsense verse:
“Hey diddle diddle,
The Cat and the fiddle,
The Cow jumped over the moon,
The little Dog laughed to see such sport,
And the Dish ran away with the Spoon”
The Faulty Bagnose
During the 19th century, such literary luminaries as Lewis Carroll and Edward Lear wrote nonsense poems that continue to enchant youngsters to this day. Fast forward to the 20th century and celebrated writers were continuing to produce bamboozling nonsense:
The Mungle pilgriffs far awoy
Religeorge too thee worled.
Sam fells on the waysock-side
And somforbe on a gurled,
With all her faulty bagnose!
This fascinating piece of nonsense, The Faulty Bagnose, was penned by none other than John Lennon.
Those living in repressive regimes have long been tailoring their language, using code words when discussing taboo or banned topics.
Nonsense speech and code words are certainly firmly rooted in our collective consciousness. Little wonder that online writers and translators have turned to a form of nonsense and code to evade censorship.
Ouid or oui’d?
The practice of replacing censored words and phrases has existed online since the mid-2000s and appears to have originated in the worlds of gaming and chat rooms. The word “unalive” began to be used in place of terms such as dead, kill and murder. “Ouid” and “oui’d” were utilised to represent “weed” (cannabis).
In 2018, Emily van der Nagel, a lecturer in social media at Monash University, published an article in which she highlighted that content creators were using certain words and phrases in place of common terminology and names to deliberately starve what she described as “trash celebrities” of attention. She named this practice “Voldemorting”.
The Covid-19 pandemic then accelerated the trend for people to express themselves online. Many pandemic deniers began spreading misinformation on social media, and the platforms’ content moderation algorithms were adapted to identify their posts. Users started referring to Covid-19-related content as the “panini” or the “panda express”. Anti-vaccine groups on Facebook changed their names to “dance party” or “dinner party”, and people who were vaccinated were known as “swimmers”.
A new online lexicon has quickly emerged, and it is in a constant state of flux as content writers strive to remain one step ahead of the algorithms. It is estimated that up to a third of those who post online now use algospeak.
The implications of moderation systems
Moderation systems are essential for filtering out hateful content and misinformation. Unfortunately, they are also designed to censor material that big brand advertisers might deem incompatible with their carefully crafted images.
Money talks, and so content moderation algorithms have been adapted to identify and suppress content featuring important discussions that should be heard. Minorities and marginalised communities are effectively being silenced.
It turns out that there is no such thing as freedom of speech online. You can indeed say whatever you wish, but Big Brother is watching you. Discuss certain subjects online without using algospeak and you will find that nobody can hear you.
It is only right that any comments that could inspire suicide or terrorism are taken down or rendered invisible. But content moderation filters are also censoring content aimed at helping those suffering from eating disorders and other mental health issues. LGBTQ content creators are seeing their work banished to the online backwaters simply because they have used the words “gay” and “lesbian”. There is evidence that even conversations about pregnancy are downgraded on TikTok.
Can algospeak beat the algorithms?
The marginalising of certain groups via moderation systems seems particularly unfair as most experts agree that it is pointless. Content creators and translators are skilled at remaining ahead of the game. Many algospeak terms are used so frequently that they become ubiquitous and easily identifiable by the algorithms. But algospeak is a fluid language that is continually being adapted to beat the bots. Today’s eggplant is tomorrow’s carrot, so to speak. In addition, even the most advanced AI systems struggle with sarcasm and slang. Algospeak will continue to triumph over the technology.
There is rapidly rising concern regarding moderation systems, as they clearly don’t work yet can cause significant harm. Everyday language is being transformed as clever content creators are forced to perform verbal gymnastics to prevent their work from being suppressed. Innocent discussions have become collateral damage in the tech giants’ war on words.
Social media platforms are under enormous pressure to remove harmful or hateful content. That pressure is being exerted by both advertisers and society generally. However, algorithms that merely seek out words that could be indicative of problematic content are blunt instruments and they are easily circumvented. Hateful content and fake news continue to proliferate while helpful content crafted by those less skilled in algospeak is silently magicked away.
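The bluntness of word-matching is easy to see in code. The sketch below is a toy illustration only – the blocklist, function and example posts are hypothetical, and real moderation systems are vastly more sophisticated – but the underlying weakness it demonstrates is the same one algospeak exploits:

```python
# Toy blocklist filter (hypothetical - not any platform's real system).
# It flags a post only when a word matches the blocklist exactly,
# so a simple code-word substitution slips straight past it.

BLOCKLIST = {"dead", "kill", "weed"}

def is_flagged(post: str) -> bool:
    """Return True if any word in the post exactly matches the blocklist."""
    words = post.lower().split()
    return any(word.strip(".,!?\"'") in BLOCKLIST for word in words)

print(is_flagged("My grandmother is dead."))     # True  - flagged
print(is_flagged("My grandmother is unalive."))  # False - evades the filter
```

Swapping “dead” for “unalive” changes nothing about the meaning a human reader takes away, yet the filter sees a completely different token – which is why, as noted above, today’s eggplant can simply become tomorrow’s carrot.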
You can beat the algorithms but only if you are fluent in algospeak.