Thanks to machines, we're being trained to change the way we speak
Censoring algorithms no match for human improvisation
The technologies we use have long changed how we communicate, but social media has introduced a new hurdle: automated content moderation.
Algorithmic monitoring aims to catch harmful terms that might spread disinformation, racist language, or violent extremism. But without the capacity to detect nuance and distinguish between contexts, speech that shouldn't be flagged is often caught in the censoring crossfire.
Today's content creators have found a way to elude the digital censor through the use of "algospeak" — the process of replacing words with euphemisms that work as stand-ins for the original flagged content. Some common examples include using the word "unalive" instead of "dead," "seggs" instead of "sex," or "panini" instead of "pandemic."
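The evasion works because exact-match keyword filters have no sense of meaning. Here is a minimal sketch of that kind of naive blocklist filter; the word list and function name are hypothetical, not any platform's actual moderation system:

```python
# Hypothetical exact-match content filter, illustrating why
# "algospeak" substitutions slip past it.

BLOCKLIST = {"dead", "sex", "pandemic"}

def is_flagged(post: str) -> bool:
    """Flag a post if any word matches the blocklist exactly."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_flagged("The pandemic changed everything"))  # True: flagged
print(is_flagged("The panini changed everything"))    # False: evades the filter
```

Since "panini" and "unalive" aren't on the list, posts using them pass untouched, which is exactly the gap creators exploit.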
As tech reporter Taylor Lorenz explained in her Washington Post article, this word-swapping process has helped content creators fool algorithms and keep their content from being down-ranked or removed on platforms like TikTok.
"It's endearing and earnest to me to realize that people want to talk about things and it doesn't matter what the machine says, they're going to talk about them anyway," Jamie Cohen, an assistant professor at CUNY Queens College in the Media Studies department, told Spark host Nora Young.
"Now, the downside here is that algorithms like TikTok are going to learn the workarounds very simply, and then what's going to happen is we're going to have to evade them further. And eventually language will be disguised in a way that will be somewhat unrecognizable to a reader."
Cohen raised concerns about what these changes could lead to over time.
"If we're trained to allow machines to teach us to make new language, then we're also being trained to allow other systems to do that as well," he said.
"So by accident, we're being trained in forums or functions that are outside of social media that we're not aware of, maybe, at this moment. It may only take one really charismatic authoritarian leader to run a similar system in real life, to change our language."
Cohen says the next hurdle will be determining whether an algorithm can learn to detect nuance. "These machines aren't designed for good conversations," he said, "they're designed for advertising, they're designed to sell us product."
This capacity to engage in conversation is part of what makes human language so unique, according to Morten Christiansen, a cognitive scientist and the coauthor of the book The Language Game: How Improvisation Created Language and Changed the World.
"Language is like a game of charades, where what we're trying to do is to improvise, to provide clues to each other to get our ideas across, what we want to say," said Christiansen.
These language games are something Christiansen says computers and AI systems are not capable of replicating just yet. "What they're relying on is taking little bits and pieces, putting them together in a way that makes it seem like it's true human language, but they're not really interacting with one another," he said. "It's more like they're essentially engaging in monologue.
"It's very important for us to view language as dialogue rather than monologue. And in a sense, these bots or AI language systems, they're really just playing monologue. And that's a major limitation, which means that they can't really go beyond that."
It's this distinctly human capacity for spontaneous, collaborative communication that makes Christiansen skeptical that machines can truly mimic human speech.
"We don't really have to worry about computers taking over language until we see them playing charades.
"Now, if and when they do that, then we might want to be worried."
Written by McKenna Hadley-Burke. Produced by McKenna Hadley-Burke and Michelle Parise.