Social media companies rely on algorithms to try to match their users with content that might interest them. But what happens when that process goes haywire?
Over the past two weeks, there have been several serious failures involving algorithms, the formulas or sets of rules used in digital decision-making. Now people are questioning whether we're putting too much trust in these digital systems.
As companies seek solutions, there's one clear standout: the algorithms making the automated decisions that shape our online experiences require more human oversight.
The first case in a recent string of incidents involved Facebook's advertising back end, after it was revealed that people who bought ads on the social network were able to target them at self-described anti-Semites.
Disturbingly, the social media giant's ad-targeting tool allowed companies to show ads specifically to people whose Facebook profiles used language like "Jew hater" or "How to burn Jews."
If Facebook's racist ad-targeting weren't cause enough for concern, right on the heels of that investigation, Instagram was caught using a post that included a rape threat to promote itself.
After a female Guardian reporter received a threatening email that read, "I will rape you before I kill you, you filthy whore!" she posted a screenshot of the hateful message to her Instagram account. The image-sharing platform then turned the screenshot into an advertisement targeted at her friends and family members.
And lest it seem social media companies are the only ones afflicted by this rash of algorithms gone rogue, it seems Amazon's recommendation engine may have been helping people buy bomb-making ingredients together.
Just as the online retailer's "frequently bought together" feature might suggest salt after you've added pepper to your shopping cart, the site suggested additional bomb ingredients to users who purchased household items used in homemade bomb building.
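To see how that can happen, consider a bare-bones co-occurrence recommender. This is a hypothetical sketch, not Amazon's actual system, which is proprietary; the function name and purchase data are invented for illustration. The point is that a pure co-purchase count has no idea what the items are, so it pairs household chemicals as readily as salt and pepper.

```python
from collections import defaultdict

def frequently_bought_together(orders, item, top_n=3):
    """Rank items by how often they appear in the same order as `item`.

    The ranking is driven only by co-occurrence counts; nothing here
    knows what the items are, so dangerous combinations surface
    unfiltered unless a human-curated rule intervenes.
    """
    counts = defaultdict(int)
    for order in orders:
        if item in order:
            for other in order:
                if other != item:
                    counts[other] += 1
    return sorted(counts, key=counts.get, reverse=True)[:top_n]

# Hypothetical purchase history: each list is one customer's order.
orders = [
    ["pepper", "salt"],
    ["pepper", "salt", "olive oil"],
    ["bread", "salt"],
]

print(frequently_bought_together(orders, "pepper"))  # ['salt', 'olive oil']
```

Swap the grocery names for bomb-making ingredients and the exact same counting logic produces the recommendations that made headlines; the algorithm is doing precisely what it was built to do.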
So what do these mishaps have to do with algorithms?
The common element in all three incidents is that the decision-making was done by machines, highlighting the problems that can arise when major tech firms rely so heavily on automated systems.
'On these free platforms, you and your data are often the product.' — Jenna Jacobson, Ryerson University postdoctoral fellow
"Driven by financial profit, many of the algorithms are operationalized to increase user engagement and improve user experience," says Jenna Jacobson, a postdoctoral fellow at Ryerson's Social Media Lab.
"On these free platforms, you and your data are often the product, which is why it makes financial sense for the platforms to create a personalized experience that keeps you — the user — engaged longer, contributing data and staying happy."
The goal is to try to match users with content or ads based on their interests, in the hope of providing a more personalized experience or more useful information.
We've grown "dependent on algorithms to deliver relevant search results, the ability to intuit news stories or entertainment we might like," says Michael Geist, a professor at University of Ottawa and Canada Research Chair in internet and e-commerce law.
These formulas, or automated rule sets, have also become essential in managing the sheer quantity of posts, content and users, as platforms like Facebook and Amazon have grown to mammoth global scales.
In the case of Amazon, which has over 300 million product pages on its U.S. site alone, algorithms are necessary to monitor and update recommendations effectively, because it's just too much content for humans to process, and stay on top of, on a daily basis.
But as Geist notes, the lack of transparency associated with these algorithms can lead to the problematic scenarios we're witnessing.
In the case of Facebook's racist ad-targeting, it's not that the company has been accused of intentionally setting up an anti-Semitic demographic.
Rather, the concern is that, lacking the right filters or contextual awareness, the algorithms that built the list of targetable demographics from people's self-described occupations identified "Jew haters" as a valid population grouping, in direct conflict with company standards.
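As a simplified illustration of how such a category can emerge on its own (a hypothetical sketch, not Facebook's actual pipeline; the function, threshold and sample entries are invented), imagine a system that turns whatever users type into a free-text profile field into an audience segment once enough people have typed it:

```python
from collections import Counter

def build_targetable_categories(profile_fields, min_users=2, blocklist=None):
    """Turn free-text profile entries into ad-targeting categories.

    Any phrase typed by at least `min_users` people becomes a targetable
    segment. Without a blocklist or contextual review, hateful phrases
    qualify automatically, just like legitimate occupations.
    """
    blocklist = blocklist or set()
    counts = Counter(field.strip().lower() for field in profile_fields)
    return {phrase for phrase, n in counts.items()
            if n >= min_users and phrase not in blocklist}

# Hypothetical self-described profile entries.
entries = ["Nurse", "nurse", "Teacher", "offensive phrase", "offensive phrase"]

# With no filtering, the offensive phrase becomes a targetable category:
print(build_targetable_categories(entries))

# The "human oversight" fix amounts to someone maintaining the filter:
print(build_targetable_categories(entries, blocklist={"offensive phrase"}))
```

Notice that nothing in the code is malicious: the offensive category is a straightforward consequence of aggregating user input without a human-maintained filter, which is exactly the gap the companies are now promising to close.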
While the likes of Amazon, Facebook and Instagram have been able to talk in circles around similar issues, citing freedom of speech or leaning heavily on the fact that they're not responsible for posted content, with this latest wave of controversies it's harder to sidestep criticism.
An Amazon rep responded by saying, "In light of recent events, we are reviewing our website to ensure that all these products are presented in an appropriate manner."
Facebook's chief operating officer, Sheryl Sandberg, called the algorithmic mishap a failure on the company's part, adding that they "never intended or anticipated this functionality being used this way — and that is on us." That's a remarkable admission of the company's role in users' experiences on the site, given the social giant's long-standing reluctance to take responsibility for how content is delivered on its platform.
The companies were also quick to state their commitment to fixing their algorithms, notably by adding more human oversight to their digitally managed processes.
And that is the punchline, or perhaps the silver lining, in all these cases: at least at this stage, the only way to keep these algorithms in check is to have more humans working alongside them.
"I think the tide is changing in this area, with increased demands for algorithmic transparency and greater human involvement to avoid the problematic outcomes we've seen in recent weeks," says Geist.
But real change is going to require a philosophical shift.
Up to now, companies have focused on growth and scaling, and to accommodate their massive sizes they have turned to algorithms.
As Jacobson notes, "algorithms do not exist in isolation." As long as we rely solely on algorithmic oversight of things like ad targeting, ad placement and suggested purchases, we'll see more of these disturbing scenarios. Algorithms may be good at making decisions on a massive scale, but they lack the human understanding of context and nuance.