The case against predictability

Everything we do online is analyzed, measured and quantified to create a model of us, which is then used to influence our behaviour. But how accurate is our quantified self?

Algorithms take our measure and influence our behaviour. But whose interests does this serve?

Do we really want to be that predictable and efficient? (Adam Killick)

Originally published Nov. 1, 2019.

Algorithms and digital technologies try to predict, or even influence, our behaviour on scales both large and small. Amazon, Google and Facebook all have an image of us. But does that truly reflect how we are in real life?

John Cheney-Lippold suggests not: rather, the profiles that algorithms build of us are more reflective of how those companies want to see us, in ways that give value to them.

Cheney-Lippold teaches American culture and digital studies at the University of Michigan. He's also the author of We Are Data: Algorithms and the Making of Our Digital Selves.

He explained to Spark host Nora Young that the massive algorithms that drive Google, Amazon and Facebook create a picture of us aimed at maximizing profits. This may be at odds with how we actually are.
John Cheney-Lippold (University of Michigan)

However, the algorithms still serve an important function. They help us narrow down the myriad choices we face today when deciding on a product to buy, or even a place to eat, which is not necessarily a bad thing, he said.

And the idea of some external device limiting our choices isn't new either, he said. Paper maps do the same thing, because they only show certain roads or routes that are deemed sufficiently useful. "Our maps are, in themselves, somewhat of a limitation of what is and is not in the world."

Similarly, decades ago, long before dating-app algorithms matched people, we tended to find partners within a similar geographic area, class, or race, he pointed out.

Perhaps more seriously, though, the algorithms simply aren't smart enough to create a picture of what we actually feel, even though some try, such as facial-recognition software that has been used by some employers to find suitable candidates for a job.

"It's measuring not happiness, but what we can call epiphenomena of happiness. The phenomena that happens after happiness: me smiling or maybe using words like 'great' or 'excellent' instead of 'I'm sad' or 'depressed'. And so for me this is an important distinction because we don't have the language to talk about these new conceptions of happiness."
John Cheney-Lippold's 2018 book, We Are Data (NYU Press)
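Cheney-Lippold's point about "epiphenomena" can be made concrete with a sketch. The following is a hypothetical, deliberately crude example of the kind of measurement he describes, not any real vendor's software: it scores word choice, which is a proxy for happiness, not happiness itself. The word lists and scoring are illustrative assumptions.

```python
# Hypothetical sketch: a keyword-based "happiness" score of the kind
# described above. It measures epiphenomena (word choice), not the
# feeling itself. The word lists here are invented for illustration.

POSITIVE_WORDS = {"great", "excellent", "happy", "wonderful"}
NEGATIVE_WORDS = {"sad", "depressed", "terrible", "awful"}

def happiness_proxy(text: str) -> float:
    """Return a score in [-1, 1] from word counts -- a crude proxy,
    not a measurement of how the writer actually feels."""
    words = (w.strip(".,!?'\"") for w in text.lower().split())
    pos = neg = 0
    for w in words:
        if w in POSITIVE_WORDS:
            pos += 1
        elif w in NEGATIVE_WORDS:
            neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total
```

A sentence like "This is great, excellent work" scores as fully positive, even if written sarcastically by someone who is miserable, which is exactly the gap between the measurement and the thing measured.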

All that leads to the creation of an identity online that isn't actually how we are, but is just an interpretation. Worse, that interpretation is not necessarily for our benefit—even though it influences our choices and behaviour.

"There are algorithms that give you a bonus if you go to the gym often, but these are not algorithms that are trying to figure out how can they make you or me a better person holistically—it is, 'how can an algorithm make a person a better worker'," he said.

And, perhaps more sinister, questions of identity, such as citizenship and other democratic rights, may be shaped by algorithms.

Cheney-Lippold said the US National Security Agency has been using an algorithm that attempts to determine whether someone is a citizen by analyzing the metadata contained in their personal communications. 

"Are you friends of people who are foreign or are you friends of people who are a citizen? Do you speak English or do you not speak English? Do you encrypt your communications?

"This is an interesting thing. If you encrypted communications you seem to be less like a citizen than somebody who doesn't encrypt communications," he said.
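The signals Cheney-Lippold lists suggest a rule-based scoring of the general shape sketched below. To be clear, this is entirely hypothetical: the signals come from his examples, but the weights, the neutral starting point and the structure are invented here for illustration; the NSA's actual criteria are not public.

```python
# Hypothetical sketch of metadata-based "citizenship" scoring of the
# kind described above. Signal names, weights and the neutral 0.5
# starting point are invented for illustration; the real criteria
# are not public.

def citizenship_confidence(metadata: dict) -> float:
    """Combine weighted metadata signals into a score in [0, 1]."""
    score = 0.5  # start neutral
    if metadata.get("contacts_mostly_foreign"):
        score -= 0.2
    if metadata.get("language") == "en":
        score += 0.2
    if metadata.get("uses_encryption"):
        # encrypting makes you look "less like a citizen"
        score -= 0.2
    return max(0.0, min(1.0, score))

# Two otherwise-identical profiles, differing only on encryption:
plain = {"language": "en", "uses_encryption": False}
encrypted = {"language": "en", "uses_encryption": True}
```

Under this toy scheme, the encrypted profile scores lower than the identical unencrypted one, which is the perverse incentive the quote points to: taking a privacy precaution reduces your algorithmic "citizenship".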

He suggested we're taking what should be an open discussion about public and democratic rights and lodging it in an algorithmically calculated database.

"And so what happens is that the idea of what a citizen is, which is more or less the cornerstone of what debates about the public and democratic rights should be, is actually being taken away from us and made illegible, so that it's lodged in a database and algorithmically calculated."

Worse still, the algorithms that predict our behaviour are notoriously secret, private and guarded, he said.

"They're trillion-dollar trade secrets. If Coca-Cola's recipe is one thing, Google's algorithm is 40 times that. They're silos of understanding ourselves, for purposes not for ourselves.

"I think that's the most problematic."