The Current

Do you swear at Alexa? What our treatment of AI assistants says about humans

Do you swear or lash out at Siri or Google when the AI assistant doesn’t follow your commands? We talk to experts about what our interactions with the devices could say about human beings.

The way we treat digital assistants may say something about ourselves, says philosopher David Gunkel

An Amazon Alexa device is switched on in Seattle. Experts have differing ideas about what yelling at your Alexa or other AI devices says about human behaviour. (Elaine Thompson/Associated Press)


You've probably been there before — asking Siri or Google to play your favourite song, or maybe your favourite radio station, but the digital assistant just doesn't understand you.

Did you lash out in frustration? Hurl a curse word or two its way?

Some people see these forms of AI simply as machines designed to take our orders. But one expert argues the way we treat digital assistants may say something more about ourselves as human beings.

"The German philosopher Immanuel Kant had this way of thinking about animals and their standing in our social sphere," said David Gunkel, a philosopher and professor at Northern Illinois University.

David Gunkel is a philosopher and professor at Northern Illinois University. He says feeling guilty about getting mad at AI devices like Siri or Google is a reasonable reaction. (Submitted by David Gunkel)

"He was no animal rights advocate but he said, you know, we should worry about kicking the dog, because if we kick the dog it makes us callous. It demonstrates a kind of interaction with other creatures in our social world that could translate to how we treat each other."

'Completely different' social consequences

Gunkel told The Current's Anna Maria Tremonti that feeling bad about swearing at an AI device is a reasonable reaction.

Julie Carpenter, however, argued the social consequences of lashing out at a dog versus AI are "completely different."

Carpenter studies human-robot interaction with emerging technologies. She told Tremonti that people should be free to treat AI devices however they see fit.

"Clearly if you harm an animal you're going to see the reactions, the pain, the bewilderment, the horror of that actual animal," Carpenter said.

"If you're rude to AI, you might get a little Easter egg response pulled in by the developers."

Robots as legal entities?

That's not to say, however, that the way humans view AI devices like Siri and Google won't change in the future.

Gunkel predicts our emotional attachment to AI devices like Siri will change over time. (iStockphoto)

Just as corporations are considered persons under the law, the same could happen to robots and AI down the road, said Gunkel.

"This is going to evolve over time, and our relationships and our emotional attachments will mature in various ways," he said.

"And the direction that that matures, I think, is something we have to pay particular attention to — both morally, socially and also legally."

To discuss how we treat AI, Tremonti spoke with:

  • Julie Carpenter, research fellow in the Ethics and Emerging Sciences Group at California Polytechnic State University.
  • David Gunkel, philosopher and distinguished teaching professor of communication technology at Northern Illinois University.


Written by Kirsten Fenn. Produced by Alison Masemann.