Artificial intelligence - wonderful and terrifying - will change life as we know it
Work Shift explores the ways in which our work lives are changing. To find out more, go to cbc.ca/workshift, where you can sign up for the Work Shift newsletter.
And on the September 3 edition of Cross Country Checkup, guest host Suhana Meharchand will ask listeners how their work lives are changing.
by Ira Basen
The HAL 9000 computer was the super-smart machine in charge of the Discovery One spaceship in Stanley Kubrick's classic 1968 movie 2001: A Space Odyssey. For millions of moviegoers, it was their first look at a computer that could think and respond like a human, and it did not go well.
In one of the film's pivotal scenes, the astronauts try to return from a mission outside the spacecraft, only to discover that HAL won't allow them back in.
"Open the pod bay doors, please, HAL," Dave, one of astronauts, demands several times.
"I'm sorry Dave, I'm afraid I can't do that," HAL finally replies. "I know that you and Frank were planning to disconnect me, and I'm afraid that's something that I can't allow to happen."
The astronauts were finally able to re-enter the spacecraft and disable HAL, but the image of a sentient computer going rogue and trying to destroy its creators has haunted many people's perceptions of artificial intelligence ever since.
For most of the past fifty years, those negative images haven't really mattered very much. Machines with the cognitive powers of HAL lay in the realm of science fiction. But not anymore. Today, artificial intelligence (AI) is the hottest thing going in the field of computer science.
The Vector Institute, a new artificial intelligence research hub in Toronto, will focus on a particular subset of AI called "deep learning." The field was pioneered by U of T professor Geoffrey Hinton, who is now the Chief Scientific Advisor at the Institute. Hinton and other deep learning researchers have been able to essentially mimic the architecture of the human brain inside a computer. They created artificial neural networks that work in much the same way as the vast networks of neurons in our brains, which, when triggered, allow us to think.
"Once your computer is pretending to be a neural net," Hinton explained in a recent interview in the Toronto office of Google Canada, where he is currently an Engineering Fellow, "you get it to be able to do a particular task by just showing it a whole lot of examples."
So if you want your computer to be able to identify a picture of a cat, you show it lots of pictures of cats. But it doesn't need to see every picture of a cat to be able to figure out what a cat looks like. This is not programming the way computers have traditionally been programmed. "What we can do," Hinton says, "is show it a lot of examples and have it just kind of get it. And that's a new way of getting computers to do things."
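To make that idea concrete, here is a toy sketch of "learning from examples": a single artificial neuron trained with the classic perceptron rule. The "cat" features and examples below are invented for illustration; real deep learning systems like the ones Hinton describes stack many layers of such units and use far more sophisticated training, but the principle is the same, as the machine is shown labelled examples and adjusts its connection weights when it gets one wrong.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Train one artificial neuron.

    examples: list of (feature_vector, label) pairs, label 0 or 1.
    Returns learned weights and bias.
    """
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            # Weighted sum of inputs, like a neuron summing its signals.
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # -1, 0, or +1
            # Nudge each weight toward reducing future mistakes.
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, features):
    return 1 if sum(w * x for w, x in zip(weights, features)) + bias > 0 else 0

# Hypothetical picture features: (has_whiskers, has_pointy_ears, barks)
examples = [
    ((1, 1, 0), 1),  # cat
    ((1, 0, 0), 1),  # cat
    ((0, 1, 1), 0),  # not a cat
    ((0, 0, 1), 0),  # not a cat
]

weights, bias = train_perceptron(examples)
print(predict(weights, bias, (1, 1, 0)))  # a whiskered, pointy-eared animal -> 1
```

Nobody wrote a rule saying "whiskers mean cat"; the neuron arrived at that weighting itself, purely from the examples it was shown. That is the shift Hinton is pointing at, albeit at a vastly smaller scale than a deep network trained on millions of photos.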
"Summoning the demon"
For people haunted by memories of HAL, or Skynet in the Terminator movies — another AI computer turned killing machine — the idea of computers being able to think for themselves, to "just kind of get it" in ways that even people like Geoffrey Hinton can't fully explain, is far from reassuring.
They worry about "superintelligence" — the point at which computers become more intelligent than humans, and we lose control of our creations. It's this fear that has people like Elon Musk, the man behind the Tesla electric car, declaring that the "biggest existential threat" to the planet today is artificial intelligence. "With artificial intelligence," he asserts, "we are summoning the demon."
People who work in AI believe these fears of superintelligence are vastly overblown. They argue we are decades away from superintelligence, and we may, in fact, never get there. And even if we do, there's no reason to think that our machines will turn against us.
Yoshua Bengio of the University of Montreal, one of the world's leading deep learning researchers, believes we should avoid projecting our own psychology onto the machines we are building.
"Our psychology is really a defensive one," he argued in a recent interview. "We are afraid of the rest of the world, so we try to defend from potential attacks." But we don't have to build that same defensive psychology into our computers. HAL was a programming error, not an inevitable consequence of artificial intelligence.
"It's not like by default an intelligent machine also has a will to survive against anything else," Bengio concludes. "This is something that would have to be put in. So long as we don't put that in, they will be as egoless as a toaster, even though it could be much, much smarter than us.
"So if we decide to build machines that have an ego and would kill rather than be killed then, well, we'll suffer from our own stupidity. But we don't have to do that."
Humans suffering from our own stupidity? When has that ever happened?
Click 'listen' above to hear Ira Basen's documentary on artificial intelligence and "deep learning."