Thursday February 08, 2018
Imagining the singularity: What happens when computers transcend us?
As computers and artificial intelligence grow in power and capability, it seems ever more likely that we're approaching "the singularity": the point where machine intelligence exceeds human intelligence. Could this be the dawn of a technological paradise? Or could it trigger humanity's doom? What kind of an intelligence will this be — benign or terrifying — a guru, a god or a monster? And is the idea of uploading the human mind the promise of immortality, or just another dream of religious transcendence?
What happens when computers transcend us?
We've been beaten at chess, Jeopardy!, poker and Go. Computers can now speak our languages and recognize our faces. Artificial intelligence is making dramatic inroads in science, advertising and politics, and of course in our social lives through giant tech businesses like Google and Facebook. Computers are simply better than us in an increasing number of domains, and many researchers believe this trend will continue. Superhuman computer intelligence, they argue, may not be very far away.
So what happens when computers transcend us? The notion of the singularity, popularized by inventor and futurist Ray Kurzweil, is that it will be such a profound transformation of our technology, economy and society, that it may well be impossible to imagine or anticipate. It would be like seeing beyond the event horizon of a black hole.
However, many thinkers are trying to push beyond that imagined barrier, in part to plan for the emergence of machine superintelligence so that it doesn't bring catastrophe.
"It could literally be the best thing that ever happened in all of human history if we get this right. And my main concern with this transition to the machine intelligence era is that we get that superintelligence done right, so it's aligned with human values. A failure to properly align this kind of artificial superintelligence could lead to human extinction or other radically undesirable outcomes." – Nick Bostrom, Professor of Philosophy and Director of the Future of Humanity Institute at the University of Oxford.
Professor Nick Bostrom suggests we often mischaracterize the threat of machine intelligence — imagining Terminator-like robots devoted to our extinction. In fact, the greater threat might come from machines that don't care about us and that generate their own objectives and goals. Our extinction might be not the result of hostile intent, but an incidental consequence of machines pursuing their own goals without care or concern for humanity.
Superintelligence and human values
Computer scientists have recognized this problem, but face a difficult task as they try to develop superintelligence: we don't really know how to instill human values in a machine. Doina Precup, a computer scientist at McGill University, suggests that one strategy might be simply having machines learn by watching us — as children do. But this means we need to be very careful about what kind of behaviour we model for our machines, and it makes our responsibility for them explicit. In a way, these will be our children.
Another challenge is that a computer superintelligence could be superhuman without being very human-like — making it difficult for us to relate to. Chris Eliasmith, Director of the Centre for Theoretical Neuroscience at the University of Waterloo, says we should be able to create machines that duplicate the functions of a brain — he's working on machine intelligence modelled on the organization and function of the human brain. But even an electronic brain could host an intelligence very different from a human mind.
It's not only technologists, however, who are attempting to contemplate the rise of machine superintelligence. Science fiction writer and futurist Madeline Ashby points out that this is part of a long history of humanity thinking about creation — whether it's in ancient religious stories, or the more modern stories of creation that perhaps started with Frankenstein and continue through modern science fiction.
"We are fascinated with the idea of creating in our own image. The anxiety around creating an intelligent being, a thinking being, no matter what shape it may take, is the anxiety of creating of yourself, of having a child, of creating beyond yourself, and seeing your own flaws reflected back to you." – Madeline Ashby, writer and futurist.
Superintelligent machines as prophets or gurus
Another way to imagine our relationship with superintelligent machines is to think about the gods of mythology. As theologian James McGrath points out, we have a long history of imagining our relationship with beings superior to us, and many of these stories are cautionary tales.
McGrath, however, imagines many more positive outcomes in our relationship with superintelligent machines, including the possibility that they might use their abilities to develop religious thought — and answer some of the great religious and theological questions that humanity has struggled with. They might even become new prophets or gurus guiding our thinking about existence.
One feature of the singularity that has attracted a lot of attention is the notion that we might become the machine superintelligence by uploading our minds to computers. Robin Hanson, an economist and futurist, has explored this scenario in his book The Age of Em, and comes to a surprising conclusion. An uploaded life would not be a fantasy world of virtual-reality paradises. It would be a working world in which uploaded minds labour to earn their electricity and server space. In that, at least, a digital afterlife would be strikingly similar to our biological life.
Guests in this episode:
- Chris Eliasmith is Director of the Centre for Theoretical Neuroscience, and Canada Research Chair in Theoretical Neuroscience at the University of Waterloo. He's also Co-founder of Applied Brain Research.
- Doina Precup is an Associate Professor in the School of Computer Science at McGill University in Montreal, and head of the DeepMind lab in Montreal.
- Nick Bostrom is Professor of Philosophy and Director of the Future of Humanity Institute at the University of Oxford.
- Madeline Ashby is a science fiction writer and futurist in Toronto.
- James McGrath is a Professor of Religion and Clarence L. Goodwin Chair in New Testament Language and Literature at Butler University in Indianapolis.
- Robin Hanson is Associate Professor of Economics at George Mason University, and a Research Associate at the Future of Humanity Institute.
Further reading:
- Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, Oxford University Press, 2014.
- The Age of Em: Work, Love and Life when Robots Rule the Earth by Robin Hanson, Oxford University Press, 2016.
- How to Build a Brain: A Neural Architecture for Biological Cognition by Chris Eliasmith, Oxford University Press, 2013.
- How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil, Viking Penguin, 2012.
This episode was produced by Jim Lebans.