'This century is crucial': Why the U.K.'s astronomer royal says humanity is at a critical crossroads
'It's an issue of ethics, which determines the extent to which we prioritize these issues,' says Martin Rees
British cosmologist and astrophysicist Martin Rees says that the decisions we make in the next century will be "crucial" to determining the fate of humanity — for better or for worse.
Spark host Nora Young spoke with Rees, who holds the prestigious post of the U.K.'s astronomer royal and recently authored On The Future: Prospects for Humanity, about what our collective future could hold, and how we'll tackle the issues we'll face down the road.
Here is part of their conversation.
In your latest book you write that scientists aren't actually much good at making predictions [about the future], but you have tried nonetheless. What made you want to write this book?
I've had the opportunity in my career to interact with scientists in different fields outside my own special area of astronomy and space science. And of course, I get very anxious. So I write my book as an astronomer, as a scientist, and as an anxious member of the human race.
And that reflects a broader concern that you have with the challenges that we face: things like climate change, overpopulation.
That's right…. It's the first century in the 45 million centuries that the Earth has existed where one species, namely the human species, can determine what happens next.
We can either leave a depleted and despoiled planet for our successors, or we can trigger marvellous transitions, even to post-human evolution. So this century is crucial.
You describe yourself as a techno-optimist. So what does that mean, first of all?
I'm very excited about the potential of technology. If we think of the rapid developments we've had in computers, mobile phones and all that, it's wonderful how fast that's been developing. But of course the fact that there are greater benefits also means there are greater risks.
And although I describe myself as a techno-optimist, I would also describe myself as rather a political pessimist, because I think the gap between the way the world could be and the way it is, is actually getting wider, and that's an ethical indictment of all of us.
Although, as you say, you're a political pessimist, you draw an analogy to a case where an asteroid is heading toward the Earth. In that case, we as human beings would certainly find the wherewithal to try and stop it. And yet we don't seem to be acting rapidly enough when it comes to climate change. So why don't we seem to be addressing climate change in the same way?
It is short-term thinking, isn't it? And I think even those who agree that there is going to be a serious risk of devastating climate change by the end of the century don't agree on what we should do about it now.
I think it's an issue of ethics which determines the extent to which we prioritize these issues now. And of course, for politicians, it is a rather hard sell to get them to prioritize something where the main benefit is to people decades in the future and in remote parts of the world — because climate change will hit hardest people in the tropics, et cetera.
We seem to be at this point where ... machines might begin to supersede us in some areas of intelligence. Can you sketch out what that might look like?
We know machines can surpass humans in many domains. For at least 40 years we've had pocket calculators that can do arithmetic better than us. We now have computers which can do many, many things better than us because they think much faster.
On the other hand, they have a downside in that they can carry out surveillance and facial recognition and all that, and invade privacy. And I think there's going to be a growing tension between privacy, security and liberty in the next decades for various reasons.
But do you think this is a feature of our technologies now — that as they become more powerful, they increasingly carry this incredible opportunity, but braided inherently with it, this incredible risk as well?
Absolutely. The stakes are getting higher for just that reason. And I think we do have to be sure that we don't hand over to computers things that ought to be left to us because of ethical and prudential concerns.
One issue which I address in my book is the question of the change in the labour market.
Humans can be supplanted to an important extent by computers — routine legal work, computer coding, medical diagnostics and surgery and all that. So there's going to be a redeployment.
And I think in order for this to happen without societal disruption, we will have to ensure that there is heavy taxation of the robots and their owners and the big multinational companies. And that tax should be used to fund or subsidize huge numbers of jobs — carers for old people, for instance, teaching assistants, custodians in public parks — jobs where the human element is important and where no special skill is required.
When you look in the very long term, do you think that something like a generalized intelligence in artificial intelligence might be possible or even inevitable? And what would that look like?
I think the question is whether it can interact with the real world and have sort of senses like us, you know. You can have something which can think more deeply than us about many things. But the question is whether it really understands the world.
The computer called Watson … it knew everything in Wikipedia, et cetera. But when it was asked which is bigger, a shoe box or Mount Everest, it couldn't answer. And that's just an example of how, you know, it knows all these things but it doesn't really have a conception of the world. And so that's a big gap which would have to be surmounted before you can say they're like humans.
I suspect that we are going to want to control all these technologies on grounds of ethics and prudence, and of course these technologies can be misused.
My worry is that whatever regulations are agreed by national academies or governments, et cetera, would be hard to enforce globally — just as drug laws and tax laws can't be enforced effectively.
You're suggesting that the idea of sometime in the future leaving Earth and starting some, you know, colony on Mars for example is not actually a very good idea. Why is that?
I think it's a dangerous delusion to think that we can avoid the Earth's problems by mass emigration to Mars, because nowhere on Mars is as clement as the South Pole or top of Everest. And dealing with climate change on Earth is hard, but it's a doddle compared to terraforming Mars.
You've been a scientist for many decades. How do you think this time we're living in now compares with other decades in modern history? I mean, is it a good time for science and technology?
When I was a student in the 1960s we didn't know about the double helix, we weren't sure about the Big Bang and many other things. So there's been huge progress, and I think that will continue because the nature of science is that it's like an expanding frontier. And as it expands, the periphery gets longer and more new questions come into focus.
Many of the questions which we were addressing 40 years ago have now been settled and the questions we are addressing now couldn't even have been posed back then.
I think the pace is remaining high and science is, in a sense, advancing on a broader front than ever before.
This Q&A has been edited for length and clarity.