What if consciousness is just a hallucination?
New book challenges our perceptions of perception
As humans, we have a feeling of what it's like to be us, to be a self, alive and having experiences in the world. In short, we're conscious.
But what if that sense of a consciousness in the mind, that perceives and reacts to the world out there, is just an illusion?
In his new book, Being You, neuroscientist Anil Seth proposes a 'beast machine theory' of consciousness, which suggests that consciousness is merely the result of 'controlled hallucinations': our brain's best guesses about the world, generated in the service of keeping us alive.
Seth spoke to Spark host Nora Young about how his theory challenges some of our deeply held beliefs about consciousness and suggests that artificial intelligence will likely never achieve it.
Here is part of their conversation.
What is the beast machine theory?
It's a bit of an evocative term that actually goes back to René Descartes, the hugely influential 17th century philosopher who was probably the first to make consciousness a really difficult problem. He was famous for separating the universe into two modes of existence: matter stuff and mind stuff.
The other thing he was famous, or infamous, for at the time was denying that non-human animals had any meaningful conscious existence or selfhood of the kind that carries ethical or moral status, so he called non-human animals "bête-machines."
Other animals seemed to be made of the same stuff as humans: flesh and blood, living creatures. But in his view they still lacked the rational, conscious minds that gave humans their special status.
And I use the term almost to say the opposite, to rehabilitate it, because I think that our conscious experiences of the self and the world are intimately tied to our nature as living creatures. Emotion, this basic feeling of just being a living organism, is where consciousness and the experience of being a self start. And being a living creature is therefore fundamental to that.
So instead of saying that our nature as beast machines is irrelevant, I think what matters is that we experience the world and the self — with, through, and because of our living bodies. We are conscious beast machines, and I think that's a good thing. It brings us closer to the rest of nature rather than further apart.
So then what is the sense of selfhood from a beast machine perspective?
It can seem like the self is a unified entity, but actually, what it is to be me, to be anyone, has many different parts.
There's the experience of being a person with a name and a set of memories, a social identity that is continuous over time and develops over weeks and months and years. There are experiences of perceiving the world from a particular point of view. And then there are experiences of volition and of agency, what we might typically call free will. And then below even that, there are these experiences of emotion and of mood and of this object in the world that is my body.
And the beast machine aspect of selfhood is really, for me, the bottom level, the thing that undergirds and underpins everything else.
What does this mean for how we think about artificial intelligence, and in particular, general AI?
In one sense, there are strong parallels between many AI algorithms and emerging ideas about how the brain mechanisms of perceptual experience work. But then there are more philosophical questions about the possibility of building a conscious artificial intelligence.
There are a lot of different opinions. One is that consciousness is just a function of intelligence, but I think they're actually very distinct, and conflating them is another reflection of this unfortunate human tendency to think of ourselves as super-special and at the centre of everything.
For me anyway, the most basic expressions of consciousness don't really have anything to do with intelligence, but rather with perceiving and regulating the body. So the prospect of building a conscious machine is perhaps further away than other people think. It could be that a conscious machine would also have to be a living machine.
And by the way, I really don't think we should be even trying to build a conscious machine! It's often thought of as being an unstated or explicitly stated goal, but I've never heard a good argument for why we should be doing that.
So even though it seems very unlikely, I think it's worth thinking in advance about how we can put some guardrails around AI to make sure we don't inadvertently generate conscious machines, rather than relegating the question to science fiction scenarios.
This Q&A has been condensed and edited for length and clarity. To hear the full conversation with Anil Seth, click the 'listen' link at the top of the page.