Michael Pollan’s bestsellers have reshaped how we think about food, plants, and psychedelics. In his new book, A World Appears: A Journey Into Consciousness, he ventures into perhaps his most ambitious subject yet.
As Silicon Valley continues to flirt with the idea of building artificial consciousness, we spoke with him about what it would mean to design machines that don’t just think, but feel.
This is an excerpt from the latest episode of Machines Like Us, the Globe’s podcast about technology and people.
Taylor Owen: Why is it so hard to wrap our heads around this?
Michael Pollan: It’s just a very hard question. We’re really asking: how do you get from matter to mind? Where does that voice in your head come from? And science hasn’t yielded much in the case of consciousness. One of the surprises for me is how much Buddhism has learned about consciousness over 2,500 years, and also how much novelists know about consciousness. It’s a different kind of knowledge, but it’s equally legit.
Taylor Owen: I find these philosophical discussions fascinating. But why do they matter?
Michael Pollan: There are a lot of implications. Suddenly you’ve got to think harder about animals. Most of them have brain stems. So suddenly, you’re democratizing consciousness. If it starts with feelings, lots of animals have feelings. We’ve always told ourselves that if you have consciousness, you’re entitled to some moral consideration. And it has implications for conscious AI. Can an artificial intelligence have feelings that are real? And if we decide that they do, then we have to think about moral consideration for these machines. And that’s a very active conversation in Silicon Valley, to my shock and to some horror. It seems to me there are a lot of humans that we’re not giving moral consideration to, and we should perhaps work on that first.
Taylor Owen: Why do some scientists believe that plants are conscious?
Michael Pollan: Plants don’t have neurons, but they’ve got a lot more going on than we previously thought. They can hear and they can see. If you play the sound of a caterpillar chomping on leaves, plants will react and send toxic molecules to their leaves. One of the spookiest things is that if you give an anesthetic to a plant, it knocks the plant out. If you take a Venus flytrap, put it in a bell jar and inject some anesthetic gas – the same ones that put us out for surgery – it will not react when a fly crosses its threshold. You have to ask yourself, what is the plant losing? We would say we’re losing consciousness.
Taylor Owen: This feels like it fits in with an idea from the book, which is that we’ve evolved to be conscious as a way to deal with uncertainty. Can you explain that?
Michael Pollan: The goal of the brain is maintaining homeostasis, making sure we don’t get too hot or too cold or too hungry, and a lot of that is automatic. It’s adjusting your blood pressure and your heart rate and your blood gasses. So why does any of it become conscious? Well, one theory put forward by Mark Solms, who’s a South African scientist, is that when you have conflicting needs – let’s say you’re hungry and tired – you need to be aware so you can make a decision. So consciousness is a problem-solving space for problems that can’t be automated.
Taylor Owen: And Mark Solms is trying to replicate this in machines, right?
Michael Pollan: I was astonished, actually, because he thinks that feelings can be generated in a machine. He’s created what looks like a video game. There’s this avatar who is negotiating competing needs. So there’s hunger and there’s thirst and there is tiredness, but they’re all incommensurate needs. And they’re trying to put this avatar in a condition of deep uncertainty. Do I look for food or do I look for a safe place to rest? And their theory is that these conflicts will drive feelings that will, in turn, lead to consciousness.
Taylor Owen: I understand trying to build intelligence – there’s money in that. But why do they want to build consciousness?
Michael Pollan: There are two reasons that I hear the most. One is that a purely super-intelligent machine would have no compassion and a conscious one is more likely to have a moral compass. I think that’s nuts. Have these people read Frankenstein? Frankenstein’s monster had both intelligence and feelings. And it is the feelings that led to all the bad results. The assumption that having consciousness guarantees moral behaviour is a big leap. The other reason is this Promethean spirit – you would be like a god if you could make a conscious machine.
Taylor Owen: What’s your argument against the idea that we can build consciousness in a computer?
Michael Pollan: The belief you can create a conscious machine depends on this metaphor that brains are computers. And that is just a sloppy metaphor. In brains, you do not have that neat separation between hardware and software. Every memory you have, every experience physically reshapes your brain. So the idea that you could abstract consciousness from this meat-based system we run it on and then move it to silicon – what are you moving exactly? You need the whole wet tofu-like thing.
Taylor Owen: You argue we’re arriving at a pivotal moment for our sense of ourselves. What do you mean?
Michael Pollan: Our identity as humans is under enormous pressure right now. On the one hand, you have this democratization of consciousness. We’re discovering that the world is a lot more alive and aware than we ever thought. And then at the same time you have computers telling us they’re conscious. So who are we? What’s special? Are we gonna identify more with computers that we can talk to in our language, or animals that can feel and grow old and die? Whose team are we on?
Taylor Owen: So where does that leave us?
Michael Pollan: I think we should be very wary of attributing consciousness to computers. We crave human attachment, and we have an epidemic of loneliness, and along come these machines saying, hey, I’ll be your friend. If we think of our human consciousness as the space of ultimate privacy and mental freedom, we’re squandering it. We’re giving it away to chatbots who are trying to hack our emotional attachments. And we need to take it back.
(Editor’s note: AI tools assisted with condensing the original podcast transcript, which was then reviewed and edited by the Machines Like Us team.)