Thanks to johnnyb for alerting us, in his last post, to John Searle’s talk at Google. Johnny said he had listened to only about 45 minutes when he wrote his post. Too bad, because the best part of the entire video is the following colloquy between a questioner and Searle, which begins at 58:25:
The questioner posits the following:
You seem to take it as an article of faith that we are conscious, that your dog is conscious, and that that consciousness comes from biological material, the likes of which we can’t really understand. But – forgive me for saying this – that makes you sound like an intelligent design theorist, who says that because evolution and everything in this creative universe that exists is so complex, it couldn’t have evolved from inert material. So somewhere between an amoeba and your dog, there must not be consciousness, and I am not sure where you would draw that line. And so if consciousness in human beings is emergent, or even in your dog, at some point in the evolutionary scale, why couldn’t it emerge from a computation system that is sufficiently distributed, networked, and has the ability to perform many calculations and maybe is even hooked into biologic systems?
Searle responds:
Well, about ‘could it emerge,’ miracles are always possible. You know. How do you know that you don’t have chemical processes that will turn this [holding up comb] into a conscious comb? OK. How do I know that? Well, it’s not a serious possibility. I mean the mechanisms by which consciousness is created in the brain are quite specific, and remember – this is the key point – any system that creates consciousness has to duplicate those causal powers. It’s like saying, ‘you don’t have to have feathers in order to have a flying machine, but you have to duplicate and not merely simulate the causal power of the bird to overcome the force of gravity in the earth’s atmosphere.’ That’s what airplanes do. They duplicate causal powers. They use the same principle, Bernoulli’s principle, to overcome the force of gravity. But the idea that somehow or other you might do it just by doing a simulation of certain formal structures of input-output mechanisms, of input-output functions; well, miracles are always possible, but it doesn’t seem likely. That’s not the way evolution worked.
The questioner responds:
But machines can improve themselves, and you are making the case for why an amoeba could never develop into your dog over a sufficiently long period of time and have consciousness.
Searle:
No, I didn’t. No.
The questioner:
You’re refuting that consciousness could emerge from a sufficiently complex computation system.
Searle:
Complexity is always observer relative. If you talk about complexity you have to talk about the metric. What is the metric by which you calculate complexity? I think complexity is probably irrelevant. It might turn out that the mechanism is simple. There is nothing in my account that says a computer could never become conscious. Of course. We’re all conscious computers, as I said. And the point about the amoeba is not that amoebas can’t evolve into much more complex organisms. Maybe that’s what happened. But the amoeba as it stands, a single-celled organism, doesn’t have enough machinery to duplicate the causal powers of the brain. I am not doing a science fiction project to say ‘well, there can never be an artificially created consciousness by people busy designing computer programs.’ Of course, I am not saying that is logically impossible. I’m just saying it is not an intelligent project. If your life depends on building a machine that creates consciousness, you don’t sit down at your console and start programming things in some programming language. It’s the wrong way to go about it.