Math prof Peter Zoeller-Greer writes in response to “Rob Sheldon: Why human beings cannot design a conscious machine”:
I became a theist years ago because of my work in quantum physics (by the way, I did my dissertation in mathematics on a quantum-mechanical problem).
I came to a similar rejection myself when I read Roger Penrose’s book “The Large, the Small and the Human Mind” years ago. He proposed that quantum mechanical effects in our brain have to do with free will and mind. What he wrote is similar to Rob Sheldon’s comments: QM may have very unpredictable and incalculable effects that can never be reproduced simply by replacing, say, a neuron with a chip. The chip may mimic the functions of a neuron, but, as you wrote, the QM effects in a system as complicated and delicate as the brain are unpredictable and will surely differ in the artificial system. So some may be right in believing that somewhere during that exchange process the system will break down.
To think this out, I put myself in the position of assuming that the converse is true, that we can design a conscious machine. This is the way one learns most. Sometimes in seminars, when students must write papers on diverse subjects, I intentionally assign the pro side to students who are con advocates and vice versa. In that way one is forced to “see” the world through the eyes of one’s opponents.
But one thing remains true for me: All the arguments and counter-arguments confirmed that one can only draw probability conclusions, not impossibility conclusions (the latter are always circular, though subtly so).
So in the end, if you and I were confronted with an android whose behavior is in every respect indistinguishable from a human being’s, then the only “measurable” difference is carbon vs. silicon. And to say, on that basis alone, that the android is merely simulating “is” carbon-based chauvinism… The rest of our counter-arguments always have an “…it may be that…” or an “…it is probably impossible to copy all functions…” etc. in them.
So for me, if I had an encounter with such an android, one that seems to have feelings, can cry (artificial) tears, and would like to be seen as equal to humans (greetings from the story “I, Robot”), I would prefer to err on the side of supposing that this entity really may have feelings. This may be wrong, but it would be terrible if I erred on the other side, wouldn’t it?
We could have almost the same discussion about humans made in a lab. Then all the technical objections no longer apply, and we come again to the core of the problem: Does a lab-grown human have real consciousness? And would it not be terrible if we erred in deciding? But perhaps I had best not open another can of worms…
Bio: Peter Zöller-Greer was born in 1956 in Mannheim, Germany. He studied mathematics and theoretical physics in Siegen and Heidelberg. In 1981 he received his M.A. in mathematics from the University of Heidelberg. In 1990 he received his Ph.D. from the University of Mannheim for a mathematical solution to a quantum mechanical problem. From 1981 on he worked as a computer researcher at ABB Mannheim and as a lecturer at several colleges. Since 1993, he has been professor of mathematics and computer science at the State University of Applied Sciences in Frankfurt am Main, Germany.
See also: Why human beings cannot design a conscious machine: Basic physics would suggest that even a single neuron has properties that cannot be duplicated by all the world’s supercomputers running Attoflop simulations.