Rob Sheldon: Why human beings cannot design a conscious machine
February 13, 2018 | Posted by News under Artificial Intelligence, Mind, Naturalism, Physics
Our physics color commentator Rob Sheldon offers some thoughts:
Some have suggested that we could replace the neurons in a human brain one at a time with, say, electronic circuits. Size shouldn’t be an issue, but even if it is, we can imagine thin silver wires connecting the electronics cabinet with the brain under vivisection. As I interpret it, the question is “How many wires will it take before we have transferred the consciousness to the electronics cabinet?”
Now a single neuron has 10,000 or so synapses where it connects to other neurons. Each of these synapses uses some complicated chemistry to enhance or inhibit neighboring cells. Each of those chemical reactions involves tens or hundreds of membrane-spanning active channels. And every one of those channels has a “memory” which involves not just tuning, but active maintenance, some of it from non-neuron glial cells that “feed” the neuron. The dynamic “web” of interactions scales like the permutations of the connections, so if there are 10^15 synapses in a human brain, the number of ways to order the connections among all those synapses is (10^15)!, which is at least 10^(10^15). That’s a 1 followed by more than a quadrillion zeroes. That number has trillions of times more zeroes than Dembski’s universal probability bound of 10^150, so you can see the problem. If consciousness is one of those dynamic brain states, we will never find it even with the best analytical tools available.
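For the curious, here is a back-of-the-envelope check of that arithmetic (a minimal sketch, not from the original post), using Stirling’s approximation to count how many zeroes (10^15)! actually carries:

```python
import math

N = 10**15  # the post's rough count of synapses in a human brain

# Stirling's approximation for the number of decimal digits in N!:
#   log10(N!) ~= N*log10(N) - N*log10(e)
digits = N * math.log10(N) - N * math.log10(math.e)

print(f"log10((10^15)!) ~= {digits:.2e}")                   # ~1.5e16, i.e. over a quadrillion zeroes
print(f"times more zeroes than 10^150: {digits / 150:.1e}")  # ~1e14
```

Either way you slice it, the number of zeroes runs well past a quadrillion, which is the point.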
I say all this merely to point out that brains are not computers, and we are many, many decades away from even understanding how the cells are wired, much less duplicating them with silver wires and transistors. But our questioner is a persistent man, and will no doubt ask, “But who is to say that in 2050 we can or cannot replace a single neuron with a fully functioning electronic circuit?”
Let me rephrase this to ask, “Is physics fully reductionistic? Can we describe the macroscopic behavior as the collective sum of microscopic components?”
Here’s a little story. Particle physics broke down all matter into 104 elements, and those elements were further divided into 3 pieces: protons, neutrons and electrons. So for a while, reductionism looked promising. Then experiments turned up 200+ subatomic particles, which was very discouraging. The “standard model” of 1978 managed to put all those subatomic particles into 17 pieces: 6 quarks, 6 leptons, 4 gauge bosons, and the Higgs. Not as simple as 3, but manageable. But there’s a price. None of those 17 pieces has any explanation for its existence; they are empirically derived. Despite 40 years of particle theorists pushing technicolor, supersymmetry, SU(5), etc., nothing has further simplified the standard model. Just recently I read that roughly 2/3 of the mass of a proton comes from the kinetic energy of its quarks and gluons, bottled up by this “asymptotic freedom” potential. So it was worse than I thought: not only are quarks not getting simpler, but their interactions with the environment are becoming integral to their function.
Reductionism fails utterly if the environment turns out to play a central role in the deconstruction. Why? Because now you are in a vicious loop: particles depend on their friends, and their friends depend on their friends, and soon the whole universe is involved in determining the mass of that little quark.
Quantum mechanics ran into this 70 years ago: the lighter the particle, the more spread out its wavefunction. Entangled particles “exist” in some spread-out state that cannot be localized. The harder you look, the further the wavefunction extends. Schroedinger’s cat, Wigner’s entangled observer: the examples go on and on.
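To put a rough number on that, here is a quick illustration (my own figures, not the post’s) of the de Broglie relation lambda = h/(m*v): the lighter and slower the object, the larger the distance over which its wavefunction is spread.

```python
h = 6.626e-34  # Planck's constant, J*s

# (mass in kg, speed in m/s) -- illustrative values only
cases = [
    ("electron",   9.109e-31, 1.0e5),
    ("proton",     1.673e-27, 1.0e3),
    ("dust grain", 1.0e-12,   1.0e-3),
]

for name, mass, speed in cases:
    wavelength = h / (mass * speed)  # de Broglie wavelength, lambda = h / p
    print(f"{name:10s} lambda ~= {wavelength:.1e} m")
```

The electron’s wavelength comes out around the size of an atom, while the dust grain’s is billions of times smaller than an atom, which is why macroscopic objects look classical and electrons do not.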
Reductionism applied to biology produced the concept of “species”, and yet “species” can’t be defined. Why? For all the same reasons: it depends on the environment.
Example after example could be given where reductionism fails. One might even suppose there is a law: that the microscopic reflects the macroscopic. Mathematically we might argue that “scale-free” laws, which look the same whether viewed with a telescope or a microscope, are more likely to be universal than the simple reductionist laws we are taught in college. The whole debate over “dark matter” versus MOND (modified Newtonian dynamics) is an examination of this problem. So it is not for lack of trying that reductionism fails, but because of the many recent failings of the early, naïve analysis.
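As a concrete taste of the dark-matter-versus-MOND debate just mentioned, the deep-MOND limit predicts flat galactic rotation curves with v^4 = G*M*a0. A rough sketch, using my own illustrative numbers for a Milky-Way-sized galaxy:

```python
G     = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
a0    = 1.2e-10     # MOND acceleration scale, m/s^2
M_sun = 1.989e30    # solar mass, kg

M = 1.0e11 * M_sun  # illustrative baryonic mass of a large spiral galaxy

# Deep-MOND prediction for the asymptotic (flat) rotation speed: v^4 = G*M*a0
v = (G * M * a0) ** 0.25
print(f"predicted flat rotation speed ~= {v / 1e3:.0f} km/s")  # ~200 km/s
```

Observed spirals do cluster around this one scale-free relation (the baryonic Tully-Fisher relation), which is part of why the debate remains live.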
Basic physics would suggest that even that single neuron has properties that cannot be duplicated by all the world’s supercomputers running exaflop simulations.
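For scale, here is the brute-force arithmetic behind that claim (a sketch under the assumption of a single exaflop machine, roughly the fastest class of supercomputer today):

```python
ops_per_second  = 1.0e18   # one exaflop machine
age_of_universe = 4.35e17  # seconds, ~13.8 billion years

total_ops = ops_per_second * age_of_universe
print(f"operations since the Big Bang ~= {total_ops:.0e}")  # ~4e35

# Even ~10^36 operations cannot begin to explore a state space whose
# size takes over a quadrillion digits just to write down, as estimated earlier.
```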
This does not prevent anyone from making a “reductionist” assumption. But, like many silly philosophical positions, it flies in the face of empirical measurement, serious metaphysics, and common sense.
Perhaps my anti-reductionist position can also be illustrated with a Klein bottle, whose inside is also its outside.
See also: Neuroscientist: We will never build a machine that mimics our personal consciousness