
Rob Sheldon: Why human beings cannot design a conscious machine


Our physics color commentator Rob Sheldon offers some thoughts:


Some have suggested that we could replace the neurons in a human brain one at a time with, say, electronic circuits. Size shouldn’t be an issue, but even if it is, we can imagine thin silver wires connecting the electronics cabinet with the brain under vivisection. As I interpret it, the question is “How many wires will it take before we have transferred the consciousness to the electronics cabinet?”

Now a single neuron has 10,000 or so synapses where it connects to other neurons. Each of these synapses uses some complicated chemistry to enhance or inhibit neighboring cells. Each of those chemical reactions involves tens or hundreds of membrane-spanning active channels. And every one of those channels has a “memory” which involves not just tuning, but active maintenance, some of it from non-neuron glial cells that “feed” the neuron. The dynamic “web” of interactions is a permutation of the number of connections, so if there be 10^15 synapses in a human brain, the number of ways of permuting the connections among all those synapses is (10^15)!, which is larger than 10^(10^15). That’s a 1 with more than a quadrillion zeroes after it, roughly a quadrillion zeroes more than Dembski’s universal computational bound of 10^150, so you can see the problem. If consciousness is one of those dynamic brain states, we will never find it even with the best analytical tools available.
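For what it’s worth, the size of that factorial can be checked with the leading terms of Stirling’s approximation. Here is a minimal Python sketch, taking the post’s figure of 10^15 synapses as a given rather than as an established measurement:

```python
# Back-of-envelope check of the combinatorics above, using the leading terms
# of Stirling's approximation: log10(N!) ~ N*log10(N) - N*log10(e).
# N = 10^15 is simply the synapse count quoted in the post.
import math

N = 10**15
log10_factorial = N * math.log10(N) - N * math.log10(math.e)

print(f"digits in N!               : about {log10_factorial:.3e}")  # ~1.46e+16
print(f"digits in Dembski's 10^150 : 150")
print(f"ratio of digit counts      : about {log10_factorial / 150:.1e}")
```

On that arithmetic the factorial has on the order of 10^16 digits, which is what licenses the “quadrillion zeroes” figure above.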

I say all this merely to point out that brains are not computers, and we are many, many decades away from even understanding how the cells are wired, much less duplicating them with silver wires and transistors. But our questioner is a persistent man, and will no doubt ask, “But who is to say that in 2050 we can or cannot replace a single neuron with a fully functioning electronic circuit?”

Let me rephrase this to ask, “Is physics fully reductionistic? Can we describe the macroscopic behavior as the collective sum of microscopic components?”

Here’s a little story. Physics broke down all matter into 104 elements, and those elements further divided into 3 pieces: protons, neutrons and electrons. So for a while, reductionism looked promising. Then those protons were broken down into 200+ subatomic particles, which was very discouraging. The “standard model” of the late 1970s managed to put all those subatomic particles into 17 pieces: 6 quarks, 6 leptons, 4 gauge bosons and the Higgs. Not as simple as 3, but manageable. But there’s a price. None of those 17 pieces has any explanation for its existence; their properties are empirically derived. Despite 40 years of particle theorists pushing technicolor, supersymmetry, SU(5), etc., nothing has further simplified the standard model. Just recently I read that roughly 2/3 of the effective mass of a quark comes from kinetic energy bottled up by the “asymptotic freedom” potential. So it was worse than I thought: not only are quarks not getting simpler, but their interactions with the environment are becoming integral to their function.
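For readers keeping count, the 17 pieces tally up as follows; a quick sketch using the conventional grouping (the W counted once, as is usual in this kind of count):

```python
# The usual tally of the standard model's 17 pieces, as grouped above.
standard_model = {
    "quarks":        ["up", "down", "charm", "strange", "top", "bottom"],
    "leptons":       ["electron", "muon", "tau",
                      "electron neutrino", "muon neutrino", "tau neutrino"],
    "gauge bosons":  ["gluon", "photon", "W", "Z"],
    "scalar bosons": ["Higgs"],
}

total = sum(len(members) for members in standard_model.values())
print(total)  # 17
```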

Reductionism fails utterly if the environment turns out to play a central role in the deconstruction. Why? Because now you are in a vicious loop: particles depend on their friends, and their friends depend on their friends, and soon the whole universe is involved in determining the mass of that little quark.

Quantum mechanics ran into this 70 years ago—the smaller the particle, the bigger its wavefunction. Entangled particles “exist” in some spread-out state that cannot be localized. The harder you look, the further the wavefunction extends. Schroedinger’s cat, Wigner’s entangled observer, it just goes on and on.

Reductionism applied to biology produced the concept of “species”, and yet “species” can’t be defined. Why? For all the same reasons–it depends on the environment.

Example after example could be given where reductionism fails. One might even suppose there is a law–that the microscopic reflects the macroscopic. Mathematically we might argue that “scale-free” laws that look the same whether viewed with a telescope or a microscope are more likely to be universal than the simple reductionist laws we are taught in college. The whole debate over “dark matter” versus MOND (modified Newtonian dynamics) is an examination of this problem. So it is not for lack of trying that reductionism fails, but because of the many recent failures of the early, naïve analysis.
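To make the “scale-free” remark concrete: a power law f(x) = c·x^a keeps exactly the same form under any rescaling of x, which is the property being appealed to. A small illustrative Python check, with arbitrary constants chosen only for the demonstration:

```python
# A power law f(x) = c * x**a is "scale-free": rescaling x by any factor k
# just multiplies f by the constant k**a, so the functional form looks the
# same whether we zoom in (microscope) or zoom out (telescope).
def f(x, c=2.0, a=-1.7):   # illustrative constants, nothing physical
    return c * x**a

k = 1000.0                 # zoom factor
for x in (0.5, 3.0, 42.0):
    assert abs(f(k * x) - k**a * f(x)) < 1e-12 * abs(f(x))
print("f(k*x) == k**a * f(x): the law has no preferred scale")
```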

Basic physics would suggest that even that single neuron has properties that cannot be duplicated by all the world’s supercomputers running exaflop simulations.

This does not prevent anyone from making a “reductionist” assumption. But, like many silly philosophical positions, it flies in the face of empirical measurement, serious metaphysics, and common sense.

Perhaps my anti-reductionist position can also be illustrated with a Klein bottle*, because the inside is also the outside.

Note: Rob Sheldon is author of Genesis: The Long Ascent

See also: Neuroscientist: We will never build a machine that mimics our personal consciousness

*A Klein bottle is a closed, one-sided surface: its inside and its outside are the same surface.

Comments
The motions of all the water molecules in a kettle are untraceable with even current-day supercomputers, not to mention everything external that might affect them. But fortunately for us, we're not really interested in predicting or explaining most of those properties, even though they are the overwhelming majority of what happens in physics. This is because none of them have any bearing on what we want to do, if what we want is to make tea. We can make progress in this case, and others, because they can be expressed in terms of high-level phenomena that are quasi-autonomous, that is, nearly self-contained. When explanations resolve at higher levels, this is emergence. IOW, the behavior of high-level physical quantities consists of nothing but the behavior of their low-level constituents with most of the details ignored. And this is just one example. Reductionism is a misconception because it requires the relationship between all levels of explanation to always be reductionist in nature. But very often it is not. And we can make progress regardless. So, it's unclear that we actually need to trace what would be physically untraceable to make progress.
critical rationalist, February 18, 2018 at 03:11 PM
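To put the comment's "most of the details ignored" point in concrete terms, here is a toy Python sketch (purely illustrative, not drawn from the comment itself): a million fluctuating microscopic values collapse into one stable high-level quantity.

```python
# Toy coarse-graining: the microscopic details are effectively untraceable,
# but a high-level average (standing in for "temperature") is stable and can
# be reasoned about on its own. Nothing here models real water.
import random

random.seed(0)
molecule_energies = [random.expovariate(1.0) for _ in range(1_000_000)]

mean_energy = sum(molecule_energies) / len(molecule_energies)
print(f"coarse-grained proxy for temperature: {mean_energy:.4f}")  # close to 1.0
```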
Answering "can it be done in theory" with "it cannot be done in practice" is somewhat persuasive, but then you'll have to deal with claims that humans would never fly, or no one will every need more than 64k of memory. Much more persuasive is a proof that the mind can do things that are in principle impossible for any kind of Turing machine, e.g. solve the halting problem.EricMH
EricMH, February 16, 2018 at 07:57 AM
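The halting-problem example EricMH mentions rests on the classic diagonal argument. A minimal Python sketch of why a universal halts() checker cannot exist (the function names here are hypothetical, for illustration only):

```python
# Assume a perfect halts() oracle exists, then build a program that does the
# opposite of whatever the oracle predicts about it. The contradiction shows
# no Turing machine can implement halts() in general.
def halts(program, argument) -> bool:
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("no algorithm can decide this in general")

def diagonal(program):
    if halts(program, program):
        while True:          # loop forever, contradicting "it halts"
            pass
    return "halted"          # halt, contradicting "it loops forever"

# diagonal(diagonal) yields a contradiction either way, which is the proof.
```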
Samson, Samson, the Physicists are upon you!
kairosfocus, February 14, 2018 at 01:52 AM
Hey Rob Sheldon - I don't think we need to defeat these inane arguments only with an appeal to large numbers, because some naive person will always figure that engineering progress will defeat your "can't be done" theory. Isn't it true that all of these "build a brain from circuits" theories always suffer from the attempt to violate the Heisenberg principle? Don't they all run into the problem of knowing the energies of many particles within too restricted a time scale? (Energy and time, of course, being complementary variables.)
JDH, February 14, 2018 at 12:25 AM
On the dot. Especially the part about active maintenance and glia. Tack on hormones and the infinite constant ANALOG feedback loops within and outside the brain, possibly including the gut bacteria. You simply CAN'T GET THERE with digital.
polistra, February 13, 2018 at 04:06 PM
