Evolutionists believe that mind can arise from matter. From atoms configured into molecules, configured into cells, configured into tissues, configured into a brain, mind can arise. Their molecules-to-man evolution story is in fact the narrative of the emergence of mind from matter. Here, in a sense, evolutionism and artificial intelligence (AI) meet in developing a fallacious more-from-less scenario.
For example, an evolutionist says:
I think that “larger objects” have properties not possessed by their parts. These properties include the capacity to have purposes, designs, moral principles, beauty, love, anger, and fear.
According to this evolutionist naturalistic conception, a “larger object” is simply a specific configuration of atoms, large enough to develop the emergent properties. The belief that properties such as those listed in the quote can spontaneously emerge from large configurations of atoms is called “emergentism”. In practice, we could consider “emergentism” an alias for “evolutionism”.
The “larger object” can also be the brain, filled with neural networks, where processes and states occur as the effects of algorithms. So the “emergentism” expressed above in terms of hardware – so to speak – can also be expressed in terms of software. This is exactly what Roger Penrose does, for example:
In my opinion, it is conceivable that in an algorithm there is a threshold of complication beyond which the algorithm shows mental qualities. [The Emperor’s New Mind, chap.1]
Let’s see, in the simplest terms, why mind is neither a configuration of atoms, nor a process or algorithm in the organism. It is common experience that mind recognizes “purposes, designs, moral principles, beauty, love, anger, and fear”. What recognizes configurations, states, and processes is not itself one of those configurations, states, or processes. The “recognizer” cannot emerge from what it recognizes. The binary relation between recognizer and recognized cannot be reduced to a single point. Example: what sees is different from what is seen; the eye cannot see itself. Analogously, mind, which recognizes what happens in the brain, is different from what happens there. Mind cannot arise, as an emergent property, from the neural processes it sees. This is a matter of principle.
Against this reasoning, emergentism doesn’t help evolutionists. It is useless to say that “systems may have properties not possessed by their parts”. Depending on the specific system and its parts, a system can indeed have certain additional properties, but not just any properties whatsoever. Natural example: while a single water molecule doesn’t form ice crystals, a set of water molecules shows the emergent property of forming ice crystals at a certain temperature. But no set of water molecules shows, say, the emergent property of self-igniting. The cause of all this is the physical laws. Artificial example: an airplane has the property of flying, which its parts do not have, but an airplane cannot have the property, say, of creating moral laws from thin air. What allows an airplane to fly is its intelligent design (ID). It is ID that adds to the airplane’s parts the capacity to fly, by means of an apt assembly (beyond, obviously, having designed the parts themselves).
So the controversy is not whether systems can have properties not possessed by their parts. They can have some. In general, the controversy is about what properties a specific system and its parts can develop, and what causes such properties to arise. Specifically, I claim that the human mind is not a property emerging from biological or artificial hardware configurations or software processes when their complication becomes large enough. And I claim that, still less, can mind be the result of an unguided material process as its cause. It is, yes, possible to fabricate artificial neural networks (“artificial brains”), but it is impossible to artificially create a human mind from chemicals in the lab. Mind is not a mere by-product of matter.
Thus, in the quote cited at the beginning, the problem is not the first statement, “larger objects have properties not possessed by their parts”, but rather the second: “these properties include the capacity to have purposes, designs…”. If the “larger object” is the brain, or even an entire organism, its emergent properties do not include “the capacity to have purposes, designs…”. Mind doesn’t arise bottom-up. Mind overarches body, brain and matter.
Analogously, to say that mind is a property of the brain is simply defective. It would change nothing to say that mind is a property of the whole organism. In any case, mind is not simply a property or attribute of large systems, because a property of a thing cannot be the recognizer of the thing and its properties. For example, a banana has the property of being yellow. The property of being yellow cannot recognize the banana and its properties.
As always, the problem is a priori materialism, which flattens any hierarchy. Between mind and matter there is an ontological hierarchy. Every man experiences this hierarchy daily, by using his mind to dominate matter. Unfortunately, evolutionists set aside this direct experience in favor of a wholly unsubstantiated and biased faith, which is what materialism is.