One of the underlying debates linked to the design issue is the notorious mind-brain gap challenge.
It keeps coming up, and on both sides of the ID debate.
I would therefore like to spark a bit of discussion with a clip from a Scott Aaronson Physics course lecture:
>> . . . If we interpret the Church-Turing Thesis as a claim about physical reality, then it should encompass everything in that reality, including the goopy neural nets between your respective ears. This leads us, of course, straight into the cratered intellectual battlefield that I promised to lead you into.
As a historical remark, it’s interesting that the possibility of thinking machines isn’t something that occurred to people gradually, after they’d already been using computers for decades. Instead it occurred to them immediately, the minute they started talking about computers themselves. People like Leibniz and Babbage and Lovelace and Turing and von Neumann understood from the beginning that a computer wouldn’t just be another steam engine or toaster — that, because of the property of universality (whether or not they called it that), it’s difficult even to talk about computers without also talking about ourselves.
So, I asked you to read Turing’s second famous paper, Computing Machinery and Intelligence. Reactions?
What’s the main idea of this paper? As I read it, it’s a plea against meat chauvinism. Sure, Turing makes some scientific arguments, some mathematical arguments, some epistemological arguments. But beneath everything else is a moral argument. Namely: if a computer interacted with us in a way that was indistinguishable from a human, then of course we could say the computer wasn’t “really” thinking, that it was just a simulation. But on the same grounds, we could also say that other people aren’t really thinking, that they merely act as if they’re thinking. So what is it that entitles us to go through such intellectual acrobatics in the one case but not the other?
If you’ll allow me to editorialize (as if I ever do otherwise…), this moral question, this question of double standards, is really where Searle, Penrose, and every other “strong AI skeptic” comes up empty for me. One can indeed give weighty and compelling arguments against the possibility of thinking machines. The only problem with these arguments is that they’re also arguments against the possibility of thinking brains!
So for example: one popular argument is that, if a computer appears to be intelligent, that’s merely a reflection of the intelligence of the humans who programmed it. But what if humans’ intelligence is just a reflection of the billion-year evolutionary process that gave rise to it? What frustrates me every time I read the AI skeptics is their failure to consider these parallels honestly. The “qualia” and “aboutness” of other people is simply taken for granted. It’s only the qualia of machines that’s ever in question.
But perhaps a skeptic could retort: I believe other people think because I know I think, and other people look sort of similar to me — they’ve also got five fingers, hair in their armpits, etc. But a robot looks different — it’s made of metal, it’s got an antenna, it lumbers across the room, etc. So even if the robot acts like it’s thinking, who knows? But if I accept this argument, why not go further? Why can’t I say, I accept that white people think, but those blacks and Asians, who knows about them? They look too dissimilar from me . . . >>
I think we can safely lay aside the attempted reduction-to-racism argument as unworthy.
What I find interesting here is the ease with which the challenge of finding the many islands of function in vast configuration spaces is brushed aside by simply imposing an evolutionary materialist assumption. But the implied search challenge is far more directly evident as sound than is an ideologically constrained materialist reconstruction of a model past that we did not and cannot observe. So, just maybe, we have reason here to reject the notion that the relevant algorithms were written through incremental blind chance circumstances and variations, filtered by culling on differential reproductive success.
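To make the scale of that search challenge concrete, here is a minimal back-of-envelope sketch in Python. The 500-bit threshold and the round cosmic figures are the commonly cited numbers in these discussions, assumed here purely for illustration:

```python
# Rough scale of the configuration-space search challenge.
# All figures are round, commonly cited assumptions, not measurements.

bits = 500                   # a modest functional specification
configs = 2 ** bits          # ~3.3e150 distinct configurations

atoms = 10 ** 80             # ~ atoms in the observable universe
age_seconds = 10 ** 17       # ~ age of the cosmos, in seconds
ops_per_second = 10 ** 14    # ~ fast chemical-reaction rate, per atom

max_blind_trials = atoms * age_seconds * ops_per_second   # ~1e111

print(f"configurations of {bits} bits: {float(configs):.2e}")
print(f"maximum blind trials:          {float(max_blind_trials):.2e}")
print(f"fraction of space searchable:  {max_blind_trials / configs:.2e}")
# -> roughly 3e-40: blind sampling can touch only a vanishing fraction
```

On those assumed figures, the whole observable cosmos acting as a blind search engine could sample only about one part in 10^40 of even a 500-bit configuration space; that is the islands-of-function problem in miniature.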
My second problem is the easy equation of brains and minds that seems to pervade the frame of thought.
Notice: meat chauvinism.
Dismissive, loaded rhetoric, comfortable in the dominance of a priori evolutionary materialism.
But the relevant division (never mind the obscurant Neil deGrasse Tysons, whose dismissal of philosophy simply traps them blindly in ideologies) is not between meat and silicon etc., but between dust and spirit. Not for nothing does Genesis talk of God forming man’s body from the dust of the ground and then breathing in the celestial fire of spirit, leading to a complex unity, the embodied living soul/self.
However we choose to read the genre, the essential ontological point stands.
Golems notwithstanding.
I raise, therefore, the issue that it is self-evident that rocks have no dreams (so also no beliefs, and no knowledge, taking knowledge as warranted, credibly true belief); indeed, at most they compute blindly; they do not contemplate insightfully:
If you doubt the distinction I am here underscoring, simply ask: why am I AWARE that I am in a state of doubt?
Would yon rock be able to convince me it is sufficiently self-aware to doubt like this?
To ask the question and contemplate a conversation with a rock is to answer it.
Nor is it materially different if the rock has been processed into a PC:
The transformation into gates, signal-flow paths and memory cells, backed up by magnetic storage etc., does not resolve the problem: in our actual observation and experience of computation, a material substrate is intelligently designed but then blindly executes signal flows and operations according to its architecture and software.
For, Garbage IN, Garbage OUT.
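A trivial, purely illustrative sketch of that blind execution: the same little routine processes sensible readings and garbage readings with equal indifference, because it has no insight into what either means.

```python
# A machine blindly follows its program: the same averaging procedure is
# applied to sensible data and to garbage alike, with no insight into either.

def average(readings):
    """Mechanically sum and divide; the routine neither knows nor cares
    what the numbers mean."""
    return sum(readings) / len(readings)

good_data = [21.0, 21.5, 20.8]     # plausible room temperatures, deg C
garbage = [21.0, -9999.0, 20.8]    # a failed sensor's error code slipped in

print(average(good_data))  # ~ 21.1      -> reasonable in, reasonable out
print(average(garbage))    # ~ -3319.07  -> garbage in, garbage out
```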
And, as for the racism issue, it seems fairly evident, at minimum on a common-sense basis (hence the issue of cruelty to animals . . . ), that at least some other animals may have some degree of that sort of self-awareness, so the notion of drawing arbitrary lines between groups of humans is frankly silly.
Where does all of this point?
To the obvious: minds are not equal to brains, being radically different in core characteristics.
To be appeared to redly on contemplating a bright red ball on a table is not at all the same as to have a sensor-processor suite that may detect, say, 680 nm radiation and then trigger a process that outputs “red” on a screen.
Which is fairly obvious.
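For contrast with that contemplative awareness, here is a minimal sketch of the sensor-processor side (the thresholds and names are illustrative assumptions): a number comes in, a label goes out, and nothing in between is appeared to redly.

```python
# A sensor-processor pipeline: a wavelength number comes in, a label goes out.
# Nothing in the process experiences redness; it only maps numbers to strings.

def classify_wavelength(nm: float) -> str:
    """Map a wavelength in nanometres to a colour label (rough visible bands)."""
    if 620 <= nm <= 750:
        return "red"
    if 495 <= nm < 570:
        return "green"
    if 450 <= nm < 495:
        return "blue"
    return "other"

reading = 680.0                      # the detector reports ~680 nm radiation
print(classify_wavelength(reading))  # -> "red" appears on the screen
```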
Beyond such contemplation, of course, lies the associated conceptual dichotomising of the world into the red ball, A, and the rest of the world, NOT-A:
{ A | NOT-A }
Which deploys the first principles of right reason, not as pre-loaded algorithms but as insightful, contemplative reasoning on the import and correlates of distinct identity: namely, the first-principle laws of identity, non-contradiction and excluded middle, which then point onwards to sufficient reason, causality, etc.
In our actual experience, no unaided rock will even output that chain as a set of glyphs, much less conceive it or grasp its self-evidence.
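A programmed machine, by contrast, can certainly be made to output and even check that chain of glyphs; the following minimal sketch (purely illustrative) exhaustively confirms the three laws over the two truth values, yet the confirming is symbol-shuffling, not insight.

```python
# Mechanical check of the classical laws of thought over {True, False}.
# The machine verifies the glyphs; it does not conceive their self-evidence.

for A in (True, False):
    identity = (A == A)                    # law of identity: A is A
    non_contradiction = not (A and not A)  # not both A and NOT-A
    excluded_middle = A or not A           # either A or NOT-A
    print(A, identity, non_contradiction, excluded_middle)

# Every row prints "... True True True": the laws hold for both values,
# but nothing here contemplates the distinct identity { A | NOT-A }.
```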
Where also, we are individually, directly self-aware and have no good reason to skeptically dismiss that others like us are similarly self-aware. (FWIW, zombie arguments are in effect reductio ad absurdum arguments that underscore that we must distinguish minds and brains.)
Which brings to bear the force of Leibniz in his Monadology, on the telling analogy of the mill:
14. The passing condition which involves and represents a multiplicity in the unity, or in the simple substance, is nothing else than what is called perception. This should be carefully distinguished from apperception or consciousness . . . . 16. We, ourselves, experience a multiplicity in a simple substance, when we find that the most trifling thought of which we are conscious involves a variety in the object. Therefore all those who acknowledge that the soul is a simple substance ought to grant this multiplicity in the monad . . . . 17. It must be confessed, however, that perception, and that which depends upon it, are inexplicable by mechanical causes, that is to say, by figures and motions. Supposing that there were a machine whose structure produced thought, sensation, and perception, we could conceive of it as increased in size with the same proportions until one was able to enter into its interior, as he would into a mill. Now, on going into it he would find only pieces working upon one another, but never would he find anything to explain perception. It is accordingly in the simple substance, and not in the compound nor in a machine that the perception is to be sought. Furthermore, there is nothing besides perceptions and their changes to be found in the simple substance. And it is in these alone that all the internal activities of the simple substance can consist.
The logic that makes a mill a mill, and not just bits and pieces grinding against each other at random, is of course externally imposed by an intelligent agent. The mill reflects such functionally specific, complex organisation and associated information (FSCO/I) that, on experience and analysis alike, we have every good reason to assign it as a reliable sign of design. But it also reflects the problem of the rock: rocks have no dreams, no contemplations; at most they can be used to effect a computation, symbolically or as material forces and components moving against one another.
In short, the design inference divide is pointing to profound worldview level issues.
But, one last contributory point: how can we conceive of an intelligent, conscious hybrid that has mind and brain working together?
ANS: Engineer Derek Smith's two-tier controller cybernetic model:

Here, we see two tiers of control:
(i) an in-the-loop processor, and
(ii) a higher-order supervisory controller
. . . that directs the lower-order one, which serves as its input-output interface to the loop and the external world (a minimal code sketch follows below).
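Here is that minimal code sketch of the two-tier idea. The class names, the proportional inner loop and the goal-switching rule are my own illustrative assumptions, not a rendering of Smith's actual model:

```python
# Two-tier cybernetic control, sketched loosely after the Smith-style layout:
# a lower, in-the-loop processor handles moment-to-moment input/output, while
# a higher-order supervisory controller sets and revises its goals.

class LoopProcessor:
    """Tier (i): reads the sensed state and drives output toward a setpoint."""
    def __init__(self, gain: float = 0.5):
        self.gain = gain
        self.setpoint = 0.0

    def step(self, sensed: float) -> float:
        # simple proportional correction toward the current setpoint
        return self.gain * (self.setpoint - sensed)

class Supervisor:
    """Tier (ii): monitors the situation and redirects the loop processor."""
    def __init__(self, processor: LoopProcessor):
        self.processor = processor

    def review(self, sensed: float) -> None:
        # stand-in "policy": once the current goal is reached, issue a new one
        if abs(sensed - self.processor.setpoint) < 0.1:
            self.processor.setpoint += 10.0

# One run of the loop: the supervisor directs, the processor executes.
processor = LoopProcessor()
supervisor = Supervisor(processor)
state = 0.0
for _ in range(50):
    supervisor.review(state)        # higher-order direction
    state += processor.step(state)  # in-the-loop processing
print(round(state, 2))              # the state tracks whatever goal was last set
```

Notice that every contingency here is still front-loaded by the programmer; the supervisor "decides" only in the sense that the mill grinds.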
I have no doubt that we may be able to create an artificial supervisory controller, programmed, by intelligent designers, at a higher level to take contingent alternative paths in algorithms of various sorts; but such a robotic entity is again a blend of mechanical processing not materially different from the mill, plus perhaps some chance injections that randomise. What it is not, is what some find it ever so hard to acknowledge, never mind their own experience from moment to moment: an insightful, contemplative, aware, rational and responsible choosing self, even one that is morally governed by the force of OUGHT.
(Which points onwards to the need for a foundational entity . . . an IS . . . in our world capable of bearing the weight of a genuine OUGHT. For which, across time and many prolonged debates here at UD and over the centuries in the world of ideas, the only serious candidate is the inherently good Creator God, a necessary and maximally great being. Something a priori evolutionary materialists are loath to acknowledge, never mind how, for want of such a foundation, their system reduces, even trivially, to might and manipulation make right. And if you would doubt that we are governed by ought, consider the point that it is self-evidently wrong to kidnap, torture, rape and murder a child. So much so that, if we see such in progress, we will instinctively strain every sinew to save the child from the monster. OUGHT is real.)
How can such an interface between brain-body and mind be bridged?
This is a significant onward question, though the need for such an entity has to first be acknowledged.
There are many answers: the hylemorphic view points to a form to which the body etc. conforms, something beyond the merely material; the classic dualist conceives of a substance termed spirit that inhabits the body; and some point to subtle quantum influences that may intervene in neural networks.
I am not even sure that these are in mutual opposition; they may be along the lines of the old story of the six blind men groping around as they encounter an elephant.
But, in a sense, that is an onward question; the key one is that we must firmly grasp that rocks have no dreams: it takes a mind to contemplate.
And from that, a whole world opens up for study, analysis, research and just maybe, onwards, synthesis. END