In recent days, this has been a hotly debated topic here at UD, raised by RDFish (aka AI Guy).
His key contention is perhaps best summarised from his remarks at 422 in the first understand us thread:
we do know that the human brain is a fantastically complex mechanism. We also know that in our uniform and repeated experience, neither humans nor anything else can design anything without a functioning brain.
I have responded from 424 on, noting there for instance:
But we do know that the human brain is a fantastically complex mechanism. We also know [–> presumably, have warranted, credibly true beliefs] that in our uniform [–> what have you, like Hume, locked out ideologically here] and repeated experience, neither humans nor anything else can design anything without a functioning brain.
That is, it seems that the phrasing of the assertion is loaded with some controversial assumptions, rather than being a strictly empirical inference (which is what it is claimed to be).
By 678, I outlined a framework for how we use inductive logic in science to address entities, phenomena or events that were not or cannot be directly observed (let me clean up a symbol):
[T]here is a problem with reasoning about how inductive reasoning extends to reconstructing the remote past. Let’s try again:
a: The actual past A leaves traces t, which we observe.
b: We observe a cause C that produces consequence s, which is materially similar to t.
c: We identify that on investigation, s reliably results from C.
d: C is the only empirically warranted source of s.
_____________________________e: C is the best explanation for t.
By 762, this was specifically applied to the design inference, by using substitution instances:
a: The actual past (or some other unobserved event, entity or phenomenon . . . ) A leaves traces t [= FSCO/I where we did not directly observe the causal process, say in the DNA of the cell], which we observe.
b: We observe a cause C [= design, or purposefully directed contingency] that produces consequence s [= directly observed cases of creation of FSCO/I, say digital code in software, etc] which is materially similar to t [= the DNA of the cell]
c: We identify that on empirical investigation and repeated observation, s [= FSCO/I] reliably results from C [= design, or purposefully directed contingency].
d: C [= design, or purposefully directed contingency] is ALSO the only empirically warranted source of s [= FSCO/I] .
_____________________________e: C [= design, or purposefully directed contingency] is the best explanation for t [= FSCO/I where we did not directly observe the causal process, say in the DNA of the cell], viewed here as an instance of s [= FSCO/I].
This should serve to show how the design inference works as an observationally based, inductive, scientific exercise. That is, an actually observed cause that is both capable of and characteristic of an effect can reasonably be inferred to be acting when we see that effect.
So, by 840, I summed up the case on mind and matter, using Nagel as a spring-board:
Underlying much of the above is the basic notion that we are merely bodies in motion with an organ that carries out computation, the brain. We are colloidal intelligences, and in this context RDF/AIG asserts confidently that our universal and repeated experience of the causing of FSCO/I embeds that embodiment.
To see what is fundamentally flawed about such a view, as I have pointed out above but again need to summarise, I think we have to start from the issue of mindedness, and from our actual experience of mindedness. For it simply does not fit the materialist model, which lacks an empirically warranted causal dynamic demonstrated to be able to do the job — ironically for reasons connected to the inductive evidence rooted grounds of the design inference. (No wonder RDF/AIG is so eager to be rid of that inconvenient induction.)
The mind, in this view, is the software of the brain which, in effect, by sufficiently sophisticated looping has become reflexive and self-aware. This draws on the institutional dominance of the a priori evolutionary materialist paradigm in our day, but that means as well that it collapses into the inescapable self-referential incoherence of that view. It also fails to meet the tests of factual adequacy, coherence and explanatory power.
Why do I say such?
First, let us observe a sobering point made ever so long ago by Haldane:
“It seems to me immensely unlikely that mind is a mere by-product of matter. For if my mental processes are determined wholly by the motions of atoms in my brain I have no reason to suppose that my beliefs are true. They may be sound chemically, but that does not make them sound logically. And hence I have no reason for supposing my brain to be composed of atoms. In order to escape from this necessity of sawing away the branch on which I am sitting, so to speak, I am compelled to believe that mind is not wholly conditioned by matter.” [“When I am dead,” in Possible Worlds: And Other Essays (1927), Chatto and Windus: London, 1932 reprint, p. 209.]
In essence, without responsible freedom (the very opposite of what would be implied by mechanical processing and chance) there is no basis for rationality, responsibility and capacity to think beyond the determination of the accidents of our programming. Not to mention, there is no empirically based demonstration of the capability of blind chance and mechanical necessity to write the required complex software through incremental chance variations and differential reproductive success. All that is simply assumed, explicitly or implicitly, in a frame of thought controlled by evolutionary materialism as an a priori. So, we have a problem of lack of demonstrated causal adequacy right at the outset. (Not that that will be more than a speed-bump for those determined to proceed under the materialist reigning orthodoxy. But we should note that the vera causa principle has been violated: we do not have empirically demonstrated causal adequacy here. By contrast, such brain software as is doubtless there is blatantly chock full of FSCO/I, and the hardware involved is likewise chock full of the same. The only empirically warranted cause adequate to create such — whether or not RDF likes to bury it in irrelevancies — is design. We must not forget that inconvenient fact. [And we will in due course again speak to the issue as to whether empirical evidence warrants the conclusion that designing minds must be based on or require brains.])
A good second point is a clip from Malcolm Nicholson’s review of the eminent philosopher Nagel’s recent Mind and Cosmos:
If we’re to believe [materialism dominated] science, we’re made of organs and cells. These cells are made up of organic matter. Organic matter is made up of chemicals. This goes all the way down to strange entities like quarks and Higgs bosons. We’re also conscious, thinking things. You’re reading these words and making sense of them. We have the capacity to reason abstractly and grapple with various desires and values. It is the fact that we’re conscious and rational that led us to believe in things like Higgs bosons in the first place.
But what if [materialism-dominated] science is fundamentally incapable of explaining our own existence as thinking things? What if it proves impossible to fit human beings neatly into the world of subatomic particles and laws of motion that [materialism-dominated] science describes? In Mind and Cosmos (Oxford University Press), the prominent philosopher Thomas Nagel’s latest book, he argues that science alone will never be able to explain a reality that includes human beings. What is needed is a new way of looking at and explaining reality; one which makes mind and value as fundamental as atoms and evolution . . . .
[I]t really does feel as if there is something “it-is-like” to be conscious. Besides their strange account of consciousness, Nagel’s opponents also face the classic problem of how something physical like a brain can produce something like a mind. Take perception: photons bounce off objects and hit the eye, cones and rods translate this into a chemical reaction, this reaction moves into the neurons in our brain, some more reactions take place and then…you see something. Everything up until seeing something is subject to scientific laws, but, somewhere between neurons and experience, scientific explanation ends. There is no fact of the matter about how you see a chair as opposed to how I see it, or a colour-blind person sees it. The same goes for desires or emotions. We can look at all the pieces leading up to experience under a microscope, but there’s no way to look at your experience itself or subject it to proper scientific scrutiny.
Of course philosophers sympathetic to [materialism-dominated] science have many ways to make this seem like a non-problem. But in the end Nagel argues that simply “the mind-body problem is difficult enough that we should be suspicious of attempts to solve it with the concepts and methods developed to account for very different kinds of things.”
In short, it is not just a bunch of dismissible IDiots off in some blog somewhere; here is a serious issue, one that cannot be so easily brushed aside and answered with the usual promissory notes on the inevitable progress of materialism-dominated science.
It is worth noting also that Nagel rests his case on the issue of sufficiency, i.e. if something A is, can we not seek and expect a reasonable and adequate answer as to why it is?
That is a very subtly powerful self-evident first principle of right reasoning indeed [cf. here on, again] and one that many objectors to say cosmological design on fine tuning would be wise to pay heed to.
Indeed, down that road lies the issue of contingency vs necessity of being, linked to the power of cause.
With the astonishing result that necessary beings are possible — start with the truth in the expression: 2 + 3 = 5 — and, by virtue of not depending on on/off enabling causal factors, they are immaterial [matter, post E = m*c^2 etc, is blatantly contingent . . . ] and without beginning or end; they could not not-exist, on pain of absurdity. (If you doubt this, try asking yourself when 2 + 3 = 5 began to be true, whether it can cease from being so, and what would follow from denying it to be true. [Brace for the shock of what lurked behind your first lessons in Arithmetic!])
And, we live in a cosmos that is — post big bang, and post E = m*c^2 etc — credibly contingent, so we are looking at a deep causal root of the cosmos that is a necessary being.
Multiply by fine tuning [another significant little link with onward materials that has been studiously ignored above . . . ] and even through a multiverse speculation, we are looking at purpose, mind, immateriality, being without beginning or end, with knowledge, skill and power that are manifest in a fine tuned cosmos set up to facilitate C-chemistry aqueous medium cell based life.
{Let me add a summary diagram (image not reproduced here).}
That is — regardless of RDF’s confident manner, drumbeat declarations — it is by no means a universal, experience based conclusion that mind requires or is inevitably based on brains or some equivalent material substrate. (Yet another matter RDF seems to have studiously ignored.)
Nor are we finished with that review:
In addition to all the problems surrounding consciousness, Nagel argues that things like the laws of mathematics and moral values are real (as real, that is, as cars and cats and chairs) and that they present even more problems for science. It is harder to explain these chapters largely because they followed less travelled paths of inquiry. Often Nagel’s argument rests on the assumption that it is absurd to deny the objective reality, or mind-independence, of certain basic moral values (that extreme and deliberate cruelty to children is wrong, for instance) or the laws of logic. Whether this is convincing or not, depends on what you think is absurd and what is explainable. Regardless, this gives a sense of the framework of Nagel’s argument and his general approach.
Of course, the root premises here are not only true but self-evident: one denies them only at peril of absurdity.
A strictly materialistic world — whether explicit or implicit lurking in hidden assumptions and premises — cannot ground morals [there is no matter-energy, space-time IS that can bear the weight of OUGHT, only an inherently good Creator God can do that . . . ]. Similarly, such a world runs into a basic problem with the credibility of mind, as already seen.
Why, then, should we even think this a serious option, given the inability to match reality, the self-referential incoherence that has come out, and the want of empirically grounded explanatory and causal power to account for the phenomena we know from the inside out: we are conscious, self-aware, minded, reasoning, knowing, imagining, creative, designing creatures who find ourselves inescapably morally governed?
Well, when, as we may read in Acts 17, Paul started on Mars Hill c. AD 50 by exposing the fatally cracked root of the classical pagan and philosophical view [its publicly admitted and inescapable ignorance of the very root of being, the very first and most vital point of knowledge . . . ], he was literally laughed out of court.
But, the verdict of history is in: the apostle built the future.
It is time to recognise the fatal cracks in the evolutionary materialist reigning orthodoxy and its fellow travellers, whether or not they are duly dressed up in lab coats. Even, fancy ones . . .
It seems the time has come for fresh thinking. END
ADDENDUM, Oct 26th: The following, by Dr Torley (from comment 26), is so material to the issue that I add it to the original post. It should be considered as a component of the argument in the main:
_________
>>My own take on the question is as follows:
(a) to say that thinking requires a brain is too narrow, for two reasons:
(i) since thinking is the name of an activity, it’s a functional term, and from a thing’s function alone we cannot deduce its structure;
(ii) the argument would prove too much, as it would imply that Martians (should we ever find any) must also have brains, which strikes me as a dogmatic assertion;
(b) in any case, the term “brain” has not been satisfactorily defined;
(c) even a weaker version of the argument, which claims merely that thinking requires an organized structure existing in space-time, strikes me as dubious, as we can easily conceive of the possibility that aliens in the multiverse (who are outside space-time) might have created our universe;
(d) however, the “bedrock claim” that thinking requires an entity to have some kind of organized structure, with distinct parts, is a much more powerful claim, as the information created by a Designer is irreducibly complex, and it seems difficult to conceive of how such an absolutely simple entity could create something irreducibly complex, or how such an entity could create, store and process various kinds of complex information in the absence of parts (although one might imagine that it could store such information off-line);
(e) however, all the foregoing argument shows that the Designer is complex: what it fails to show is that the Designer exists in space-time, or has a body that can be decomposed into separate physical parts;
(f) for all we know, the Designer might possess a different kind of complexity, which I call integrated complexity, such that the existence of any one part logically implies the existence of all the other parts;
(g) since the parts of an integrated complex being would be inseparable, there would be no need to explain what holds them together, and thus no need to say that anyone designed them;
_______________________________________
(h) thus even if one rejected the classical theist view that God is absolutely simple, one could still deduce the existence of a Being possessing integrated complexity, and consistently maintain that integrated complexity is a sufficient explanation for the irreducible complexity we find in Nature;
(i) in my opinion, it would be a mistake for us to try to resolve the question of whether the Designer has parts before making the design inference, as that’s a separate question entirely. >>
__________
The concept of integrated, inseparable complexity is particularly significant.
____________
ADDENDUM 2: A short note on Bayes’ Theorem clipped from my briefing note, as VJT is using Bayesian reasoning explicitly below:
We often wish to find evidence to support a theory, where it is usually easier to show that the theory [if it were for the moment assumed true] would make the observed evidence “likely” to be so [on whatever scale of weighting subjective/epistemological “probabilities” we may wish etc . . .].
So in effect we have to move from p[E|T] to p[T|E], i.e. from “probability of evidence given theory” to “probability of theory given evidence,” which latter is what we actually seek. (Notice also how easily the former expression p[E|T] “invites” the common objection that design thinkers are “improperly” assuming an agent at work ahead of looking at the evidence, to infer to design. Not so, but why takes a little explanation.)
Let us therefore take a quick look at the algebra of Bayesian probability revision and its inference to a measure of relative support of competing hypotheses provided by evidence:
a] First, look at p[A|B] as the ratio, (fraction of the time we would expect/observe A AND B to jointly occur)/(fraction of the time B occurs in the POPULATION).
–> That is, for ease of understanding in this discussion, I am simply using the easiest interpretation of probabilities to follow, the frequentist view.
b] Thus, per definition given at a] above:
p[A|B] = p[A AND B]/p[B],
or, p[A AND B] = p[A|B] * p[B]
c] By “symmetry,” we see that also:
p[B AND A] = p[B|A] * p[A],
where the two joint probabilities are plainly the same, so:
p[A|B] * p[B] = p[B|A] * p[A],
which rearranges to . . .
d] Bayes’ Theorem, classic form:
p[A|B] = (p[B|A] * p[A]) / p[B]
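The classic form can be checked concretely. Here is a minimal sketch in Python; the die and the two events are my own illustrative choices (not from the briefing note), with probabilities computed in the frequentist way defined at a] above:

```python
# Frequentist check of Bayes' theorem, p[A|B] = p[B|A] * p[A] / p[B].
# Events are illustrative: A = "roll is even", B = "roll is greater than 2".
from fractions import Fraction

population = [1, 2, 3, 4, 5, 6]  # equally likely outcomes of a fair die
A = {2, 4, 6}                    # even rolls
B = {3, 4, 5, 6}                 # rolls greater than 2

def p(event):
    # fraction of the time the event occurs in the population (step a])
    return Fraction(sum(1 for x in population if x in event), len(population))

p_A_and_B = p(A & B)             # joint probability
p_A_given_B = p_A_and_B / p(B)   # definition at b]
p_B_given_A = p_A_and_B / p(A)   # "symmetry" step at c]

# Bayes' theorem, classic form (step d]):
assert p_A_given_B == p_B_given_A * p(A) / p(B)
print(p_A_given_B)  # 1/2
```

Using exact fractions rather than floats keeps the identity at d] exact, so the assertion is a strict equality rather than an approximate one.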
e] Substituting, E = A, T = B, E being evidence and T theory:
p[E|T] = (p[T|E] * p[E])/ p[T],
p[T|E] — the probability of the theory (i.e. hypothesis or model) given the evidence seen — is here, by an initial simple “definition,” turned into L[E|T] by defining L[E|T] = p[T|E]:
L[E|T] is (by definition) the likelihood of theory T being “responsible” for what we observe, given observed evidence E [NB: note the “reversal” of how the “|” is being read]; at least, up to some constant. (Cf. here, here, here, here and here for a helpfully clear and relatively simple intro. A key point is that likelihoods allow us to estimate the most likely value of variable parameters that create a spectrum of alternative probability distributions that could account for the evidence: i.e. to estimate the maximum likelihood values of the parameters; in effect by using the calculus to find the turning point of the resulting curve. But, that in turn implies that we have an “agreed” model and underlying context for such variable probabilities.)
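The maximum-likelihood idea mentioned in the parenthesis can be illustrated with a toy case. The numbers below are my own hypothetical example (a biased coin, binomial model), not part of the briefing note, and a grid search stands in for finding the turning point by calculus:

```python
# Toy maximum-likelihood estimation: given evidence E = 7 heads in 10
# tosses, find the coin bias q that makes E most likely.
import math

heads, tosses = 7, 10

def log_likelihood(q):
    # log of the binomial probability of the evidence, given bias q
    return (math.log(math.comb(tosses, heads))
            + heads * math.log(q)
            + (tosses - heads) * math.log(1 - q))

# A grid search over q in (0, 1) stands in for the calculus turning point.
grid = [i / 1000 for i in range(1, 1000)]
q_hat = max(grid, key=log_likelihood)
print(q_hat)  # 0.7, agreeing with the calculus answer heads/tosses
```

Note the point made in the text: this only works because a common underlying model (here, the binomial) has been agreed in advance; the data alone do not supply it.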
Thus, we come to a deeper challenge: where do we get agreed models/values of p[E] and p[T] from?
This is a hard problem with no objective consensus answers, in too many cases. (In short, if there is no handy commonly accepted underlying model, we may be looking at a political dust-up in the relevant institutions.)
f] This leads to the relevance of the point that we may define a certain ratio,
LAMBDA = L[E|h2]/L[E|h1].
This ratio is a measure of the degree to which the evidence supports one or the other of competing hyps h2 and h1. (That is, it is a measure of relative rather than absolute support. Onward, as just noted, under certain circumstances we may look for hyps that make the data observed “most likely” through estimating the maximum of the likelihood function — or more likely its logarithm — across relevant variable parameters in the relevant sets of hypotheses. But we don’t need all that for this case.)
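For a concrete feel, here is a hypothetical instance with invented numbers: the evidence E is 9 heads in 10 tosses, h1 says the coin is fair (q = 0.5), and h2 says it is heavily biased (q = 0.9). When the priors on h1 and h2 are equal, the ratio of posteriors reduces to the ratio of the conditional probabilities of the evidence:

```python
# Illustrative likelihood-ratio computation under equal priors,
# where LAMBDA reduces to p[E|h2] / p[E|h1]. Numbers are invented.
from math import comb

def p_E_given(q, heads=9, tosses=10):
    # binomial probability of seeing `heads` heads in `tosses` tosses
    return comb(tosses, heads) * q**heads * (1 - q)**(tosses - heads)

LAMBDA = p_E_given(0.9) / p_E_given(0.5)
print(round(LAMBDA, 1))  # about 39.7: E supports h2 over h1 roughly 40-fold
```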
g] Now, by substitution A –> E, B –> T1 or T2 as relevant:
p[E|T1] = p[T1|E]* p[E]/p[T1],
and
p[E|T2] = p[T2|E]* p[E]/p[T2];
so also, the ratio:
p[E|T2] / p[E|T1]
= {p[T2|E] * p[E]/p[T2]} / {p[T1|E] * p[E]/p[T1]}
= {p[T2|E]/p[T2]} / {p[T1|E]/p[T1]} = {p[T2|E]/p[T1|E]} * {p[T1]/p[T2]}
h] Thus, rearranging:
p[T2|E]/p[T1|E] = {p[E|T2]/p[E|T1]} * {p[T2]/p[T1]}
i] So, substituting L[E|Tx] = p[Tx|E]:
L[E|T2]/L[E|T1] = LAMBDA = {p[E|T2]/p[E|T1]} * {p[T2]/p[T1]}
Thus, the lambda measure of the degree to which the evidence supports one or the other of competing hyps T2 and T1, is a ratio of the conditional probabilities of the evidence given the theories (which of course invites the “assuming the theory” objection, as already noted), times the ratio of the probabilities of the theories being so. [In short if we have relevant information we can move from probabilities of evidence given theories to in effect relative probabilities of theories given evidence, and in light of an agreed underlying model.]
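A hypothetical worked instance of step i] (the figures are invented, purely for illustration) shows how the prior ratio can dominate: evidence that fits T2 four times better can still leave T1 ahead if T1 started out far more probable.

```python
# Worked instance of step i], with invented numbers.
p_E_given_T2, p_E_given_T1 = 0.80, 0.20  # E fits T2 four times better...
p_T2, p_T1 = 0.10, 0.90                  # ...but T1 is far likelier a priori

LAMBDA = (p_E_given_T2 / p_E_given_T1) * (p_T2 / p_T1)
print(round(LAMBDA, 3))  # 0.444: T1 still edges out T2 on the posteriors
```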
Of course, therein lieth the rub.