Uncommon Descent Serving The Intelligent Design Community
Category

ID Foundations

Foundational concepts and evidence for inferring design in light of empirically tested, reliable, observable signs

They said it: “atheism is simply the absence of belief that any deities exist” — a fatal worldview error of modern evolutionary materialist atheism

Prof. Dawkins of the UK, a leading evolutionary materialist and atheist

It is an open secret that a major motivation for the commonly encountered, too often angry rejection of the design inference is a prior commitment to Lewontinian evolutionary materialistic atheism. This is a common thread that unites a Sagan, a Lewontin, many members of science institutions and university faculties, and of course many leading anti-design advocates such as those associated with the US-based National Center for Science Education [NCSE], as well as leading “science” [–> atheism] blogs, Internet forums and the like.

Such atheists also often imagine that they have cornered the market on scientific rationality, common sense and intelligence, to the point where Professor Dawkins of the UK has proposed a new name for atheists: “brights.”

By contrast, he and many others of like ilk view those who object to such views as “ignorant, stupid, insane or . . . wicked.” (Perhaps, that is why one of the atheistical objectors to UD feels free to publicly and falsely accuse me of being a demented child abuser and serial rapist. He clearly cannot see how unhinged, unreasonable, irrational, uncouth, vulgar and rage-blinded his outrageous behaviour is.) Read More ›

ID Foundations, 9: Cause, necessity/contingency vs. sufficiency/determinism, the observed (fine tuned . . . ) cosmos and design theory

"Turtles, all the way down . . . " vs a root cause

In recent exchanges, design objector RH7 has objected to the concept of cause, regarding it as an outmoded, deterministic and classical (in the bad sense) view.

Since this is now clearly yet another line of objection to the design inference on detection of credible causal factors, we need to add a response to the cluster of ID Foundations posts here at UD.

A useful way to do so is to highlight an ongoing exchange in the Universe Portal thread:

JDFL: 20th century physics has called into question determinism. But determinism and causality are not necessarily the same thing. we may not be able to determine or predict an qm outcome but we can identify the set of causal factors. [T]he unity of the set of causal factors is the cause.

KF: JDFL: You are right, once we see the significance of necessary causal factors, we decouple cause from determinism.

RH7: Cites JDFL & responds:

we may not be able to determine or predict an qm outcome but we can identify the set of causal factors. the unity of the set of causal factors is the cause.

Well that’s the problem. Not only can we not determine the outcome, we can not definitively know the cause. As an alternative, Bohm’s quantum mechanics is deterministic and non-local – though I’m not sure you would find his idea of a universal wave function any better.

This sets up my own response: Read More ›

ID Foundations, 8: Switcheroo — the error of asserting without adequate observational evidence that the design of life (from OOL on) is achievable by small, chance-driven, success-reinforced increments of complexity leading to the iconic tree of life

Algorithmic hill-climbing first requires a hill . . .

[UD ID Founds Series, cf. Bartlett on IC]

Ever since Dawkins’ Mt Improbable analogy, a common argument of design objectors has been that such complex designs as we see in life forms can “easily” be achieved incrementally, by steps within plausible reach of chance processes, that are then stamped in by success, i.e. by hill-climbing. Success, measured by reproductive advantage and what used to be called “survival of the fittest.”

[Added, Oct 15, given a distractive strawmannisation problem in the thread of discussion:  NB: The wide context in view, plainly,  is the Dawkins Mt Improbable type hill climbing, which is broader than but related to particular algorithms that bear that label.]

Weasel’s “cumulative selection” algorithm (c. 1986/7) was the classic — and deeply flawed, even outright misleading — illustration of Dawkinsian evolutionary hill-climbing.
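For readers unfamiliar with it, the logic of Weasel-style cumulative selection can be shown in a short sketch. This is a hedged illustration, not Dawkins' original code; the population size, mutation rate and function names here are assumptions chosen for clarity. The key feature, relevant to the critique above, is that "fitness" is measured as closeness to a pre-specified target phrase:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Fitness = number of positions matching the pre-specified target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def weasel(pop_size=100, mutation_rate=0.05, seed=0):
    """Cumulative selection toward a known target phrase: each generation
    breeds mutant copies of the parent and keeps the one closest to TARGET."""
    rng = random.Random(seed)
    parent = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while score(parent) < len(TARGET):
        offspring = [
            "".join(rng.choice(ALPHABET) if rng.random() < mutation_rate else c
                    for c in parent)
            for _ in range(pop_size)
        ]
        offspring.append(parent)  # elitism: never lose ground
        parent = max(offspring, key=score)
        generations += 1
    return generations
```

Note that the target sentence is written into the fitness function itself: the "hill" the algorithm climbs is supplied in advance by the programmer.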

To stir fresh thought and break out of the all too common stale and predictable exchanges over such algorithms, let’s put on the table a key remark by Stanley and Lehman, in promoting their particular spin on evolutionary algorithms, Novelty Search:

. . . evolutionary search is usually driven by measuring how close the current candidate solution is to the objective. [ –> Metrics include ratio, interval, ordinal and nominal scales; this being at least ordinal] That measure then determines whether the candidate is rewarded (i.e. whether it will have offspring) or discarded. [ –> i.e. if further moderate variation does not improve, you have now reached the local peak after hill-climbing . . . ] In contrast, novelty search [which they propose] never measures progress at all. Rather, it simply rewards those individuals that are different.

Instead of aiming for the objective, novelty search looks for novelty; surprisingly, sometimes not looking for the goal in this way leads to finding the goal [–> notice, an admission of goal- directedness . . . ] more quickly and consistently. While it may sound strange, in some problems ignoring the goal outperforms looking for it. The reason for this phenomenon is that sometimes the intermediate steps to the goal do not resemble the goal itself. John Stuart Mill termed this source of confusion the “like-causes-like” fallacy. In such situations, rewarding resemblance to the goal does not respect the intermediate steps that lead to the goal, often causing search to fail . . . .

Although it is effective for solving some deceptive problems, novelty search is not just another approach to solving problems. A more general inspiration for novelty search is to create a better abstraction of how natural evolution discovers complexity. An ambitious goal of such research is to find an algorithm that can create an “explosion” of interesting complexity reminiscent of that found in natural evolution.

While we often assume that complexity growth in natural evolution is mostly a consequence of selection pressure from adaptive competition (i.e. the pressure for an organism to be better than its peers), biologists have shown that sometimes selection pressure can in fact inhibit innovation in evolution. Perhaps complexity in nature is not the result of optimizing fitness, but instead a byproduct of evolution’s drive to discover novel ways of life.

While their own spin is not without its particular problems in promoting their own school of thought — there is an unquestioned matter of factness about evolution doing this that is but little warranted by actual observed empirical facts at body-plan origins level, and it is by no means a given that “evolution” will reward mere novelty —  some pretty serious admissions against interest are made.
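To make the contrast concrete, here is a minimal, hedged sketch of novelty-driven selection in the spirit of Stanley and Lehman's description: individuals are ranked not by closeness to any objective, but by how different their behaviour is from behaviours already archived. The one-dimensional behaviour space, the parameters and the function names are illustrative assumptions, not their actual implementation:

```python
import random

def novelty(behavior, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors recorded so far."""
    if not archive:
        return float("inf")  # everything is novel at the start
    dists = sorted(abs(behavior - b) for b in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

def novelty_search(behave, generations=50, pop_size=20, seed=0):
    """Select parents by how *different* their behavior is from the archive,
    never by closeness to any objective."""
    rng = random.Random(seed)
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda x: novelty(behave(x), archive),
                        reverse=True)
        elites = ranked[:pop_size // 2]      # the most novel half survives
        archive.extend(behave(x) for x in elites)
        # each elite leaves two mutated offspring
        population = [x + rng.gauss(0.0, 0.1) for x in elites for _ in range(2)]
    return archive
```

Even here, notice, the selection criterion (reward difference) is chosen and coded by a designer.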

Read More ›

Michael Shermer of Skeptic magazine vs. “turtles all the way down . . .”

UD’s resident journalist, Mrs Denyse O’Leary, notes how Mr Michael Shermer of Skeptic Magazine and Scientific American (etc.) has written on his new book, The Believing Brain: Why Science Is the Only Way Out of Belief-Dependent Realism:

. . . skepticism is a sine qua non of science, the only escape we have from the belief-dependent realism trap created by our believing brains.

While critical awareness — as opposed to selective hyperskepticism — is indeed important for serious thought in science and other areas of life, Mr Shermer hereby reveals an unfortunate ignorance of basic epistemology, the logic of warrant and the way that faith and reason are inextricably intertwined in the roots of our worldviews.

To put it simply, he has a “turtles all the way down” problem:

"Turtles, all the way down . . . "

Read More ›

An arched beaver dam (with a second one downstream)

Beavers as designers (are they intelligent?)

A Beaver Dam

Beaver dams are amazing objects in our natural environment, being shaped from piles of felled trees and stones arranged to block streams and create ponds that protect these busy rodents [easily up to 50 – 60 lbs, over 100 lbs on record] from predators, allow them to build their lodges, and provide watery highways for them to move about as they do their business. The dams range up to nearly 3,000 feet [a bit under 1 km] in length, and up to 7 ft [over 2 m] at the base and 14 ft [about 4.3 m] in height. Consequently, the beavers are keystone creatures, affecting the water table, providing handy bridges used by many animals, reducing the tendency of streams to flood, providing refuges for trout and young salmon, and eventually creating characteristic meadows as the ponds silt up. Read More ›

An igniting match (a contingent being)

“Who designed the designer” vs. a burning matchstick


In the current “who designed the designer” rebuttal thread, NR has posted an objection that inadvertently exposes the core errors of this objection by Dawkins. While I responded in that thread, I think the issue is sufficiently material to also be posted in its own right.

So, pardon the following:

______________

NR, 12: >> The question “Who designed the designer” is intended as a rhetorical question. An actual answer is not expected.

The purpose of raising that question is to show that the argument “It is complex, therefore it must have been designed” will lead to an infinite regression.

I don’t see that your “demolition” has done anything to avoid that infinite regression problem.>>

KF, 27 – 28 as adjusted: >> NR thanks for your inadvertent rhetorical favour. Read More ›

Alfred Russel Wallace (1869)

ID Foundations, 7: suppressed history — Alfred Russel Wallace’s Intelligent Evolution as a precursor to modern design theory

One of the saddest facets of the modern, unfortunately poisonously polarised debates over origins science is the evident suppression (yes, suppressed: at top level, people are responsible to give a true, fair and balanced view of an important matter, based on the due diligence of thorough and balanced research) of relevant history, such as Alfred Russel Wallace's Intelligent Evolution. Read More ›
The water cycle: key to a viable terrestrial planet

ID Foundations, 6: Introducing* the cosmological design inference

ID 101/Foundations, 6: Introducing and explaining the cosmological design inference on fine tuning, with onward reference links (including on Stenger's attempted rebuttals) Read More ›

She said it: Nancy Pearcey’s thoughtful article on how “Christianity is a Science-starter, not a Science-stopper”

One of the most common objections to design thought is the idea that it is about the improper injection of the alien supernatural into the world of science. (That is itself based on a strawman misrepresentation of design thought, as was addressed here a few days ago.)

However, there is an underlying root, a common distortion of the origins of modern science, which Nancy Pearcey rebutted in a 2005 sleeper article as headlined, and which deserves a UD post of its own.

Let’s clip the article:

Read More ›

He said it: Prof Lewontin’s strawman “justification” for imposing a priori materialist censorship on origins science

Yesterday, in the P Z Myers quote-mining and distortion thread, I happened to cite Lewontin’s infamous 1997 remark in his NYRB article, “Billions and Billions of Demons,” on a priori imposition of materialist censorship on origins science, which reads in the crucial part:

It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.

To my astonishment, I was promptly accused of quote-mining and even academic malpractice, because I omitted the following two sentences, which — strange as it may seem — some evidently view as justifying the above censoring imposition:

The eminent Kant scholar Lewis Beck used to say that anyone who could believe in God could believe in anything. To appeal to an omnipotent deity is to allow that at any moment the regularities of nature may be ruptured, that miracles may happen.

To my mind, instead, these last two sentences are such a sad reflection of bias and ignorance, that their omission is an act of charity to a distinguished professor. Read More ›


ID Foundations, 5: Functionally Specific, Complex Organization and associated Information as empirically testable (and recognised) signs of design

(ID Foundations series so far: 1, 2, 3, 4 )

In a current UD discussion thread, frequent commenter MarkF (who supports evolutionary materialism) has made the following general objection to the inference to design:

. . . my claim is not that ID is false. Just that is not falsifiable. On the other hand claims about specific designer(s)with known powers and motives are falsifiable and, in all cases that I know of, clearly false.

The objection is actually trivially correctable.

Not least, as we — including MF — are designers who routinely leave behind empirically testable, reliable signs of design, such as posts on the UD blog in English that (thanks to the infinite monkeys “theorem” as discussed in post no. 4 in this series) are well beyond the credible reach of undirected chance and necessity on the gamut of the observed cosmos. For instance, the excerpt just above uses 210 7-bit ASCII characters, which specifies a configuration space of 128^210 ~ 3.26 * 10^442 possible bit combinations. The whole observable universe, acting as a search engine working at the fastest possible physical rate [10^45 states/s, for 10^80 atoms, for 10^25 s: 10^150 possible states], could not scan as much as 1 in 10^290 of that.

That is, the fraction of that space which any conceivable chance and necessity based search on the scope of our cosmos could sample would very comfortably round down to a practical zero. But MF, as an intelligent and designing commenter, probably tossed the above sentences off in a minute or two.
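The arithmetic behind these figures can be checked directly. The short sketch below simply reproduces the numbers used above (210 seven-bit characters; 10^80 atoms at 10^45 states/s for 10^25 s); the variable names are of course illustrative:

```python
from math import log10

CHARS = 210                      # 7-bit ASCII characters in the excerpt
config_space = 128 ** CHARS      # number of possible character strings

# log10(128^210) = 210 * 7 * log10(2) ~ 442.5, i.e. about 3.26 * 10^442
exponent = CHARS * 7 * log10(2)

# Upper bound on states the observed cosmos could sample:
# 10^80 atoms * 10^45 states/s * 10^25 s = 10^150 states
universe_exponent = 80 + 45 + 25

# Fraction of the space that could be scanned: about 1 in 10^292,
# comfortably below the 1 in 10^290 cited above
fraction_exponent = universe_exponent - exponent
```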

That is why such functionally specific, complex organisation and associated information [FSCO/I] are credible, empirically testable and reliable signs of intelligent design.

But don’t take my word for it.

A second UD commenter, Acipenser (= s[t]urgeon), recently challenged BA 77 and this poster as follows, in the signs of scientism thread:

195: What does the Glasgow Coma scale measure? The mind or the body?

206: kairosfocus: What does the Glasgow Coma scale measure? Mind or Body?

This is a scale for measuring consciousness which, as the Wiki page notes, is “used by first aid, EMS, and doctors as being applicable to all acute medical and trauma patients.” That is, the scale tests for consciousness. And, as the verbal responsiveness test especially shows, the test is an example of where the inference to design is routinely used in an applied science context, often in literal life-or-death situations:

Fig. A: EMTs at work. Such paraprofessional medical personnel routinely test for the consciousness of patients by rating their capacities on eye, verbal and motor responsiveness, using the Glasgow Coma Scale, which is based on an inference to design as a characteristic behaviour of conscious intelligences. (Source: Wiki.)

In short, the Glasgow Coma Scale [GCS] is actually a case in point of the reliability and scientific credibility of the inference to design; even in life and death situations.
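For concreteness, the GCS scoring rule is a simple sum of three graded responses (eye opening 1–4, verbal 1–5, motor 1–6, for a total of 3 to 15). The sketch below illustrates that rule only; real clinical use involves qualifiers (e.g. for intubated patients) that it omits, and the function name is my own:

```python
def glasgow_coma_score(eye, verbal, motor):
    """Sum the three graded responses of the Glasgow Coma Scale.

    eye: 1-4, verbal: 1-5, motor: 1-6; the total runs from
    3 (deeply unresponsive) to 15 (fully alert)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component score out of range")
    return eye + verbal + motor
```

So a fully responsive patient (best response on all three components) scores 15, while the floor of 3 marks deep unresponsiveness.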

Why do I say that?

Read More ›


ID Foundations, 4: Specified Complexity and linked Functional Organisation as signs of design

(NB: ID Foundations Series, so far: 1, 2, 3.)

In a recent comment on the ID Foundations 3 discussion thread, occasional UD commenter LastYearOn [henceforth LYO], remarked:

Behe is implicitly assuming that natural processes cannot explain human technology. However natural processes do explain technology, by explaining humans. We may think of computers as somehow distinct from objects that formed from the direct result of natural process. And in important ways they are. But that doesn’t mean that they aren’t ultimately explainable naturally. Behe’s argument is therefore circular.

Think of it this way. In certain ways nature is prone to organization. From cells to multicellular organisms to humans. Computers are just the latest example.

In essence, LYO is arguing — yea, even, confidently assuming — that since nature has the capacity to spontaneously generate designers through evolutionary means, technology and signs of design reduce to blind forces and circumstances of chance plus necessity in action. Thus, when we behold, say, a ribosome in action —

Fig. A: The ribosome in action in protein translation, assembling (and then completing) a protein step by step [= algorithmically] based on the sequence of three-letter codons in the mRNA tape, using tRNAs as amino acid “taxis” and position-arm tool-tips, and implementing a key part of a von Neumann-type self-replicator. (Courtesy, Wikipedia.)

___________________

. . . we should not think: digitally coded, step by step algorithmic process; so, on signs of design, design. Instead, LYO and other evolutionary materialists argue that we should think: here is an example of the power of undirected chance plus necessity to spontaneously create a complex functional entity that is the basis for designers as we observe them, humans.

So, on the evolutionary materialistic view, the triad of explanatory causes (necessity, chance, art) collapses into the first two. Thus, signs of design such as specified complexity and associated highly specific functional organisation — including that functional organisation that happens to be irreducibly complex — reduce to being evidences of the power of chance and necessity in action!

Voila, design is finished as an explanation of origins!

But is this assumption or assertion credible?

No . . . it fallaciously begs the question of the underlying power of chance plus necessity, thus setting up the significance of the issue of specified complexity as an empirically reliable sign of design. No great surprise there. But, the issue also opens the door to a foundational understanding of the other hotly contested core ID concept, specified complexity.
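The “digitally coded, step by step algorithmic process” of translation noted in the ribosome caption above can be illustrated with a toy sketch: read the mRNA three letters at a time and look each codon up in a code table. The table below is a small, illustrative fragment of the standard genetic code, not the full 64-codon table, and the function name is my own:

```python
# A small, illustrative fragment of the standard genetic code
# (codon -> amino acid); the real table has 64 entries.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UGG": "Trp",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Step along the mRNA three letters at a time, as the ribosome does,
    appending one amino acid per codon until a stop codon is read."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein
```

The point at issue is precisely whether such a discrete code, reader and halting convention are credibly the product of undirected chance and necessity.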

Read More ›