Uncommon Descent Serving The Intelligent Design Community

Dembski on design detection in under three minutes


From David Klinghoffer at Evolution News & Views:

We last checked in with Robert Lawrence Kuhn of the PBS series Closer to Truth as he interviewed physicist and Nobel laureate Brian Josephson who said he was “80 percent” sure of intelligent design. (BOOM.)

These aren’t brand new interviews by Kuhn, but still very interesting – and concise. Now, submitted for your Labor Day enjoyment, here’s one, pointed out by a Facebook friend, with mathematician William Dembski. Dr. Dembski briefly defines the method of detecting intelligent design. It is, he says, a form of triangulation on the effects of intelligence, namely contingency, complexity, and specification. The last of those refers to the question of “Does it conform to some sort of independently given pattern?”

Kuhn, not an ID proponent as far as I know, shrewdly notes that ID doesn’t seek to “prove” God, of the Bible or any other gods, but it is consistent with what you’d expect from a Deity. I find that distinction to be stubbornly lost on many ID critics. More.

Of course it’s lost on them! Dealing with the realities of design in nature would be conceptual work. By contrast, anyone can run around shrieking through a loud-hailer and stirring up a base. And they can often get government to fund it.

Who knows, one of these days, the jig may be up.

See also: Data basic

Comments
@KF, that's very interesting. Would you mind explaining further why infinite resources cannot generate FSCO/I? Couldn't it be part of the cycle you mention?
EricMH
September 15, 2017, 04:18 AM PDT
EMH, kindly cf 53 above; resources relative to configuration space i/l/o the constraint of coherent functionality is a highly material factor. And no, an implicit quasi-infinite world in which available resources are not a constraint is not relevant. Indeed, the answers you seek have been there in stat mech for 100+ years: an ensemble of unlimited size and duration will circulate through every possible state, in a cycle. KF
kairosfocus
September 15, 2017, 01:53 AM PDT
@daveS, yes, Chaitin's constant is completely incalculable. It is the proportion of halting programs among the infinite set of bitstrings, and halting cannot be determined in such a general setting because of the halting problem. To your point, yes, if we know a priori the proportion of functional bitstrings, then we do not need to worry about Chaitin's constant. So, in a biological scenario this may not be an issue. In a broader setting it may still be a problem, which leaves the background for the biological scenario ill defined, with a potentially naturalistic source of FSCO/I. In that case, FSCO/I may be adequate to eliminate a particular proximate naturalistic theory, but insufficient to eliminate a global naturalistic theory. That still leaves the door open to a proximate naturalistic cause, so FSCO/I is not sufficient to rule out methodological naturalism, which there is great practical motivation to maintain given its historical success.
EricMH
September 14, 2017, 05:33 PM PDT
EricMH,
yes the functional bit strings halt. Part of the problem is that we cannot know the proportion that halt, which is an incalculable number known as Chaitin’s constant. Further, of those that do halt, it is unclear what proportion expand vs contract when run on a TM.
Interesting---I've read snippets about Chaitin's constant in the past, but I don't know enough about the field to have recognized it here. I take it one cannot even find useful bounds for these numbers? I wonder whether this poses a problem for the thought experiment, though. Suppose that for the particular TM you chose, 90% (or even 0.9%) of bitstrings are functional. I suspect that an ID proponent might respond that such a high percentage is completely unrealistic and could never occur in situations where FSCO/I is actually applied, for example in biology.
daveS
September 14, 2017, 04:52 PM PDT
@daveS, yes, the functional bit strings halt. Part of the problem is that we cannot know the proportion that halt, which is an incalculable number known as Chaitin's constant. Further, of those that do halt, it is unclear what proportion expand vs. contract when run on a TM. @KF, as I mention to daveS, an issue is that we cannot know the proportion that are functional, i.e. halting programs.
EricMH
September 14, 2017, 04:25 PM PDT
EMH, the problem of the scale of the cosmos is central to the design inference. Further, by the very nature of bit strings, the config space of possibilities is a sharply peaked binomial distribution centred on a 50-50 mix of 1s and 0s, with the overwhelmingly dominant group being gibberish. FSCO/I is necessarily sparsely distributed because of the highly specific configuration requirements to attain function. To see this, imagine a 6500 C3 reel disassembled in a bait bucket and shaken up. The number of non-functional configs will utterly overwhelm the functional ones. And that's for a clumped case; include scattering across the Sol system and you will see the point with even greater force. KF
kairosfocus
September 14, 2017, 03:41 PM PDT
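KF's claim above, that the space of n-bit strings is a sharply peaked binomial distribution centred on a 50-50 mix of 1s and 0s, can be checked directly with a few lines of Python. This is only an editorial illustration of the standard binomial-concentration fact; the function name is mine, not anything from the thread:

```python
import math

def frac_within(n, delta):
    """Fraction of all n-bit strings whose count of 1s lies within
    delta*n of the balanced count n/2 (exact binomial tally)."""
    lo = math.ceil(n / 2 - delta * n)
    hi = math.floor(n / 2 + delta * n)
    inside = sum(math.comb(n, k) for k in range(lo, hi + 1))
    return inside / 2 ** n

# For 500-bit strings, virtually all configurations sit near 50-50:
print(frac_within(500, 0.10))  # 40%-60% ones: extremely close to 1
print(frac_within(500, 0.01))  # even 49%-51% ones captures a large share
```

Note this says nothing by itself about which strings are "functional"; it only confirms where the bulk of the configuration space sits.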
EricMH, Thanks, that's helpful. Just to confirm that I understand the concept: the "functional" bit strings are those that cause the Turing machine to halt, is that correct? Another basic question---considering the collection of TMs you describe in #48, those that halt only if the output is sufficiently longer than the input---does the proportion of bitstrings that cause halting vary? That is, for some of the TMs is it relatively easy to find a bitstring which causes halting, while for others it's much harder? Basically I'm asking, assuming my understanding in the first paragraph is correct, whether function among bitstrings can be rare or common, depending on the TM.
daveS
September 14, 2017, 01:33 PM PDT
@KF, I see, so as long as the required Turing machine is too complex for the universe's probabilistic resources, it is not a valid counter. However, if there were enough probabilistic resources to create the Turing machine, would that present a problem? One hypothetical route is some sort of multiverse scenario, or an appeal to Turing-complete cellular automata such as Rule 110, which is fairly small. Of course, the surrounding machinery that makes the cellular automaton work is probably not very small. @daveS, yes, any Turing-complete language will do. Let's say the language is Python, and I eval randomly assembled ASCII strings. Some of those strings are programs. Some of the programs halt. Some of the halting programs will output an ASCII string that is longer than the original program. We would also need to use a dovetailing procedure, since we do not know a priori which ASCII strings halt, and we do not want to get stuck. So, each time we add a new ASCII string to our pool, we eval each ASCII string in the pool for one time step, unless the string has already been eliminated by one of our criteria. Eventually, this procedure is guaranteed to find the ASCII strings that halt with longer outputs. This whole process can be written as a fairly short Python program.
EricMH
September 14, 2017, 12:50 PM PDT
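EricMH's dovetailing procedure above can be sketched in Python. This is a toy editorial reading of the comment, not the commenter's own code: instead of resuming each candidate one step at a time, each round re-runs every candidate under a growing step budget, which finds the same halting programs (a program that halts within k steps is found once the budget reaches k). Evaluating random strings with `exec` is unsafe outside a sandboxed toy like this:

```python
import random
import string
import sys

def run_with_budget(src, budget):
    """Try to exec `src` as Python, aborting after `budget` traced events.
    Returns the string bound to the name `out`, or None on any failure
    (syntax error, runtime error, or budget exhausted)."""
    steps = 0
    def tracer(frame, event, arg):
        nonlocal steps
        steps += 1
        if steps > budget:
            raise TimeoutError("step budget exhausted")
        return tracer
    env = {}
    sys.settrace(tracer)
    try:
        exec(src, {"__builtins__": {}}, env)  # most random strings die here
    except Exception:
        return None
    finally:
        sys.settrace(None)
    out = env.get("out")
    return out if isinstance(out, str) else None

def dovetail(candidates, expansion=10, max_rounds=100):
    """Round r runs the first r candidates with a budget of r events each,
    so no non-halting candidate can block the search. Returns the first
    (program, output) pair whose output is `expansion` chars longer."""
    for r in range(1, max_rounds + 1):
        for src in candidates[:r]:
            result = run_with_budget(src, r)
            if result is not None and len(result) >= len(src) + expansion:
                return src, result
    return None

random.seed(0)
pool = ["".join(random.choice(string.printable) for _ in range(20))
        for _ in range(30)]
pool.append("out = 'a' * 1000")  # a short program with a long output
hit = dovetail(pool)
```

As expected, the random 20-character strings are overwhelmingly syntax errors, and the search surfaces the one short, highly compressible program in the pool.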
EricMH, I'd like to consider your thought experiment, but this is over my head. Could I rephrase it in terms of a computer program (which I'm slightly more familiar with), such as Perl or Common Lisp? That is, I assume we have created a program which takes as input a finite-length bit string, outputs another bit string, and ends only if the output's length exceeds the input's length by some prespecified amount. My sketchy understanding is that any Turing machine can be simulated by a program in either of these languages (and vice versa), so hopefully this reformulation causes no problems.
daveS
September 14, 2017, 10:36 AM PDT
EMH, "I create a Turing machine . . . " itself fatally undermines the claim. KF PS: If I were to search the config space for 500 bits, I would find EVERY meaningful 72-letter ASCII string. The problem is that the search resources of the Sol system are grossly insufficient for such in reasonably available time. This is similar to the statistical reasoning behind the 2nd law of thermodynamics.
kairosfocus
September 14, 2017, 10:10 AM PDT
@KF, here's a counter-argument that non-intelligent processes can create FSCO/I. I create a Turing machine such that it only halts if the length of the output exceeds the length of the input by a large enough amount. Then I feed it random bitstrings until it halts on one. This Turing machine is guaranteed to halt eventually, since highly compressible bitstrings exist, and it will happen upon one sooner or later. Since the output bitstring is highly compressible, it will have a simple description despite being a long bitstring. Such a bitstring seems to have large amounts of FSCO/I, yet it is produced purely through chance and necessity. What's wrong with this thought experiment?
EricMH
September 14, 2017, 05:58 AM PDT
RVB8, 36: It is -- long since, sadly -- patent that you think rhetoric, accusation, word-twisting and strawman mischaracterisations can substitute for sober, sound discussion as a meeting of serious minds:
You do realise that biological scientists refer to ‘information’, purely as the possible phenotypes produced by DNA. They don’t actually misuse the term as grossly as ID has always done, since the golden age of Dembski’s posts. Scientists also talk of ‘design’ in nature, without in any way conflating their meaning, with IDs intentional designer tinkering.
First, Shannon's metrics, strictly, are about information-carrying capacity. As his context was things like telephony, that is utterly unsurprising. If you would humble yourself enough to read Sect A of my always-linked (through my name), you would find enough food for thought to correct the misperceptions beneath the accusations presented as fact that I just clipped. For instance, information proper has to do with coherent configurations that are meaningful and often functional in ways depending on an underlying organisational pattern. There is a world of difference between a pile of car parts and a working auto. Such functional organisation can be reduced to its information content by designing a description language that pivots on a sophisticated version of the old twenty-questions, animal-vegetable-or-mineral game. By imposing a structured pattern of Y/N questions of sufficient length, essentially any entity can be specifically described. That's what AutoCAD etc. do, e.g. the wiremesh description of a gear, then the exploded-view description of, say, an ABU 6500 C3 reel; which I have used for years as an iconic case of understandable functionally specific complex organisation and associated information, FSCO/I for short. Of course, such a file is measured in binary digits, and function based on coherent organisation can be readily evaluated. In short, contrary to far too much selectively hyperskeptical, stubbornly dismissive and denigratory rhetoric on the part of objectors to the design inference, FSCO/I, the functionally observable form of complex specified information, is real. Further, simply observing something like the functional organisation of a living cell and of its biochemical reaction network will immediately demonstrate its relevance to biology to all but those in willful denial.
I have long since pointed out the comparison of a petroleum refinery's units, piping and instrumentation diagram with the reaction network of the living cell -- which is often posted as a wall-sized chart in university biochemistry departments. It is easy to show that beyond a reasonable threshold of 500 - 1,000 bits, no blind search on the gamut of the Sol system [~10^57 atoms, 10^17 s, 10^12 - 10^14 chemical-level rxns/atom/s] or the observed cosmos [~10^80 atoms, 10^17 s] is credibly likely to succeed in finding the deeply isolated islands of functionally coherent configurations in the space of lumped or scattered configurations. That space of possibilities is 10^150 - 10^301, and the scope of feasible search is dozens of orders of magnitude below the thresholds. Nor is a search for a golden search that gets around that reasonable: searches are subsets, and the space of searches over a config space of scale n comes from its power set, of order 2^n, where n starts at 10^150. So, it is utterly unsurprising that in trillions of directly observed cases the ONLY observed cause of FSCO/I is intelligently directed configuration, i.e. design. Where also, we may profitably observe an entity and its FSCO/I, reverse-engineering it to see how it comes to have functional coherence. That is, we may examine a design by observing an entity closely. Design observed as beyond the FSCO/I threshold then demands explanation as to credible best explanation. As a result, we may observe -- a level-2 observation -- that the only empirically and analytically plausible, actually observed cause of FSCO/I is design. Here, a verb describing a two-phase process: describing the functional configuration in a description language amenable to construction, then actual construction under control of that description. Further to this, designs are inherently purpose-driven, i.e. constrained by performance targets that guide functionally coherent contrivance.
This leads to the inference to design as an empirically warranted causal process on noting FSCO/I, by way of inference to best current explanation on a tested, reliable sign. Onward discussion as to the observed cause of designs is a third-order question: designs imply purpose, and intent is a characteristic of agency. That is, designs point to designers, but the design by itself may not allow us to infer identity beyond adequate capability. So, on fair comment: the above imagined difficulties and projections of incompetence or worse are artifacts of stubborn resistance to cogent evidence and the linked prudent, rational, provisional inference -- inference to best current explanation. Which is the sort of inference science can attain to, especially when seeking to reconstruct the inherently unobservable deep past of origins. We were not there; at best we can seek to provisionally reconstruct based on its evident traces. So, it is high time to drop the stubborn rhetoric of resistance, improper dismissal and denigration, allowing FSCO/I to speak for itself. KF
kairosfocus
September 14, 2017, 02:01 AM PDT
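The scale comparison in the comment above is simple arithmetic, and can be recomputed in a few lines. The figures below are the argument's own quoted assumptions, not independently measured constants:

```python
import math

# Figures quoted in the comment above (assumptions of the argument):
atoms_sol      = 10 ** 57   # atoms in the Sol system
seconds        = 10 ** 17   # rough duration considered, in seconds
events_per_sec = 10 ** 14   # generous per-atom chemical event rate

max_events = atoms_sol * seconds * events_per_sec   # 10^88 total events

configs_500 = 2 ** 500      # size of a 500-bit configuration space
fraction = max_events / configs_500

print(f"2^500 ~ 10^{math.log10(configs_500):.1f}")     # ~10^150.5
print(f"fraction of space searchable ~ {fraction:.1e}")
```

Whether those input figures, and the inference drawn from the resulting ratio, are the right ones is exactly what the thread is disputing; the sketch only confirms the arithmetic.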
EricMH, Thanks for the link; that looks like a fascinating paper. If we're talking about the NP Hardness Assumption, then that does seem plausible to me.
daveS
September 12, 2017, 07:24 AM PDT
@daveS, computer scientist Scott Aaronson proposed the "NP Hardness Assumption", which states "NP-complete problems are intractable in the physical world." https://www.scottaaronson.com/papers/npcomplete.pdf
EricMH
September 12, 2017, 06:53 AM PDT
rvb8:
They don’t actually misuse the term as grossly as ID has always done,
More evidence-free trope.
ET
September 12, 2017, 03:04 AM PDT
rvb8:
They are talented entertaining writers of the calibre of Dawkins, Coyne, and Neil Shubin etc. All world class, respected theorists and experimenters.
And yet not one of them can support the claims they make with respect to evolution. That means their "talent" is in their story-telling and not science. And that is why they are your buddies -- they are also self-deluded story-tellers who couldn't support their claims scientifically if their lives depended on it.
ET
September 12, 2017, 03:02 AM PDT
OT: https://www.sciencedaily.com/releases/2017/09/170911122628.htm
es58
September 12, 2017, 02:20 AM PDT
EricMH, The first question that comes to my mind is: how do you know you can always construct a computer using any such physical process? I guess it's obvious that if you could construct a physical computer capable of computing a non-Turing-computable function, then there would have to be physical processes which could not be "modeled" with a Turing machine, but the converse is less clear to me. Has anyone written this up and published it? I see a few semi-related things on various versions of the Church-Turing Thesis, but nothing quite like the quote I posted above.
daveS
September 11, 2017, 11:56 PM PDT
@daveS, if some physical process could not be modeled by a Turing machine, then we could build computers using this physical process to compute the uncomputable. So far, all such claims turn out to be snake oil.
EricMH
September 11, 2017, 09:47 PM PDT
@rvb8, the proof shows DE cannot increase mutual information between DNA and phenotypes, if DE is not directed towards said mutual information.
EricMH
September 11, 2017, 09:46 PM PDT
It is one thing to 'describe' a process in relation to a Turing machine... quite another to explain it. Seems that the fearful ID opponents are grasping once again... and rational thought eludes them... wait... rational thought can't exist...
Trumper
September 11, 2017, 08:16 PM PDT
EricMH,
... all physical processes can be described by a Turing machine.
Is this true? This seems like an extremely strong claim. Do you have a source which explains this in detail?
daveS
September 11, 2017, 07:35 PM PDT
EricMH, have you ever heard the term 'pseudo-scientific jargon'? You do realise that biological scientists refer to 'information' purely as the possible phenotypes produced by DNA. They don't actually misuse the term as grossly as ID has always done, since the golden age of Dembski's posts. Scientists also talk of 'design' in nature, without in any way conflating their meaning with ID's intentional designer tinkering. It would pay, before you publish this gold, to understand what the science community will make of it. The closest word I can think of that scientists would use to describe your landmark work would be 'gobbledygook'.
rvb8
September 11, 2017, 07:34 PM PDT
ET @32, Heh:) I 'lash out'? Good one:) My 'buddies', BTW, are not my 'buddies'; they are merely distinguished, peer-reviewed scientists of world renown. I enjoy their comfortable, easy style in explaining complex topics such as non-coding DNA, finding transitional fossils, atavisms, and poor design in nature, etc. They are talented, entertaining writers of the calibre of Dawkins, Coyne, and Neil Shubin. All world-class, respected theorists and experimenters. If you choose to call them my 'buddies', I am flattered; thank you.
rvb8
September 11, 2017, 07:24 PM PDT
Here's a proof that undirected processes cannot create mutual information between a creature and some organ. Let e = eye, c = chimp, h = human.

If chimps have eyes, then there is mutual information between eyes and chimps, I(e;c) > 0. On the other hand, if chimps do not have eyes, then the mutual information is zero, I(e;c) = 0. Let f(.) be a function representing some process that transforms random variable X into random variable Y, f(X) = Y. f is undirected towards eyes, so knowing about eyes tells us nothing about what f produces.

Premise: H(Y|X) = H(Y|X,e).

From this we can show that f will not increase the mutual information between creatures and eyes. Substituting chimps for X and humans for Y, we want to find out how f(.) impacts the evolution of eyes as we progress from chimps to humans. The mutual information between eyes, and chimps and humans, can be expanded in two ways:

1. I(e;c,h) = I(e;c) + I(e;h|c)
2. I(e;c,h) = I(e;h) + I(e;c|h)

From the Premise, I(e;h|c) = H(h|c) - H(h|c,e) = 0. Thus we know I(e;c) = I(e;h) + I(e;c|h), and consequently I(e;c) >= I(e;h).

So, an undirected process such as Darwinian evolution (DE) cannot increase mutual information between creatures and organs. This means DE cannot create eyes. At best it can preserve eyes from creature to creature, and at worst it will eliminate them. DE cannot explain the origin of any organs, or any other sort of thing a creature may have, unless DE is directed towards that thing, in which case DE becomes teleological and thus ceases to be DE.

But this proof is more than just a refutation of DE. It shows that no undirected process, which all materialistic processes are, can create mutual information. It also establishes what a telic law must look like. Whatever these telic laws are, they must have things like eyes, brains, fingers, etc. built into them from the beginning.
EricMH
September 11, 2017, 07:14 PM PDT
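The two expansions of I(e;c,h) in the comment above are instances of the chain rule for mutual information, which can be checked numerically on an arbitrary joint distribution. This editorial sketch (the helper functions are mine, written for three binary variables) confirms that both expansions agree, which is the identity the proof's algebra relies on; it does not, of course, adjudicate the proof's Premise:

```python
import itertools
import math
import random

def H(p):
    """Shannon entropy (bits) of a pmf given as {outcome: probability}."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def marginal(joint, keep):
    """Marginalize a joint pmf over triples onto the axes listed in `keep`."""
    out = {}
    for xs, p in joint.items():
        key = tuple(xs[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def I(joint, a, bs):
    """Mutual information I(A; B1,...,Bk) = H(A) + H(Bs) - H(A,Bs)."""
    return (H(marginal(joint, [a])) + H(marginal(joint, bs))
            - H(marginal(joint, [a] + bs)))

def I_cond(joint, a, b, c):
    """Conditional mutual information I(A;B|C)."""
    return (H(marginal(joint, [a, c])) + H(marginal(joint, [b, c]))
            - H(marginal(joint, [a, b, c])) - H(marginal(joint, [c])))

# A random joint distribution over binary (e, c, h): axes e=0, c=1, h=2.
random.seed(1)
w = [random.random() for _ in range(8)]
joint = {xs: wi / sum(w)
         for xs, wi in zip(itertools.product([0, 1], repeat=3), w)}

lhs  = I(joint, 0, [1, 2])                        # I(e; c,h)
rhs1 = I(joint, 0, [1]) + I_cond(joint, 0, 2, 1)  # I(e;c) + I(e;h|c)
rhs2 = I(joint, 0, [2]) + I_cond(joint, 0, 1, 2)  # I(e;h) + I(e;c|h)
print(lhs, rhs1, rhs2)  # all three agree to floating-point precision
```

The chain rule holds for any joint distribution, so the two expansions themselves are uncontroversial; the contested step is whether the Premise H(Y|X) = H(Y|X,e) fairly models an undirected process.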
@Origenes, materialism means everything is reducible to physical laws. Physical laws cannot produce information. So, if these telic laws can produce information, they are necessarily non-physical. Another way to say this: there is no physical process that provides a halting oracle, as all physical processes can be described by a Turing machine. However, something that creates information must have a halting oracle. Therefore, whatever creates information must be an oracle machine, and consequently non-physical. These purely physical, telic aliens must not only be from another planet; they must be from another realm where the laws of reality are fundamentally different from our own.
EricMH
September 11, 2017, 06:18 PM PDT
rvb8- No one cares what you say, as you have never supported any bit of your trope. You think that if you can say it, that is evidence. You and your "illustrious company" are just a bunch of bully wannabes who couldn't support their claims if their lives depended on it, so you have to lash out at people who call you on your BS. You and yours are the intellectual cowards of the universe. Congratulations.
ET
September 11, 2017, 06:41 AM PDT
ID is compatible with aliens as the intelligent designers of earthly life.
I agree with Origenes. Biological ID can only be about the appearance of life on earth. It certainly can’t be about detecting design in the appearance of some unknown life form in some unknown part of the cosmos. The problem for ID critics is that the biological evidence of ID is entirely consistent with both theism and atheism, just as ID proponents have acknowledged for years on end. This places the social assault on physical evidence into perspective. Design critics don’t attack ID because it actually forces them to loosen their grip on atheism; they have no intention of that. They do it because science cannot be allowed to be consistent with theism, much less support it. Having had to give up on evidence, the remaining veneer of science is everything to them. Meanwhile, if ID proponents want to hold feet to the fire, they should remain scrupulous themselves about what the ID evidence actually supports. It matters.
Upright BiPed
September 10, 2017, 05:46 PM PDT
EricMH @28: Maybe on the alien designers' planet Thomas Nagel's natural telic laws are in operation. We do not have those here, as far as we know, so information theory and ID haven't taken them into account. Let's keep an open mind.
Origenes
September 10, 2017, 04:00 PM PDT
kairos @24 and ET @27 believe that by continual insults and ad hominems they can suggest ID is plausible. ET, I don't mind being a moron (as measured by you at least), as I am in some very illustrious company; you, on the other hand, have some questionable intellectual mates.
rvb8
September 10, 2017, 03:38 PM PDT