Uncommon Descent | Serving The Intelligent Design Community

Dembski on design detection in under three minutes


From David Klinghoffer at Evolution News & Views:

We last checked in with Robert Lawrence Kuhn of the PBS series Closer to Truth as he interviewed physicist and Nobel laureate Brian Josephson, who said he was “80 percent” sure of intelligent design. (BOOM.)

These aren’t brand new interviews by Kuhn, but still very interesting – and concise. Now, submitted for your Labor Day enjoyment, here’s one, pointed out by a Facebook friend, with mathematician William Dembski. Dr. Dembski briefly defines the method of detecting intelligent design. It is, he says, a form of triangulation on the effects of intelligence, namely contingency, complexity, and specification. The last of those refers to the question of “Does it conform to some sort of independently given pattern?”

Kuhn, not an ID proponent as far as I know, shrewdly notes that ID doesn’t seek to “prove” God, of the Bible or any other gods, but it is consistent with what you’d expect from a Deity. I find that distinction to be stubbornly lost on many ID critics. More.

Of course it’s lost on them! Dealing with the realities of design in nature would be conceptual work. By contrast, anyone can run around shrieking through a loud-hailer and stirring up a base. And they can often get government to fund it.

Who knows, one of these days, the jig may be up.

See also: Data basic

Comments
EMH, codes are functionally specific, and the binary distribution forces a sharp peak of possibilities near 50-50. The first locks out the vast majority of possibilities for relevantly long strings from being meaningful/functional in complex contexts, 500+ bits. The latter means there is not much else to go to than the peak. Consequently, input strings picked at random under any reasonable distribution [not just a flat, even one] will overwhelmingly be gibberish and will cause a machine to fail; they will not be algorithmically functional. The outputs can be all over the map but are forced to mostly come from the same zone. Yes, you might just get some longer output strings depending on the architecture of the processor and how it behaves on crashing -- say, stuck spewing noise in a do-forever loop. And producing long strings of 1's and halting is almost utterly irrelevant; e.g., a functional code based on the length of strings of 1's is maximally inefficient and so secondarily uncommunicative. Going to the peak and defining a code on the diversity there is far more effective, as the world of computer and telecomms tech demonstrates. But the pool being pulled from is overwhelmingly gibberish. We must not conflate what happens with, say, a 4- or 8-bit string that can code a hex number or an alphanumeric character with messages of reasonable complexity. You can make all 16 hex codes or all 256 8-bit strings meaningful, but when one chains them, syntax and semantics fail very fast. That's why random-document exercises have maxed out at 19-24 ASCII characters, far short of the 73 for just 500 bits or the 143 for 1,000. KF
kairosfocus
September 23, 2017 at 10:42 PM PDT
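A quick numerical sketch of the two claims above: how concentrated the binomial distribution is near the 50-50 peak, and how the bit thresholds translate into text length. This is a minimal illustration in Python; the ±5% band and the 7-bits-per-ASCII-character conversion are my own illustrative assumptions, not figures taken from the comment.

```python
from math import comb

n = 500                      # bit-string length at the lower threshold
total = 2 ** n               # ~3.27 * 10^150 possible strings

# Fraction of all n-bit strings whose number of 1s lies within +/-5% of n/2,
# i.e. the "near-50-50 peak" of the binomial distribution.
near_peak = sum(comb(n, k) for k in range(int(0.45 * n), int(0.55 * n) + 1))
print(f"2^{n} = {total:.3e}")
print(f"share of strings in the 45-55% ones band: {near_peak / total:.4f}")

# Rough bits-to-characters conversion, assuming 7-bit ASCII text.
print(f"{n} bits ~ {n // 7} ASCII characters; 1000 bits ~ {1000 // 7}")
```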
@KF Very interesting; I did not realize it was known that most bitstrings will not generate significantly longer outputs. I've been trying to figure this out for a bit. Would you know of a formal proof of this fact, by any chance? Or do you consider it obvious, since most 50/50 bitstrings are gibberish? The counter that sticks in my mind is that the busy beaver number for very short bitstrings is enormous. Short bitstrings are more probable than long bitstrings, so one could still argue it is likely to find a very expandable bitstring. Perhaps your point is the word you use, "message", implying that even if expandable bitstrings are easy to find, the expansion is still not something of value.
EricMH
September 18, 2017 at 02:00 AM PDT
EMH, recall, the overwhelming majority of "words" for any realistic code will be constrained to come from the near-50-50 1/0 peak of the binary distribution. This will be overwhelmed by gibberish for any circumstance of significant complexity. Turing Machine input strings that trigger outputs expressing significant additional messages will be maximally hard to find at random [here, I am not specifying a uniform distribution, just a reasonable one such as a flicker-noise or pink-noise pattern, etc., say from a Zener noise source] or by blind mechanical necessity on the gamut of cosmos-level atomic resources. The point is, the search-resources challenge -- in the face of exponentially runaway expansion of configuration spaces, further compounded by the sharp peak in the binomial distribution -- makes finding sufficiently complex, functionally specific strings by blind chance and/or mechanical necessity maximally implausible. To the point where, should this appear to be happening, one would be well advised to look deeper for the trick at work, or for the means by which intelligently directed configuration was inadvertently brought to bear. KF
kairosfocus
September 18, 2017 at 12:23 AM PDT
EricMH, OK, thanks, that clears it up. I was confused about how these compressible bitstrings were selected, but now I understand. Edit: And this renders my #83 moot.
daveS
September 17, 2017 at 04:15 PM PDT
@DS, per your first point, yes. By a pigeonhole argument, we can see that very few bitstrings can be compressed significantly. Therefore, by picking a bitstring according to a uniform distribution, we are most likely to pick an incompressible bitstring. This is the insight behind ID concepts like algorithmic probability, complex specified information, algorithmic specified complexity, and KF's FSCO/I. I call this picking a bitstring based on the output of a Turing machine.

However, we can select a bitstring based on the input to a Turing machine, by running bitstrings through a TM until one generates an enormous output. This output, since it is produced by a very small input, is highly compressible. Unlike the first scenario of selecting the output of a TM, by selecting the input it becomes very easy, in a probabilistic sense, to find bitstrings that are highly compressible, since even quite small bitstrings have enormous busy beaver numbers.

So, if we are restricted to a uniform distribution for our chance hypothesis over outputs of TMs, then the inference to design works pretty well when we find a highly compressible bitstring, since such bitstrings are so improbable. However, if we use a uniform distribution over inputs to a TM, then the mere fact that a bitstring is highly compressible is inadequate to infer design. This scenario does not invalidate the detection mechanics of CSI, ASC, or FSCO/I, since we didn't start with a uniform distribution over outputs. But it does mean we cannot go from compressible -> intelligently designed. An example of this happening in nature is with crystals, which are highly ordered, and consequently compressible, but do not indicate design. It also means that, in theory, we could end up with a lot of regularity in nature without intelligent intervention.

But, per KF's point, this thought experiment assumes nature has some kind of Turing machine, which seems implausible. Wolfram's work with cellular automata is an attempt to give nature a TM, by finding a very small cellular automaton that is Turing complete, and then assuming a cellular automaton structure is a plausible start for the universe. Of course, all of this gives up a lot of ground to the naturalist in the first place, since the mere fact that there is something rather than nothing is best explained by a self-explaining thing, which only God can be.
EricMH
September 17, 2017 at 03:57 PM PDT
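A small empirical sketch of the pigeonhole point above: a uniformly random string essentially never compresses, while a long string produced by a tiny generating rule compresses drastically. Here zlib stands in as a crude, computable proxy for Kolmogorov complexity (which is uncomputable); that substitution, and the 4 KB sizes, are my assumptions for illustration only.

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length in bytes of the zlib-compressed form (a crude upper bound on K)."""
    return len(zlib.compress(data, 9))

# Scenario 1: a string picked uniformly at random -- essentially incompressible.
random_bytes = os.urandom(4096)
print("random:   ", len(random_bytes), "->", compressed_size(random_bytes))

# Scenario 2: a long string generated by a very short "program" (a repetition
# rule) -- highly compressible, like the large output of a small TM input.
generated = b"01" * 2048
print("generated:", len(generated), "->", compressed_size(generated))
```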
KF,
DS, it is to a certain extent tangential to the focal issue, yes. I am pointing out that if you black box it as an abstract processor and then feed it, its detection or generation of FSCO/I-rich strings would not be a case of mere blind chance and mechanical necessity at work. Think of a toddler speaking a novel sentence, that too is not blind, it is an intelligent process.
To the extent I understand EricMH's thought experiment, I would draw an analogy to something like SETI. Suppose I carefully design and build a radio telescope, together with a device to convert pulses in the electromagnetic field to bitstrings. I then point this telescope at some irregular variable star and find that the pulses (interpreted using ASCII encoding) spell out the owner's manual for an Abu Garcia Ambassadeur C4 reel. I think we both would conclude, with as close to absolute certainty as possible, that the fluctuations in the star's brightness were "designed" in some sense, regardless of the fact that the detector was also designed.
daveS
September 17, 2017 at 07:09 AM PDT
DS, it is to a certain extent tangential to the focal issue, yes. I am pointing out that if you black box it as an abstract processor and then feed it, its detection or generation of FSCO/I-rich strings would not be a case of mere blind chance and mechanical necessity at work. Think of a toddler speaking a novel sentence, that too is not blind, it is an intelligent process. I note, too, that abstract possibilities are not to be conflated with what is credible or reasonable. In principle, a tray of 500 coins, tossed, could by chance come up with the first 73 characters of this comment, but that is so search-challenged that if that SEEMED to be happening, we would be warranted to look for a trick. KF
kairosfocus
September 16, 2017 at 08:09 AM PDT
KF,
EMH, all, by design [the programming a Turing Machine executes is designed, as is the machine itself], which changes everything through impact of intelligence, knowledge, skill and purpose.
Pardon my interjection, but it seems to me that while the particular function is clearly designed deliberately, this thought experiment tests whether functional structures (bitstrings, ostensibly not designed), and hence FSCO/I, could arise in nature.
daveS
September 16, 2017 at 07:20 AM PDT
EricMH, A couple questions about #78, if you don't mind:
1. Finding compressible bitstrings is extremely difficult using a uniform distribution.
Does this mean that if you draw a bitstring (via the uniform distribution), it is very difficult to ascertain whether it is compressible? And does the first sentence in (2.) mean that running these bitstrings through a TM gives you a mechanical way to determine whether they are compressible? That is, it allows you to quickly select candidates for "compressedness"?

I also have one question about how this relates to the original scenario, where the TM halts if the output is longer than the input. If this happens, does that mean the original bitstring was "compressed" and the output bitstring is an uncompressed version of it?
daveS
September 16, 2017 at 07:06 AM PDT
EMH, all, by design [the programming a Turing Machine executes is designed, as is the machine itself], which changes everything through impact of intelligence, knowledge, skill and purpose. My point -- and the general argument -- starts from a context where design is not on the table, and the general issue of enough resources to mount an effective search of a relevant config space becomes material. Think here of a thin soup of chemicals in a lightning-struck small pond, or a comet core, or an undersea vent, or the like. Yes, 500 bits for the Sol system is not a lot of bits, but it already implies a space of 3.27*10^150 possibilities. The Sol system under relevant circumstances can sample about 10^110 or so states, not effectively different from no search, EXCEPT when guided by active information that comes from intelligence. KF
kairosfocus
September 16, 2017 at 06:35 AM PDT
@KF I certainly appreciate your responses, as you have thought about this to a great extent. I'll lay out the bigger point I'm trying to get at, which will require a bit of redundancy on my part.

1. Finding compressible bitstrings is extremely difficult using a uniform distribution. In this scenario, the existence of highly compressible bitstrings easily indicates non-chance origin, and the inference to design is straightforward.

2. Finding compressible bitstrings is much easier when feeding them through a Turing machine. This is the insight behind Solomonoff induction, which states that the best predictor for a bitstring is its elegant program. Such a scenario muddies the inference to design, as the CSI always equals zero when formulated as I(X) - K(X), since I(X) = K(X) in this case.

Given that the physical universe is much more like scenario 2 than 1, the case for design is diminished. 500 bits is not a lot, but it may be enough for an ultra-expandable bitstring, since the busy beaver number even for small numbers of bits quickly becomes larger than anything in our universe: https://en.wikipedia.org/wiki/Busy_beaver#Exact_values_and_lower_bounds So, it is not necessarily improbable that within a computational setting we may achieve the highly compressible bitstrings required by CSI with few probabilistic resources.
EricMH
September 15, 2017 at 09:26 PM PDT
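To make the "small input, large output" idea concrete, here is a minimal Turing-machine simulator running the standard 2-state, 2-symbol busy beaver champion, which halts after 6 steps having written 4 ones; machines with only a few more states blow up far faster, as the linked table shows. The Python framing is mine; only the well-known 2-state transition table comes from the busy beaver literature.

```python
# 2-state, 2-symbol busy beaver champion: (state, symbol) -> (write, move, next).
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, max_steps=10_000):
    """Run a TM on an all-zero tape; return (steps taken, ones left on tape)."""
    tape, head, state, steps = {}, 0, "A", 0
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        steps += 1
    return steps, sum(tape.values())

steps, ones = run(RULES)
print(f"halted after {steps} steps with {ones} ones on the tape")
```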
EMH, we are looking at a challenge of utter inadequacy of resources to mount a search significantly different from no search, which is quite robust against issues of uniformity of distributions of possibilities. Where, the tightness of configuration required to function imposes an islands-of-function pattern, as I discussed by talking about shaking up reel parts in buckets -- and the issue of scattering the parts across the Sol system. Further to this, a search in effect picks a subset from the population of possible configurations. This implies that the search for a golden search is a higher-order search drawn from the power set of the configuration space; for a space of N possibilities, that power set has 2^N members. To expect to find a golden search is tantamount to saying the set-up has been fine-tuned for success, raising all sorts of issues that point straight to design as best explanation, e.g. notions that protein-enzyme-D/RNA life was written into the laws of the cosmos. Where, design of course is there at the outset on pondering a Turing machine. KF

PS: I am in the midst of monitoring a fast-developing political crisis here, which compounds other issues already on the table.
kairosfocus
September 15, 2017 at 06:01 PM PDT
KF, My mistake -- I forgot about the 'computational resources' part that sets the threshold.
daveS
September 15, 2017 at 05:29 PM PDT
@KF this sort of analysis works well assuming a uniform distribution over configurations, where only a small percentage are functional. The point of the TM example is to show a small source that gives a non-uniform distribution over configurations. Being small, it could feasibly come from a uniform source with small probabilistic resources, and then in turn generate a highly non-uniform distribution that could give rise to fishing reels.
EricMH
September 15, 2017 at 05:18 PM PDT
DS, recall the point of FSCO/I is to set a bar at such a level, in a setting (effectively the observed cosmos), that false positives are utterly unlikely, accepting false negatives. That said, the information content of such a reel is so high that it would easily surpass any reasonable threshold. Megabits, here -- start with gears, screws, plates, etc., then move up to specification of orientation, assembly and coupling. 2^1000000 ~ 9.9*10^301029 . . . a mind-bogglingly large number of possibilities. KF
kairosfocus
September 15, 2017 at 04:20 PM PDT
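As a quick check on that last figure (my own verification, not part of the comment), the exponent follows directly from log10(2) ≈ 0.30103:

```python
from math import log10

bits = 1_000_000
exponent = bits * log10(2)          # log10 of 2^1000000, about 301029.9957
mantissa = 10 ** (exponent % 1)
print(f"2^{bits} ~ {mantissa:.2f} * 10^{int(exponent)}")
# prints roughly 9.90 * 10^301029, matching the figure quoted above
```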
PS: For example, is it true that the Abu Garcia Ambassadeur C4 reel could be concluded to have FSCO/I in any universe, of any physical size (provided it's large enough to contain the reel itself)?
daveS
September 15, 2017 at 02:43 PM PDT
KF, Does this mean that whether a particular structure has FSCO/I depends on the number of atoms in the universe? I thought that question could be decided "locally".
daveS
September 15, 2017 at 02:30 PM PDT
DS, 500 and 1000 bits were set based on our observed Sol system and cosmos. Relative to 10^57 or 10^80 atoms and 10^17 s, at 10^12 to 10^14 rxns/s per atom, the scope of search is such that 2^500 or 2^1000 possibilities are comfortably beyond reasonable search. Were the Sol system or observed cosmos much bigger, the thresholds would be different; they are not pulled out of a magician's hat. As it is, we pretty much know the scope of the Sol system and that of the observable cosmos. So those numbers are relevant, not some hypothetical. I just pointed out the linear vs. exponential growth issue to underscore that the search challenge readily outgrows the scope of search resources. KF
kairosfocus
September 15, 2017 at 02:18 PM PDT
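A back-of-the-envelope version of that resource count, using the figures quoted above (10^57 atoms, 10^17 s, and a generous 10^14 events per atom per second); this is only a sketch of the arithmetic, not a physical model:

```python
# Upper bound on the number of states the Sol system's atoms could sample,
# versus the size of the 500-bit configuration space.
atoms = 10 ** 57
seconds = 10 ** 17
events_per_atom_per_second = 10 ** 14   # generous chemical-reaction rate

samples = atoms * seconds * events_per_atom_per_second
space = 2 ** 500

print(f"states sampled     : ~10^{len(str(samples)) - 1}")
print(f"configuration space: ~10^{len(str(space)) - 1}")
print(f"fraction searchable: {samples / space:.1e}")   # about 3e-63
```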
KF, Yes, the number of possible configurations would increase as (at least) an exponential function of the number of atoms, I presume. I guess I would also conclude that the chance of breaking through the 500-bit (or 1000-bit) threshold would increase as the number of atoms increases.
daveS
September 15, 2017 at 02:11 PM PDT
DS, numbers, countable numbers. Try the number of atoms -- thus the number of possible chemical-reaction time-states in 10^17 s -- and contrast how, as bit-string length n goes up, the space of configurations goes up as 2^n. KF
kairosfocus
September 15, 2017 at 02:00 PM PDT
@ET, if nature can produce humans, then the answer to your question is yes. But, the response is: nature cannot create a relatively simple thing such as a car or plastic, so why expect it to create the much more complex thing (humans) that is necessary for the creation of cars and plastic?
EricMH
September 15, 2017 at 12:41 PM PDT
Does the chance of a car popping into existence increase as you consider arbitrarily large and old universes, or does it remain either zero or extremely small, no matter the size and age?
It remains at zero, because throwing time around doesn't solve anything.
ET
September 15, 2017 at 11:14 AM PDT
KF, In #63, what does n represent?
daveS
September 15, 2017 at 10:32 AM PDT
ET, I don't know, which is why I'm asking the question in #62. Does the chance of a car popping into existence increase as you consider arbitrarily large and old universes, or does it remain either zero or extremely small, no matter the size and age? I can't answer that.
daveS
September 15, 2017 at 10:30 AM PDT
daveS- Do you think that nature can produce an automobile given enough time? I say there isn't any chance of that happening. Heck, nature can't even produce plastic.
ET
September 15, 2017 at 09:48 AM PDT
DS, the scope of the universe is linear [~ n], while the scope of possibilities in bit-string length is exponential [~ 2^n]. The difference lies in the threshold at which it becomes utterly implausible for blind search to hit on functionally specific, coherent organisation and associated information. Already, 500 bits is a threshold for a 10^57-atom solar system, our effective universe for chemical-level interactions. 1000 bits is a much more generous threshold for an observed cosmos of 10^80 atoms: x2 on the number of bits, +23 orders of magnitude on the number of atoms. Time would scale linearly too, but a much older cosmos would manifest far more white dwarfs than we see and would show a very different pattern of star clusters breaking away from the main sequence; i.e., time is not a free variable. KF
kairosfocus
September 15, 2017 at 09:46 AM PDT
PS: What I'm really interested in, in the above post, is whether you believe that even if the universe were vastly larger and older, FSCO/I would still be extremely unlikely to arise through naturalistic means. For example, one might hold that regardless of how (finitely) large/old the universe is, the chance would be less than 0.000001% (substitute your favorite small positive number here).
daveS
September 15, 2017 at 08:14 AM PDT
KF, Is it true, then, that if resources were vastly greater than they are believed to be (but still finite), generation of FSCO/I by natural processes would be more likely? For example, if the universe were sufficiently larger and older, then the chances could be > 99%, say?
daveS
September 15, 2017 at 07:29 AM PDT
PS: We are in effect turning the 10^57 or 10^80 atoms available into effectors and observers running through the config space of 500 or 1000 bits at 10^12 - 10^14 times per second (think trays of coins, or use paramagnetic substances if you want something more "physical"), in a massive ensemble. The result is that, by dozens of orders of magnitude, we are unable to scan or sample more than a negligible fraction of the possibilities in 10^17 s or so. In short: massive haystack, needles necessarily exceedingly sparse, and resources grossly inadequate to do more than sample a negligibly small fraction. That rounds down to no effective search.
kairosfocus
September 15, 2017 at 07:24 AM PDT
EMH, I am highlighting that this is a real-world, finite, constrained context. Speculating on actual physical infinities may be entertaining, but it utterly lacks relevance. I also doubt that an actual physical, countable infinity of, say, atoms is possible, on Hilbert Hotel-type consequences. KF
kairosfocus
September 15, 2017 at 06:54 AM PDT
