
Dembski on design detection in under three minutes

From David Klinghoffer at Evolution News & Views:

We last checked in with Robert Lawrence Kuhn of the PBS series Closer to Truth as he interviewed physicist and Nobel laureate Brian Josephson, who said he was “80 percent” sure of intelligent design. (BOOM.)

These aren’t brand new interviews by Kuhn, but still very interesting – and concise. Now, submitted for your Labor Day enjoyment, here’s one, pointed out by a Facebook friend, with mathematician William Dembski. Dr. Dembski briefly defines the method of detecting intelligent design. It is, he says, a form of triangulation on the effects of intelligence, namely contingency, complexity, and specification. The last of those refers to the question of “Does it conform to some sort of independently given pattern?”

Kuhn, not an ID proponent as far as I know, shrewdly notes that ID doesn’t seek to “prove” the God of the Bible or any other gods, but it is consistent with what you’d expect from a Deity. I find that distinction to be stubbornly lost on many ID critics. More.

Of course it’s lost on them! Dealing with the realities of design in nature would be conceptual work. By contrast, anyone can run around shrieking through a loud-hailer and stirring up a base. And they can often get government to fund it.

Who knows, one of these days, the jig may be up.

See also: Data basic

Comments
EMH, codes are functionally specific, and the binomial distribution of bit strings forces a sharp peak of possibilities near 50-50. The first locks out the vast majority of possibilities for relevantly long strings (500+ bits) as candidates to be meaningful/functional in complex contexts. The latter means there is not much else to go to than the peak. Consequently, input strings picked at random [not just flat-even random] will overwhelmingly be gibberish and will cause a machine to fail; they will not be algorithmically functional. The outputs can be all over the map but are forced to mostly come from the same zone.

Yes, you might get some longer output strings depending on the architecture of the processor and how it behaves on crashing -- say, stuck spewing noise in a do-forever loop. And producing long strings of 1's and halting is almost utterly irrelevant; e.g., a code based on the length of strings of 1's is maximally inefficient and so secondarily uncommunicative. Going to the peak and defining a code on the diversity there is far more effective, as the world of computer and telecomms tech demonstrates. But the pool being pulled from is overwhelmingly gibberish.

We must not conflate what happens with, say, a 4- or 8-bit string that can code a hex number or an alphanumeric character with messages of reasonable complexity. You can make all 16 hex codes or all 256 8-bit strings meaningful, but when one chains them, syntax and semantics fail real fast. That's why random document exercises have maxed out at 19 - 24 ASCII characters, far short of the 73 needed for just 500 bits, or the 143 for 1,000. KF kairosfocus
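A minimal Python sketch of the peak-concentration claim above, using the thread's own 500-bit length; the 45-55% and 40-60% windows are illustrative choices, not figures from the comment:

```python
from math import comb

n = 500          # bit-string length, matching the thread's 500-bit threshold
total = 2 ** n   # all possible n-bit strings

# Fraction of strings whose count of 1s lies within 45-55% of n.
near_peak = sum(comb(n, k) for k in range(225, 276))
print(f"within 45-55% ones: {near_peak / total:.4f}")   # about 0.98

# Fraction in both tails, outside 40-60% ones (symmetric, so doubled).
tails = 2 * sum(comb(n, k) for k in range(0, 200))
print(f"outside 40-60% ones: {tails / total:.1e}")      # a few parts per million
```

The concentration near the 50-50 peak is exactly the binomial sharpness the comment appeals to; the separate claim that most strings near the peak are gibberish is a further, empirical point.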
@KF very interesting, I did not realize it was known that most bitstrings will not generate significantly longer outputs. I've been trying to figure this out for a bit. Would you know a formal proof of this fact by any chance? Or do you consider it obvious, since most 50/50 bitstrings are gibberish? The counter that sticks in my mind is that the busy beaver number for very short bitstrings is enormous. Short bitstrings are more probable than long bitstrings, so one could still argue it is likely to find a very expandable bitstring. Perhaps your point is the word you use, "message", implying that even if expandable bitstrings are easy to find, the expansion is still not something of value. EricMH
EMH, recall, the overwhelming majority of "words" for any realistic code will be constrained to come from the near-50-50 1/0 peak of the binomial distribution. This will be overwhelmed by gibberish for any circumstance of significant complexity. Turing Machine input strings that trigger outputs expressing significant additional messages will be maximally hard to find at random [here, I am not specifying a uniform distribution, just a reasonable one such as a flicker noise or pink noise pattern etc, say from a Zener noise source] or by blind mechanical necessity on the gamut of cosmos-level atomic resources. The point is, the search-resources challenge in the face of exponential runaway expansion of configuration spaces, further multiplied by the sharp peak in the binomial distribution, works to make finding sufficiently complex, functionally specific strings by blind chance and/or mechanical necessity maximally implausible. To the point where, should this appear to be happening, one would be well advised to look deeper to see the trick at work or the means by which intelligently directed configuration was inadvertently brought to bear. KF kairosfocus
EricMH, Ok, thanks, that clears it up. I was confused about how these compressible bitstrings were selected, but now I understand. Edit: And renders my #83 moot. daveS
@DS, per your first point, yes. By a pigeonhole argument, we can see that very few bitstrings can be compressed significantly. Therefore, by picking a bitstring according to a uniform distribution, we are most likely to pick an incompressible bitstring. This is the insight behind ID concepts like algorithmic probability, complex specified information, algorithmic specified complexity, and KF's FSCO/I. I call this picking a bitstring based on the output of a Turing machine.

However, we can select a bitstring based on the input to a Turing machine, by running bitstrings through a TM until one generates an enormous output. This output, since it is produced by a very small input, is highly compressible. Unlike the first scenario of selecting the output of a TM, by selecting the input it becomes very easy, in a probabilistic sense, to find bitstrings that are highly compressible, since even quite small bitstrings have enormous busy beaver numbers.

So, if we are restricted to a uniform distribution for our chance hypothesis over outputs of TMs, then inference to design works pretty well when we find a highly compressible bitstring, since such strings are so improbable. However, if we use a uniform distribution over inputs to a TM, then the mere fact that a bitstring is highly compressible is inadequate to infer design. This scenario does not invalidate the detection mechanics of CSI, ASC, or FSCO/I, since we didn't start with a uniform distribution over outputs. But it does mean we cannot go from compressible -> intelligently designed. An example of this happening in nature is with crystals, which are highly ordered, and consequently compressible, but do not indicate design. It also means that, in theory, we could end up with a lot of regularity in nature without intelligent intervention.

But, per KF's point, this thought experiment assumes nature has some kind of Turing machine, which seems implausible. Wolfram's work with cellular automata is an attempt to give nature a TM, by finding a very small cellular automaton that is Turing complete, and then assuming a cellular automaton structure is a plausible start to the universe. Of course, all of this gives up a lot of ground to the naturalist in the first place, since the mere fact that there is something rather than nothing is best explained by a self-explaining thing, which only God can be. EricMH
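The pigeonhole count in the first paragraph above has a standard quantitative form; a minimal sketch, with the sample values of k chosen purely for illustration:

```python
# At most 2^(n-k+1) - 1 bit strings have length <= n - k, so at most that
# many n-bit strings can be compressed by k or more bits. The fraction of
# n-bit strings compressible by >= k bits is therefore below
# 2^(n-k+1) / 2^n = 2^(1-k), independent of n.
def compressible_fraction_bound(k: int) -> float:
    return 2.0 ** (1 - k)

for k in (8, 16, 32):
    print(f"compressible by {k}+ bits: < {compressible_fraction_bound(k):.1e}")
```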
KF,
DS, it is to a certain extent tangential to the focal issue, yes. I am pointing out that if you black box it as an abstract processor and then feed it, its detection or generation of FSCO/I-rich strings would not be a case of mere blind chance and mechanical necessity at work. Think of a toddler speaking a novel sentence, that too is not blind, it is an intelligent process.
To the extent I understand EricMH's thought experiment, I would draw an analogy to something like SETI. Suppose I carefully design and build a radio telescope together with a device to convert pulses in the electromagnetic field to bitstrings. I then point this telescope at some irregular variable star, and find that the pulses (interpreted using ASCII encoding) spell out the owner's manual for an Abu Garcia Ambassadeur C4 reel. I think we both would conclude, with as close to absolute certainty as possible, that the fluctuations in the star's brightness were "designed" in some sense, regardless of the fact that the detector was also designed. daveS
DS, it is to a certain extent tangential to the focal issue, yes. I am pointing out that if you black box it as an abstract processor and then feed it, its detection or generation of FSCO/I-rich strings would not be a case of mere blind chance and mechanical necessity at work. Think of a toddler speaking a novel sentence, that too is not blind, it is an intelligent process. I note, too, that abstract possibilities are not to be conflated with what is credible or reasonable. In principle a tray of 500 coins tossed could by chance come up with the first 73 characters of this comment, but that is so search challenged that if that SEEMED to be happening, we would be warranted to look for a trick. KF kairosfocus
KF,
EMH, all, by design [the programming a Turing Machine executes is designed, as is the machine itself], which changes everything through impact of intelligence, knowledge, skill and purpose.
Pardon my interjection, but it seems to me that while the particular function is clearly designed deliberately, this thought experiment tests whether functional structures (bitstrings, ostensibly not designed) and hence FSCO/I could arise in nature. daveS
EricMH, A couple questions about #78, if you don't mind:
1. Finding compressible bitstrings is extremely difficult using a uniform distribution.
Does this mean that if you draw a bitstring (via the uniform distribution), it is very difficult to ascertain whether it is compressible? And does the first sentence in (2.) mean that running these bitstrings through a TM gives you a mechanical way to determine whether these bitstrings are compressible? That is, it allows you to quickly select candidates for "compressedness"? **** I also have one question about how this relates to the original scenario, where the TM halts if output is longer than the input. If this happens, does that mean the original bitstring was "compressed" and the output bitstring is an uncompressed version of it? daveS
EMH, all, by design [the programming a Turing Machine executes is designed, as is the machine itself], which changes everything through the impact of intelligence, knowledge, skill and purpose. My point -- and the general argument -- starts from a context where design is not on the table and the general issue of enough resources to mount an effective search of a relevant config space becomes material. Think here of a thin soup of chemicals in a lightning-struck small pond or a comet core or an undersea vent or the like. Yes, 500 bits for the sol system is not a lot of bits, but it already implies a space of 3.27*10^150 possibilities. The Sol system under relevant circumstances can sample about 10^110 or so states, not effectively different from no search, EXCEPT when guided by active information that comes from intelligence. KF kairosfocus
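A quick arithmetic check of the magnitudes above; the 2^500 figure is the comment's own, and the per-atom sampling figures (10^57 atoms, 10^17 s, 10^12-10^14 events per atom per second) are quoted elsewhere in this thread:

```python
from math import log10

config_space = 2 ** 500
print(f"2^500 ~ {config_space / 10**150:.2f} * 10^150")   # 3.27 * 10^150

# Generous bound from the thread's per-atom figures: 10^(57+17+14) states.
samples = 10 ** (57 + 17 + 14)
print(f"fraction sampled ~ 10^{log10(samples / config_space):.0f}")

# Even at the looser 10^110 figure quoted above, the shortfall is vast.
print(f"fraction sampled ~ 10^{log10(1e110 / config_space):.0f}")
```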
@KF I certainly appreciate your responses, as you have thought about this to a great extent. I'll lay out the bigger point I'm trying to get at, which will require a bit of redundancy on my part.

1. Finding compressible bitstrings is extremely difficult using a uniform distribution. In this scenario, the existence of highly compressible bitstrings easily indicates non-chance origin and the inference to design is straightforward.

2. Finding compressible bitstrings is much easier when feeding them through a Turing machine. This is the insight behind Solomonoff induction, which states the best predictor for a bitstring is its elegant program. Such a scenario muddies inference to design, as the CSI always equals zero when formulated as I(X)-K(X), since I(X) = K(X) in this case.

Given that the physical universe is much more like scenario 2 than 1, the case for design is diminished. 500 bits is not a lot, but it may be enough for an ultra-expandable bitstring, since the busy beaver number even for small numbers of bits quickly becomes larger than anything in our universe: https://en.wikipedia.org/wiki/Busy_beaver#Exact_values_and_lower_bounds

So, it is not necessarily improbable that within a computational setting we may achieve the highly compressible bitstrings required by CSI with little probabilistic resources. EricMH
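A toy illustration of the I(X)-K(X) quantity in point 2, with zlib's compressed length standing in for the uncomputable K(X); that substitution is mine, not the thread's, and since zlib only upper-bounds K, the result is a lower bound on I(X)-K(X):

```python
import os
import zlib

def asc_estimate_bits(data: bytes) -> int:
    """I(X) - K(X), with I(X) = 8 * len(data) bits under a uniform byte
    model, and K(X) crudely approximated by zlib's compressed length."""
    return 8 * len(data) - 8 * len(zlib.compress(data, 9))

print(asc_estimate_bits(os.urandom(1000)))  # random bytes: near zero or negative
print(asc_estimate_bits(b"01" * 500))       # highly regular: large and positive
```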
EMH, we are looking at a challenge of utter inadequacy of resources to mount a search significantly different from no search, which is quite robust against issues on uniformity of distributions of possibilities. Where, tightness of configuration required to function imposes an islands-of-function pattern, as I discussed by talking about shaking up reel parts in buckets -- and the issue of scattering the parts across the Sol system. Further to this, a search in effect picks a subset from the population of possible configurations. This implies that a search for a golden search is a higher-order search from the power set of the configuration space of size N possibilities, 2^N. To expect to find a golden search is tantamount to saying the set-up has been fine-tuned for success, raising all sorts of issues that point straight to design as best explanation, e.g. notions that protein-enzyme-D/RNA life was written into the laws of the cosmos. Where, design of course is there at the outset on pondering a Turing machine. KF PS: I am in the midst of monitoring a fast-developing political crisis here, which compounds other issues already on the table. kairosfocus
KF, My mistake---I forgot about the 'computational resources' part that sets the threshold. daveS
@KF this sort of analysis works well assuming a uniform distribution over configurations, and only a small percentage are functional. The point of the TM example is to show a small source that gives a non-uniform distribution over configurations. Being small, it could feasibly come from a uniform source with small probabilistic resources, and then in turn generate a highly non-uniform distribution that could give rise to fishing reels. EricMH
DS, recall the point of FSCO/I is to set a bar to such a level in a setting (effectively the observed cosmos) that false positives are utterly unlikely, accepting false negatives. That said, the information content of such a reel is so high that it would easily surpass any reasonable threshold. Megabits, here -- start with gears, screws, plates etc then move up to specification on orientation, assembly and coupling. 2^1000000 ~ 9.9 *10^301029 . . . a mind-bogglingly large number of possibilities. KF kairosfocus
PS: For example, is it true that the Abu Garcia Ambassadeur C4 reel could be concluded to have FSCO/I in any universe, of any physical size (provided it's large enough to contain the reel itself)? daveS
KF, Does this mean that whether a particular structure has FSCO/I depends on the number of atoms in the universe? I thought that question could be decided "locally". daveS
DS, 500 and 1000 bits were set on our observed sol system and cosmos. Relative to 10^57 or 10^80 atoms and 10^17 s at 10^12 to 10^14 rxns/s per atom, the scope of search is such that 2^500 or 2^1000 are comfortably beyond reasonable search. Were the sol sys or observed cosmos much bigger, the thresholds would be different; they are not pulled out of a magician's hat. As it is, we pretty much know sol sys scope and that of the observable cosmos. So, those numbers are relevant, not some hypothetical. I just pointed out the linear vs exponential growth issue to underscore that search challenge readily outgrows scope of search resources. KF kairosfocus
KF, Yes, the number of possible configurations would increase as (at least) an exponential function of the number of atoms, I presume. I guess I would conclude also that the chance of breaking through the 500-bit (or 1000-bit) threshold would increase as the number of atoms increases. daveS
DS, numbers, countable numbers. Try the number of atoms -- thus the number of possible chem rxn time states in 10^17 s -- and contrast how, as bit string length n goes up, the space of configurations goes up as 2^n. KF kairosfocus
@ET, if nature can produce humans, then the answer to your question is yes. But, the response is: nature cannot create a relatively simple thing such as a car or plastic, so why expect it to create the much more complex thing (humans) that is necessary for the creation of cars and plastic? EricMH
Does the chance of a car popping into existence increase as you consider arbitrarily large and old universes, or does it remain either zero or extremely small, no matter the size and age?
It remains at zero because throwing time around doesn't solve anything ET
KF, In #63, what does n represent? daveS
ET, I don't know, which is why I'm asking the question in #62. Does the chance of a car popping into existence increase as you consider arbitrarily large and old universes, or does it remain either zero or extremely small, no matter the size and age? I can't answer that. daveS
daveS- Do you think that nature can produce an automobile given enough time? I say there isn't any chance of that happening. Heck, nature can't even produce plastic. ET
DS, scope of universe is linear [x n], scope of possibilities in bit string length is exponential [x 2^n]. The difference will lie in at what threshold it becomes utterly implausible for blind search to hit on functionally specific, coherent organisation and associated information. Already, 500 bits is a threshold for a 10^57-atom solar system, our effective universe for chemical-level interactions. 1000 bits is a much more generous threshold for an observed cosmos of 10^80 atoms. x2 on number of bits, x20 on orders of magnitude of numbers of atoms. Time would scale linearly too, but a much older cosmos would manifest far more white dwarfs than we see and would have a very different pattern of star clusters breaking away from the main sequence, i.e. time is not a free variable. KF kairosfocus
PS: What I'm really interested in the above post is whether you believe that even if the universe were vastly larger and older, then FSCO/I would still be extremely unlikely to arise through naturalistic means. For example, one might hold that regardless of how (finitely) large/old the universe is, the chance would be less than 0.000001% (substitute your favorite small positive number here). daveS
KF, Is it true then, if resources were vastly greater than they are believed to be (but still finite), then generation of FSCO/I by natural processes would be more likely? For example, if the universe were sufficiently larger and older, then the chances could be > 99%, say? daveS
PS: We are in effect turning the 10^57 or 10^80 atoms available into effectors and observers running through the config space of 500 or 1000 bits 10^12 - 10^14 times per second (think, trays of coins or use paramagnetic substances if you wish to seem more "physical"), in a massive ensemble. The result is, by dozens of orders of magnitude, unable to scan or sample more than a negligible fraction of the possibilities in 10^17 s or so. In short, massive haystack, needles necessarily exceedingly sparse, grossly inadequate resources to more than sample a negligibly small fraction. Rounding down to no effective search. kairosfocus
EMH, I am highlighting that this is a real-world, finite, constrained context. Speculating on actual physical infinities may be entertaining but utterly lacks relevance. I also doubt that an actual physical, countable infinity of say atoms is possible, on Hilbert hotel type consequences. KF kairosfocus
@KF, that's very interesting. Would you mind explaining further why infinite resources cannot generate FSCO/I? Couldn't it be part of the cycle you mention? EricMH
EMH, kindly cf. 53 above; resources relative to configuration space i/l/o the constraint of coherent functionality is a highly material factor. And no, an implicit quasi-infinite world in which available resources are not a constraint is not relevant. Indeed, the answers you seek have been there in stat mech for 100+ years: an ensemble of unlimited size and duration will circulate through every possible state, in a cycle. KF kairosfocus
@daveS, yes, Chaitin's constant is completely incalculable. It is defined over the infinite set of bitstrings, as the proportion of them that halt. Halting cannot be determined in such a general scenario due to the halting problem. To your point, yes, if we know a priori the proportion of functional bitstrings, then we do not need to worry about Chaitin's constant. So, in a biological scenario this may not be an issue. In a broader setting this may still be a problem, and that leaves the background for the biological scenario still ill-defined, with a potentially naturalistic source of FSCO/I. In which case, FSCO/I may be adequate to eliminate a particular proximate naturalistic theory, but insufficient to eliminate a global naturalistic theory. Which still leaves the door open to a proximate naturalistic cause, so FSCO/I is not sufficient to rule out methodological naturalism, which there is great practical motivation to maintain due to its great historical success. EricMH
EricMH,
yes the functional bit strings halt. Part of the problem is that we cannot know the proportion that halt, which is an incalculable number known as Chaitin’s constant. Further, of those that do halt, it is unclear what proportion expand vs contract when run on a TM.
Interesting---I've read snippets about Chaitin's constant in the past, but I don't know enough about the field to recognize it. I take it one cannot even find useful bounds for these numbers? I wonder if this poses a problem for the thought experiment though? Suppose for the particular TM you chose, 90% (or even 0.9%) of bitstrings are functional. I suspect that an ID proponent might respond by saying that such a high percentage is completely unrealistic and could never occur in situations where FSCO/I is actually applied, for example in biology. daveS
@daveS, yes the functional bit strings halt. Part of the problem is that we cannot know the proportion that halt, which is an incalculable number known as Chaitin's constant. Further, of those that do halt, it is unclear what proportion expand vs contract when run on a TM. @KF, as I mention to daveS, an issue is that we cannot know the proportion that are functional, i.e. halting programs. EricMH
EMH, the problem of the scale of the cosmos is central to the design inference. Further, by the very nature of bit strings, the config space of possibilities is a sharply peaked binomial distribution centred on 50-50 1/0, with the overwhelmingly dominant group being gibberish. FSCO/I is necessarily sparsely distributed because of the highly specific configuration requirements to attain function. To see this, imagine a 6500 C3 reel disassembled in a bait bucket and shaken up. The number of non-functional configs will utterly overwhelm the functional ones. And that's for a clumped case. Include scattering across the Sol system and you will see the point with even greater force. KF kairosfocus
EricMH, Thanks, that's helpful. Just to confirm that I understand the concept, the "functional" bit strings are those that cause the Turing machine to halt, is that correct? Another basic question---considering the collection of TM's you describe in #48, those that halt only if the output is sufficiently longer than the input---does the proportion of bitstrings that cause halting vary? That is, for some of the TM's, is it relatively easy to find a bitstring which causes halting, while for others, it's much harder? Basically I'm asking, assuming my understanding in the first paragraph is correct, whether function among bitstrings can be rare or common, depending on the TM. daveS
@KF, I see, so as long as the required Turing machine is too complex for the universe's probabilistic resources, then it is not a valid counter. However, if there were enough probabilistic resources to create the Turing machine, would that present a problem? One hypothetical way is some sort of multiverse scenario, or appealing to Turing-complete cellular automata such as Rule 110, which is fairly small. Of course, the surrounding machinery that makes the cellular automaton work is probably not very small.

@daveS, yes, any Turing-complete language will do. Let's say the language is Python, and I eval randomly assembled ASCII strings. Some of those strings are programs. Some of the programs halt. Some of the halting programs will output an ASCII string that is longer than the original program. We would also need to use a dovetailing procedure, since we do not know a priori which ASCII strings halt, and we do not want to get stuck. So, each time we add a new ASCII string to our pool, we eval each ASCII string in the pool one time step, unless the string has been eliminated already by one of our criteria. Eventually, this procedure is guaranteed to find the ASCII strings that halt with longer outputs. This whole process can be written as a fairly short Python program. EricMH
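A runnable sketch of that dovetailing loop. The interpreter below is an invented stand-in (a toy that "runs" for a number of steps derived from the input and then emits a toy output), not a real eval of Python source, so the example terminates quickly; the pool-and-one-step-each structure is the point:

```python
from collections.abc import Iterator

def run_machine(bits: str) -> Iterator[str | None]:
    """Toy stand-in for a machine run: yields None once per simulated
    step, then yields a final 'output' string and stops."""
    steps = int(bits, 2) % 50 + 1
    for _ in range(steps):
        yield None                         # still running
    yield bits * (len(bits) % 5 + 1)       # toy output on halting

def dovetail(min_expansion: int = 2) -> tuple[str, str]:
    """Enumerate bitstrings; each round, add one new machine to the pool
    and step every live machine once; return the first (input, output)
    pair whose output is at least min_expansion times longer."""
    pool: list[tuple[str, Iterator[str | None]]] = []
    n = 0
    while True:
        n += 1
        bits = format(n, "b")
        pool.append((bits, run_machine(bits)))
        for candidate, machine in pool:
            result = next(machine, None)
            if result is not None and len(result) >= min_expansion * len(candidate):
                return candidate, result

print(dovetail())
```

Because no machine is ever run to completion before the others get a turn, a non-halting candidate cannot stall the search, which is the whole reason for dovetailing.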
EricMH, I'd like to consider your thought experiment, but this is over my head. Could I rephrase it in terms of a computer program (which I'm slightly more familiar with), such as Perl or Common Lisp? That is, I assume we have created a program which takes as inputs finite-length bit strings, and outputs another bit string, and ends only if the output's length exceeds the input length by some prespecified amount. My sketchy understanding is that any Turing machine can be simulated by a program in either of these languages (and vice versa), so hopefully this reformulation causes no problems. daveS
EMH, "I create a Turing machine . . . " itself fatally undermines the claim. KF PS: If I were to search the config space for 500 bits, I would find EVERY meaningful 72 letter ASCII string. The problem is the search resources of the Sol system are grossly insufficient for such on reasonably available time. This is similar to the statistical reasoning behind the 2nd law of thermodynamics. kairosfocus
@KF, here's a counter argument that non-intelligent processes can create FSCO/I. I create a Turing machine such that it only halts if the length of the output is longer than the length of the input by a large enough amount. Then I feed it random bitstrings until it halts on one. This Turing machine is guaranteed to halt, since highly compressible bitstrings exist, and will happen upon one sooner or later. Since the output bitstring is highly compressible, it will have a simple description, despite being a long bitstring. Such a bitstring seems to have large amounts of FSCO/I, yet it is produced purely through chance and necessity. What's wrong with this thought experiment? EricMH
RVB8, 36: It is -- long since, sadly -- patent that you think rhetoric, accusation, word-twisting and strawman mischaracterisations can substitute for sober, sound discussion as meeting of serious minds:
You do realise that biological scientists refer to ‘information’ purely as the possible phenotypes produced by DNA. They don’t actually misuse the term as grossly as ID has always done, since the golden age of Dembski’s posts. Scientists also talk of ‘design’ in nature, without in any way conflating their meaning with ID’s intentional designer tinkering.
First, Shannon's metrics, strictly, are about information-carrying capacity. As his context was things like telephony, that is utterly unsurprising. If you would humble yourself enough to read Section A of my always-linked (through my name), you would find enough food for thought to correct the misperceptions beneath the accusations presented as fact that I just clipped. For instance, information proper has to do with coherent configurations that are meaningful and often functional in ways depending on an underlying organisational pattern. There is a world of difference between a pile of car parts and a working auto.

Such functional organisation can be reduced to its information content by designing a description language that pivots on a sophisticated version of the old twenty-questions, animal-vegetable-or-mineral game. By imposing a structured pattern of Y/N questions of sufficient length, essentially any entity can be specifically described. That's what AutoCAD etc. do, e.g. the wiremesh description of a gear, then the exploded-view description of, say, an ABU 6500 C3 reel; which I have used for years as an iconic case of understandable functionally specific complex organisation and associated information, FSCO/I for short. Of course, such a file is measured in binary digits, and function based on coherent organisation can be readily evaluated.

In short, contrary to far too much selectively hyperskeptical, stubbornly dismissive and denigratory rhetoric on the part of objectors to the design inference, FSCO/I, the functionally observable form of complex specified information, is real. Further, simply observing something like the functional organisation of a living cell and of the biochemical reaction network will immediately demonstrate to all but those in willful denial its relevance to biology. I have long since pointed out the comparison of a petroleum refinery's piping and instrumentation diagram with the reaction network of the living cell -- which is often posted as a wall-sized chart in university biochem departments.

It is easy to show that beyond a reasonable threshold of 500 - 1,000 bits, no blind search on the gamut of the Sol system [~10^57 atoms, 10^17 s, 10^12 - 10^14 chemical-level rxns/atom/sec], or the observed cosmos [~10^80 atoms, 10^17 s], is credibly likely to succeed in finding the deeply isolated islands of functionally coherent configurations in the space of lumped or scattered configurations. That space of possibilities is 10^150 - 10^301, and the scope of feasible search is dozens of orders of magnitude below the thresholds. Nor is a search for a golden search that gets around that reasonable: searches are subsets, and the space of searches for a config space of scale n comes from its power set, of order 2^n, where n starts at 10^150.

So, it is utterly unsurprising that in trillions of directly observed cases the ONLY observed cause of FSCO/I is intelligently directed configuration, i.e. design. Where also, we may profitably observe an entity and its FSCO/I, reverse-engineering it to see how it comes to have functional coherence. That is, we may examine a design based on observing an entity closely. An entity observed to be beyond the FSCO/I threshold then demands explanation by way of credible best explanation. As a result, we may observe -- a level-2 observation -- that the only empirically and analytically plausible actually observed cause of FSCO/I is design.
Design: here, a verb describing a two-phase process -- describing the functional configuration in a description language amenable to construction, then actual construction under control of that FSCO/I. Further to this, designs are inherently purpose-driven, i.e. constrained by performance targets that guide functionally coherent contrivance. This leads to the inference to design as an empirically warranted causal process on noting FSCO/I, by way of inference to best current explanation on tested, reliable sign.

Onward discussion as to the observed cause of designs is a third-order question: designs imply purpose, and intent is a characteristic of agency. That is, designs point to designers, but the design by itself may not allow us to infer identity beyond adequate capability.

So, on fair comment: the above imagined difficulties and projections of incompetence or worse are artifacts of stubborn resistance to cogent evidence and linked prudent, rational, provisional inference -- inference to best current explanation. Which is the sort of inference science can attain to, especially when seeking to reconstruct the inherently unobservable deep past of origins. We were not there; at best we can seek to provisionally reconstruct based on its evident traces. So, it is high time to drop the stubborn rhetoric of resistance, improper dismissal and denigration, allowing FSCO/I to speak for itself. KF kairosfocus
EricMH, Thanks for the link; that looks like a fascinating paper. If we're talking about the NP Hardness Assumption, then that does seem plausible to me. daveS
@daveS, computer scientist Scott Aaronson proposed the "NP Hardness Assumption" which states "NP-complete problems are intractable in the physical world." https://www.scottaaronson.com/papers/npcomplete.pdf EricMH
rvb8:
They don’t actually misuse the term as grossly as ID has always done,
More evidence-free trope ET
rvb8:
They are talented entertaining writers of the calibre of Dawkins, Coyne, and Neil Shubin etc. All world class, respected theorists and experimenters.
And yet not one of them can support the claims they make with respect to evolution. That means their "talent" is in their story-telling and not science. And that is why they are your buddies- they are also self-deluded story-tellers who couldn't support their claims scientifically if their lives depended on it. ET
OT https://www.sciencedaily.com/releases/2017/09/170911122628.htm es58
EricMH, The first question that comes to my mind is, how do you know you can always construct a computer using any such physical process? I guess it's obvious that if you could construct a physical computer capable of computing a non-Turing-computable function, then there would have to be physical processes which could not be "modeled" with a Turing machine, but the converse is less clear to me. Has anyone written this up and published it? I see a few semi-related things on various versions of the Church-Turing Thesis, but nothing quite like the quote I posted above. daveS
@daveS, if some physical process could not be modeled by a Turing machine, then we could build computers using this physical process to compute the uncomputable. So far all such claims turn out to be snake oil. EricMH
@rvb8, the proof shows DE cannot increase mutual information between DNA and phenotypes, if DE is not directed towards said mutual information. EricMH
It is one thing to 'describe' a process in relation to a Turing machine.... quite another to explain it. Seems that the fearful ID opponents are grasping once again....and rational thought eludes them....wait...rational thought can't exist ... Trumper
EricMH,
... all physical processes can be described by a Turing machine.
Is this true? This seems like an extremely strong claim. Do you have a source which explains this in detail? daveS
EricMH, have you ever heard the term 'pseudo-scientific jargon'? You do realise that biological scientists refer to 'information' purely as the possible phenotypes produced by DNA. They don't actually misuse the term as grossly as ID has always done, since the golden age of Dembski's posts. Scientists also talk of 'design' in nature, without in any way conflating their meaning with ID's intentional designer tinkering. It appears that before you publish this gold, you should understand what the science community will make of it. The closest word I can think of that scientists would use to describe your landmark work would be 'gobbledygook'. rvb8
ET @32, Heh:) I, 'lash out'? Good one:) My 'buddies', BTW, are not my 'buddies'; they are merely distinguished, peer-reviewed scientists of world renown. I enjoy their comfortable, easy style in explaining complex topics such as non-coding DNA, and finding transitional fossils, atavisms, and poor design in nature etc. They are talented, entertaining writers of the calibre of Dawkins, Coyne, and Neil Shubin etc. All world-class, respected theorists and experimenters. If you choose to call them my 'buddies', I am flattered, thank you. rvb8
Here's a proof that undirected processes cannot create mutual information between a creature and some organ. Let e = eye, c = chimp, h = human.

If chimps have eyes, then there is mutual information between eyes and chimps, I(e;c) > 0. On the other hand, if chimps do not have eyes, then the mutual information is zero, I(e;c) = 0.

f(.) is a function representing some process that transforms random variable X into random variable Y, f(X) = Y. f is undirected towards eyes, so knowing about eyes tells us nothing about what f produces. Premise: H(Y|X) = H(Y|X,e).

From this we can show that f will not increase the mutual information between creatures and eyes. Substituting chimps for X and humans for Y, we want to find out how f(.) impacts the evolution of eyes as we progress from chimps to humans. The mutual information between eyes, and chimps and humans, can be expanded in two ways:

1. I(e;c,h) = I(e;c) + I(e;h|c)
2. I(e;c,h) = I(e;h) + I(e;c|h)

From the Premise, I(e;h|c) = H(h|c) - H(h|c,e) = 0. Thus, we know I(e;c) = I(e;h) + I(e;c|h), and consequently, I(e;c) >= I(e;h).

So, an undirected process, such as Darwinian evolution (DE), cannot increase mutual information between creatures and organs. This means DE cannot create eyes. At best, it can persist eyes from creature to creature, and at worst it will eliminate them. DE cannot explain the origin of any organs, or any other sort of thing a creature may have, unless DE is directed towards that thing, in which case DE becomes teleological, and thus ceases to be DE.

But this proof is more than just a refutation of DE. It shows that no undirected process, which all materialistic processes are, can create mutual information. It also establishes what a telic law must look like. Whatever these telic laws are, they must have things like eyes, brains, fingers, etc. built into them from the beginning. EricMH
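A numerical spot-check of the conclusion I(e;c) >= I(e;h), for a toy joint distribution in which h depends on c alone (the premise above); both the joint distribution and the noisy map below are invented for illustration:

```python
from math import log2

p_ec = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}  # toy p(e, c)
p_h_given_c = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}     # noisy f: c -> h

def mutual_info(joint):
    """I(A;B) in bits from a dict {(a, b): p(a, b)}."""
    pa, pb = {}, {}
    for (a, b), p in joint.items():
        pa[a] = pa.get(a, 0.0) + p
        pb[b] = pb.get(b, 0.0) + p
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Build p(e, h) by pushing p(e, c) through p(h | c); h never sees e.
p_eh = {}
for (e, c), p in p_ec.items():
    for h, q in p_h_given_c[c].items():
        p_eh[(e, h)] = p_eh.get((e, h), 0.0) + p * q

print(f"I(e;c) = {mutual_info(p_ec):.3f} bits")   # ~0.278
print(f"I(e;h) = {mutual_info(p_eh):.3f} bits")   # ~0.133, never larger
```

This is the data processing inequality in miniature: processing c without reference to e cannot raise the information h carries about e.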
@Origenes, materialism means everything is reducible to physical laws. Physical laws cannot produce information. So, if these telic laws can produce information, they are necessarily non-physical. Another way to say this, there is no physical process that provides a halting oracle, as all physical processes can be described by a Turing machine. However, something that creates information must have a halting oracle. Therefore, whatever creates information must be an oracle machine, and consequently non-physical. These purely physical, telic aliens must not only be from another planet, but they must be from another realm where the laws of reality are fundamentally different from our own. EricMH
rvb8- No one cares what you say, as you have never supported any bit of your trope. You think that if you can say it, that is evidence. You and your "illustrious company" are just a bunch of bully wannabes who couldn't support their claims if their lives depended on it, so you have to lash out at people who call you on your BS. You and yours are the intellectual cowards of the universe. Congratulations. ET
ID is compatible with aliens as the intelligent designers of earthly life.
I agree with Origenes. Biological ID can only be about the appearance of life on earth. It certainly can’t be about detecting design in the appearance of some unknown life form in some unknown part of the cosmos. The problem for ID critics is that the biological evidence of ID is entirely consistent with both theism and atheism, just as ID proponents have acknowledged for years on end. This places the social assault on physical evidence into perspective. Design critics don’t attack ID because it actually forces them to loosen their grip on atheism; they have no intention of that. They do it because science cannot be allowed to be consistent with theism, much less support it. Having had to give up on evidence, the remaining veneer of science is everything to them. Meanwhile, if ID proponents want to hold feet to the fire, they should remain scrupulous themselves about what ID evidence actually supports. It matters. Upright BiPed
EricMH @28 Maybe on the alien designers' planet Thomas Nagel's natural telic laws are in operation. We do not have those here, as far as we know, so information theory and ID haven't taken them into account. Let's keep an open mind. Origenes
kairos @24, and ET @27 believe that by continual insults and ad hominems, they can suggest ID is plausible. ET, I don't mind being a moron (as measured by you, at least), as I am in some very illustrious company; you, on the other hand, have some questionable intellectual mates. rvb8
Aliens could have designed life on earth, but insofar as they are purely material, the information that comes from the aliens cannot have been created by the aliens. That is just the logical conclusion of the data processing inequality from information theory. Or, take the expectation of CSI and it will always be non-positive. EricMH
OOL study and Evolution are studied by scientists as separate fields, but both fields attack these subjects with rigour, deep curiosity, and passionate ferocity.
And after all of that you still have NOTHING! So either it didn't happen or you and yours are complete morons. ET
rvb8
yes, aliens are possible, but who designed the aliens?
Science proceeds one step at a time. But thank you for continuing to expose your scientific illiteracy.
Here’s a question for ET, who appears to want to go down punching, by continually denying the clear link between ID and Christianity.
Try to make that link or shut up already
Why is it that whenever a State or school district in the US wants to alter the public school science curricula by introducing ID language or ‘Teach the Controversy’ (Heh:) language, it is always (not sometimes) suggested by Christian religionists?
Evidence please. And by your "logic" evolutionism is an atheistic thing, which would also fly in the face of the US Constitution. ET
RVB8-- "Support SETI"? Who has opposed it, at least on a voluntary or ad hoc level? Granted, there would be opposition to creating a highly funded bureaucracy, but this opposition wouldn't be philosophical; it would rest on the quite reasonable suspicion that it is a corrupt tax scam. tribune7
RVB8, You seem to imagine that if you repeat a talking point in defiance of and disregard for truth long enough and often enough, it will become plausible -- at least to your intended audience. Most likely, on the theory that most people lack the imagination to see that big lies are possible and all too often real. In short, you have . . . again . . . resorted to a notorious propaganda tactic, and this is duly noted. KF kairosfocus
rvb8 @22 Unresponsive. Again, ID is compatible with alien designers of unknown origin. Origenes
Origenes @21, then start looking for them. Support SETI; who knows, when they finally confess their involvement in design, we can then ask them about their own origins. rvb8
rvb8 @18
rvb8: yes, aliens are possible, but who designed the aliens?
Good question. ID does not have the answer. Scientific data concerning aliens is lacking and ID is, at this point, forced to remain neutral wrt their origin.
rvb8
Origenes: “ID should be allowed to remain neutral on the origin of these aliens.”
Why? It only strengthens the accusation that ID lacks curiosity.
Neutrality on the origin of aliens is not due to lack of curiosity, but due to lack of data. Origenes
boru @19, of course the identity of the Designer is not part of ID. That would be to admit supernaturalism, as the Designer would, needs must, be above and beyond His creation, the pure definition of a deity. By avoiding who the Designer is, and the more perplexing question of who designed the Designer, ID denies its religious antecedents. But boru, no one is fooled, least of all the disingenuous posters here. rvb8
The identity of the Designer is not part of the theory. boru
Origenes @16, yes, aliens are possible, but who designed the aliens? "ID should be allowed to remain neutral on the origin of these aliens." Why? It only strengthens the accusation that ID lacks curiosity. OOL study and Evolution are studied by scientists as separate fields, but both fields attack these subjects with rigour, deep curiosity, and passionate ferocity. And you come along and say, "well, we're not really interested in the origins of alien designers because that would lead to an infinite regression which would point to an ultimate designer, God. And we are desperate to keep God out of the design inference because that would expose our religious motivations." Deceit, disingenuousness, duplicitousness, hypocrisy, thy name is ID. rvb8
Here's a question for ET, who appears to want to go down punching, by continually denying the clear link between ID and Christianity. Why is it that whenever a State or school district in the US wants to alter the public school science curricula by introducing ID language or 'Teach the Controversy' (Heh:) language, it is always (not sometimes) suggested by Christian religionists? ET, thoughts? rvb8
EricMH @14 ID is compatible with aliens as the intelligent designers of earthly life. ID should be allowed to remain neutral on the origin of these aliens. Origenes
EMH, there are those who will argue till the cows come home, that our brains are design-capable, and that they are produced by blind chance and mechanical necessity cumulatively acting over thousands of millions of years. Of course, they have no evidence of such having the capability to search relevant configuration spaces successfully, but that is their faith. Only, they don't realise that this is a huge faith-claim. KF kairosfocus
That being said, ID does require an entity that can create information, which nothing purely material can do. So, ID minimally implies an immaterial, yet causally effective plane of existence, i.e. more than just information and math since information and math cannot do anything. EricMH
*crickets* EricMH
Bob O'H- You don't know what you are talking about. In "The Design Revolution", page 25, Dembski writes:
Intelligent Design has theological implications, but it is not a theological enterprise. Theology does not own intelligent design. Intelligent design is not an evangelical Christian thing, or a generally Christian thing or even a generally theistic thing. Anyone willing to set aside naturalistic prejudices and consider the possibility of evidence for intelligence in the natural world is a friend of intelligent design.
He goes on to say:
Intelligent design requires neither a meddling God nor a meddled world. For that matter, it doesn't even require there be a God.
Now what?
First, by any reasonable definition of the term, intelligent design is not "religion".- page 441 under the heading Not Religion- Signature in the Cell- Meyer
ET
ET @ 10 - I, at least, wouldn't call the author of Intelligent Design: The Bridge Between Science & Theology. a moron. Bob O'H
Does ID believe that if you write that ID and Christianity have no clear connection enough times
There isn't any connection between ID and Christianity. Only morons on an agenda try to make one. ET
Josephson’s a bit of a crank isn’t he?
Not when compared to materialists ET
Is there a difference between what a theory states and what its supporters state? Or are theory and supporters indistinguishable? EricMH
Bob O'H @2, wow, that is certainly a hell of a lot of 'woo' the man accepts. So, Kuhn, who is not an ID proponent, suggests he can make the distinction between a 'Deity' for ID, which he finds acceptable, and not that ID is in any way related to the Judaeo-Christian tradition. How many gyrations of the spine did that take him? All he has to do is come here to UD and observe how the KJV of the Christian Bible is used as source material and a reference text, to disabuse him of that notion. Does ID believe that if you write that ID and Christianity have no clear connection enough times, then it will become real? rvb8
I can levitate birds but no one seems to care. - Steven Wright
Heartlander
> Am I a quackpot? most definitely! :D Mung
I can easily demonstrate psychokinesis by levitating my hand through only the power of my mind. Am I a quackpot? EricMH
Bob O'H @ 2: Read your wife's article. Found the following interesting: "Yet both men have ended up in the same place: they have abandoned rationality and the scientific method to advocate boneheaded fantasies." Using that standard, Lawrence Krauss and Richard Dawkins must own several of these quackpottery awards. Truth Will Set You Free
Josephson's a bit of a crank isn't he? Ah, yes. My wife wrote about him and as I recall he wasn't very happy. Bob O'H
This article from Evolution News -- https://evolutionnews.org/2017/09/researchers-highlight-logistics-nightmare-facing-chromosome-controls/ -- would seem to end the debate over whether or not there is design in nature, in overwhelming favor of ID. And it's part of a continuing stream of discoveries coming out of -- ready for this -- science properly reported. The Dawkinses and Coynes of the pseudo-science world seem so stuck in the dark ages of the ivory towers of lifelong academia. DonJohnsonDD682
