Uncommon Descent Serving The Intelligent Design Community

ID Predictions: Foundational principles underlying the predictions proposed by Jonathan M. and others.


PART I: BASIC PREMISES

Many predictions of ID flow from two underlying hypotheses, both of which are open to scientific investigation and refutation. If you miss these, other ID predictions may not make sense, since many are built directly on them. I believe that much of the puzzlement over ID predictions results from unfamiliarity with these two often unspoken premises.

I consider the first of these to be a basic hypothesis of ID, which is so obvious to ID researchers that they often forget to make explicit mention of it. It is,

1. Creating* integrated, highly functional machines is a difficult task.

This statement seems obvious to many engineers and others who construct complex systems for a living. As an informal statement, it is fairly straightforward. Yet as stated, it is not mathematically precise, since I have not defined “integrated, highly functional machines” or “difficult task.” ID researchers have long worked to pin down precisely what separates an integrated, highly functional system from one that is not, attempting to define these terms in a quantifiable way. It is a worthwhile research project and their efforts should be applauded, even if those efforts have met with mixed success. I won’t argue here whether or not they have succeeded, since most of us would agree that if anything represents an integrated, highly functional machine then surely biological forms do. We can also agree that simple things such as rocks and crystals do not. Where exactly the cutoff point lies on that continuum is open to investigation, but hopefully it is reasonable to agree that humans, nematodes, bacteria and butterflies belong to the group labeled “integrated and highly functional machines.”

Now, this hypothesis is contingent, not necessary, since it might have been the case that life (functional machines) could arise easily. Imagine a world where frogs form from mud, unassisted, and where cells coalesce from simple mixtures of amino acids and lipids. We are unaware of anything that logically prevents the laws of nature from being such that they would produce integrated, functional machines quickly, without intelligent guidance. Creating life could have been a simple task. We can imagine laws of nature that allow for life forms to quickly and abundantly self-organize before our eyes, in minutes, and can imagine every planet in the solar system being filled with an abundance of life forms, much like sci-fi novels of decades past envisioned, arising from natural laws acting on matter.

Yet the universe is not like this. Creating life forms (or writing computer programs, for that matter) is a difficult task. The ratio of functional configurations to non-functional ones is minuscule. Before the advent of molecular biology, it was believed that life was simple in its constitution, cells being seen as homogeneous blobs of jelly-like protoplasm. If we believe that life forms are simple, then it becomes plausible that a series of random accidents could have stumbled upon life.
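To make the "minuscule ratio" claim concrete, here is a minimal back-of-envelope sketch in Python. The alphabet size and sequence length correspond to a modest protein, but the functional fraction is a purely hypothetical placeholder chosen for illustration, not a measured value:

from math import log10

ALPHABET = 20                  # amino-acid alphabet size
LENGTH = 150                   # a modest protein length
FUNCTIONAL_FRACTION = 1e-40    # hypothetical value, for illustration only

total_sequences = ALPHABET ** LENGTH
functional_sequences = total_sequences * FUNCTIONAL_FRACTION

print("log10(total sequences)      =", round(log10(total_sequences), 1))
print("log10(functional sequences) =", round(log10(functional_sequences), 1))
print("per-draw chance of a functional sequence =", FUNCTIONAL_FRACTION)

Changing the placeholder fraction changes the absolute numbers, but the point of the premise is that, on any realistic estimate, the functional slice is a vanishing fraction of the whole space.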

We have since learned that life forms are not simple, and creating them (or repairing them) is no simple task. Therefore, where unguided materialistic theories might have received great confirmation, they have instead run afoul of reality.

This basic hypothesis, confirmed by empirical science in the 20th century, underlies much ID thought. I cannot think of a single ID theorist who would disagree with it. If creating life forms were a simple enough task, it would be reasonable to expect unintelligent mechanisms to produce them given cosmic timescales. Conversely, the more difficult the task, the less plausible unguided, mechanistic speculations on the origin of life become. The difficulty of the task is precisely what places it beyond the reach of unintelligent mechanisms and leaves intelligent mechanism as the only remaining possibility, since “intelligent” and “unintelligent” are mutually exclusive and exhaustive, much as “red” and “not red” encompass all possibilities of color.

The second hypothesis, much like the first, is so obvious that it often fails to be mentioned explicitly. It is,

2. Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales.

Who would argue with such a statement? Even ardent materialists, who view all intelligent agents as mechanical devices, are forced to admit that some configurations of matter can do things that other configurations cannot, such as write novels and create spacecraft, at least when we limit the time and probabilistic resources involved. If this were not the case, then why pay humans to perform certain tasks? Why not simply let nature run its course and perform the same work? Intelligent agents are at the very least catalysts, allowing some tasks to be performed much more rapidly than is possible in their absence.

If we hold the second of these premises, namely that intelligent agents can accomplish some difficult tasks that unintelligent mechanisms cannot, and also hold that creating complex machines is a difficult task, then it follows that “creating life” may just be one of the tasks demonstrating a difference in causal powers between intelligent and unintelligent mechanisms. Notice the word “may”; the ID community would still need to demonstrate that intelligent agents, such as humans or Turing-test capable AI, can in fact construct life forms (or machines of comparable complexity and function), and would also need to demonstrate that unintelligent mechanisms are incapable of performing such tasks, even on cosmic timescales. This is where both positive ID work and necessary anti-evolution work arise, as they are required components of such an investigation.

ID theorists place the task of creating integrated, functionally complex machines in the group of tasks that are within the reach of intelligent agents yet outside the reach of unintelligent mechanisms, on the timescale of earth history. We can call this the third basic premise of ID, as currently modeled. It can be restated as,

3. Unintelligent causes are incapable of creating machines with certain levels of integration and function given 4.5 billion years, but intelligent causes are capable.

While we may dispute the truth of this statement, we cannot argue that it isn’t at least hinted at by the other two basic ID hypotheses. Given the first two premises, it naturally presents itself as a conjecture to be investigated.

 

PART II: ID PREDICTIONS

In light of these three premises, Jonathan M’s list of ID predictions appears much less ad hoc. Why should we expect functional protein folds to be rare in configuration space? Because if they were abundant, the task of creating novel proteins would be much easier, and by extension, so would be the higher-level task of creating functional molecular machines. Given the first premise, one expects at least some steps of the design process to present difficulties. True, this may not be the actual step that presents the difficulties, but given what we know about the curse of dimensionality, it is the natural place to begin investigation. And in light of what we now know concerning protein configuration space and the rarity of functional folds, such direction is not misleading.
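The “curse of dimensionality” intuition invoked here can be illustrated with a small sketch. Suppose each of d independent coordinates of a configuration must land within a tolerance band covering a fraction p of its range for the whole to function; the qualifying fraction of the space is then p to the power d, which collapses exponentially as d grows. The tolerance value below is illustrative, not a biological measurement:

p = 0.1   # hypothetical: 10% of each coordinate's range counts as acceptable

for d in (1, 10, 50, 100, 300):
    print("dimensions =", d, "  qualifying fraction =", p ** d)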

In a similar vein, the prediction of “delicate optimisation and fine-tuning with respect to many features associated with biological systems” follows from the first basic premise of the model. Assume the contrary, for the sake of contradiction. If most configurations of matter and parameter settings result in integrated, highly functional self-replicating machines, then the problem of finding such configurations would cease to be difficult, by definition. Therefore, there must be a degree of specificity involved in life, such that the vast majority of configurations are incapable of functioning as life forms. ID’s basic premises require that this specificity be at least sufficient to place the task of finding such living configurations outside the reach of unintelligent mechanisms, given the probabilistic resources provided by a planetary or cosmic timescale. So a baseline level of fine-tuning is expected, and given the resources provided by a cosmic timescale, this baseline is predicted to be high. Once more, it could have been the case that most combinations of parameters and states would produce living organisms, making the problem of creating life easier, but this state of affairs would have helped falsify a widely-held basic premise of ID.
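As a rough illustration of what “outside the reach of the available probabilistic resources” amounts to, here is the kind of bounding arithmetic discussed later in the comment thread (roughly 10^80 atoms, generous event rates and timescales). The specific figures are illustrative upper bounds, not measurements:

from math import log10, log2

atoms = 1e80              # rough atom count for the observable universe
events_per_second = 1e45  # generous per-atom event rate (Planck-time order)
seconds = 1e25            # generous upper bound on available time

max_trials = atoms * events_per_second * seconds
print("log10(maximum trials) ~", round(log10(max_trials)))         # ~150
print("equivalent threshold  ~", round(log2(max_trials)), "bits")  # ~500

On this accounting, a functional target whose specification exceeds roughly 500 bits (a slice of configuration space smaller than about 1 in 10^150) is not expected to be found by blind sampling within those resources; this is the baseline the paragraph above refers to.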

Evidence for these ID predictions would help confirm the basic ID hypotheses, and evidence to the contrary would weaken or falsify them. Therefore, it would seem fair to categorize them as predictions based on an ID framework. Without knowledge of the basic premises of ID as currently modeled, it is easy to see how confusion can arise when discussing what conclusions follow or do not follow from ID. If we don’t know the premises, how can we know what follows from them? It is my hope that this explicit spelling-out of foundational ID principles will aid in the discussion.

 

PART III: POSITIVE PREDICTIONS AND COMPOSITE MODELS

Lastly, I would be remiss not to mention the argument from analogy to human design, and how it relates to what is presented here. According to the argument, even if both intelligent and unintelligent mechanisms were capable of producing the effect in question (functional machines), intelligent agency might still serve as the best explanation, based on the similarities between engineered systems and living systems. If we’re fair, we are forced to acknowledge this line of argumentation is possible. Positive ID work could result from such an approach, since knowledge of how intelligent agents design things may shed light on how nature functions, or what to expect in terms of biological system construction. (See, for example, Casey Luskin’s discussion in “A Positive, Testable Case for Intelligent Design” where he describes how knowledge of human designed systems suggests predictions for biological systems.)

The strongest case for ID includes both types of evidence: positive evidence in favor of design, such as similarities to engineered systems and the use of design patterns within biological information systems, and evidence of the causal insufficiency of unintelligent mechanisms. If the third basic premise were falsified and intelligent agency were only one of multiple viable explanations, much more positive evidence would be required to make ID the most likely explanation, since we know that nature was operating when life formed, supplying the opportunity. Although ID could survive the falsification of the third basic premise, the case for ID would be severely weakened as a result, and the underlying model would be forced to change significantly, thereby modifying what is predicted by the model.

Some predictions would remain valid, such as those built on positive similarity to human design processes, but many predictions would not, including several presented by Jonathan M. ID as the sole viable hypothesis for the origin of integrated, functional machines differs from ID as one of multiple viable hypotheses, but positive evidence for design is certainly compatible with both models. The ID community currently holds to a model that both includes positive evidence for design and affirms all three basic ID premises outlined above. Therefore, both sets of predictions follow from the model: predictions based on the positive knowledge of human design activity and predictions implied by the causal insufficiency of unintelligent mechanisms.

 


 

* Note: The original post used the word “building” instead of “creating”, which caused some confusion among readers, since it was mistakenly taken to mean “the step-by-step assembly of machines” rather than the intended “design and creation of machines.” I use “create” in the sense of engineering, meaning to design and construct, to select from a realm of possible configurations. Thus, we say that engineers create new machine designs and software engineers create new software systems. The assembly process itself may be easy, but this is different from the task of discovering or creating the assembly instructions, gathering the required components, and setting any sensitive parameter values.

Comments
Mung, You wrote
Software design used to be a top-down practice, but more and more it’s being done bottom up.
Even though a large portion of software development is concerned with stringing together pre-existing software libraries, not all is. For example, a lot of research in CS and AI requires novel algorithms that must be designed, since they do not exist. Sometimes the problem is amenable to a divide-and-conquer or dynamic programming approach, but often it is not.
Modern practices could even be seen as building small components and connecting them until the larger system emerges.
Which is still an act of creation, at least on the meta-level. It is like creating a structure from lego blocks versus creating one from iron ore. In one case you additionally build the sub-components, but both still require a plan of integration, unless the structure is sufficiently simple.
There is no step-by-step recipe specified to be followed to get from the start to the finish.
This is true in some cases, but the structures that arise from such an emergent process are more like randomized networks of parts with homogeneous interface connections (think the WWW), than like hierarchically arranged machines. I've yet to see an example of bottom-up, emergent creation that results in anything like an integrated, highly functional machine, where each part is distinct and plays a specific role in operation, and where distinct functional modules are built into higher level functional systems, as biological life forms are. Machines of that type require coordination to design, since one part affects others in highly-constrained ways. I'm open to seeing a counter-example. As soon as sub-components are distinct and perform different roles, you have an additional combinatorial problem, since each way of arranging / connecting them is different, and results in different function or non-function. This requires information or large probabilistic resources, both of which are quantifiable. In the case of homogeneous sub-components, the ordering of sub-components is irrelevant. This makes design problems of that type simpler to solve, but it is a different problem than the one we're addressing here. Life forms are not simply large collections of homogeneous units, since proteins and cell types differ, and they are hierarchically arranged into tissue types that differ, organs that differ and systems that differ. Furthermore, the way you connect the various organs and systems together definitely affects their function.no-man
April 30, 2011, 11:18 AM PDT
no-man, I am an ID supporter. I'd love to be able to point people to something that would have logic that they would find compelling. Trust me, it's easier for me to nit-pick than it is for you to come up with something like you have and I'd like you to know that I am aware of this. But nit-picking is precisely what the critics are going to do, lol. Fact of life. Better we do it to ourselves first, imo.
When I used the word build, I actually meant something closer to create or design, as it is used by computer scientists who “build” software systems by creating them from scratch. I can see how that can be confusing.
Software design used to be a top-down practice, but more and more it's being done bottom up. Either way, the goal is to decompose the problem into smaller and smaller chunks to the point where the problem you're trying to solve becomes simple. Modern practices could even be seen as building small components and connecting them until the larger system emerges. There is no step-by-step recipe specified to be followed to get from the start to the finish.
Mung
April 30, 2011, 10:51 AM PDT
Mung, Thank you for your comment, as it makes clear where our miscommunication is coming from. In my post I refer to "building" life forms, which you took to mean the construction process of putting together the components of a life form, given a recipe and materials, much like "making a pizza." When I used the word build, I actually meant something closer to create or design, as it is used by computer scientists who "build" software systems by creating them from scratch. I can see how that can be confusing. All my points defend the premise that creating life forms is a difficult mathematical problem, and all yours defend the premise that given a recipe to follow and materials, perhaps the mechanical construction process is not difficult. These two points are not contradictory. Is this a fair assessment? Either way, I will edit my opening post to use the word create in place of build, as I think it makes my intended meaning clearer.no-man
April 30, 2011, 08:24 AM PDT
I'm reading a book on complexity and I came across the following quote:
Making a pizza is complicated, but not complex. The same holds for filling out your tax return, or mending a bicycle puncture. Just follow the instructions step by step, and you will eventually be able to go from start to finish without too much trouble.
Building a machine may be complicated, but does that make it complex? Is life just following step-by-step instructions? As soon as you start comparing life to machines this is the question you come up against. You're trying to come up with foundational principles for ID. Complicated is not complex. Life is not mechanics. Simple is as simple does. For some people math is simple, for me, it's complicated. I'm still trying to figure out what your questions have to do with your OP, but I'm thinking on it.
Mung
April 30, 2011, 08:04 AM PDT
Mung wrote
You might as well argue that having digital pictures displayed on a screen is “easy” because you simply have to press a single button on the computer to make it happen.
That is exactly what I would argue. Just as I would argue it’s easy for me to get into my truck and drive to work, something I do on almost a daily basis.
Just because somebody else already solved a difficult problem (i.e. how to get a TFT LCD matrix to display patterns corresponding to digitally encoded color and luminance pixel information) does not make the underlying problem less difficult mathematically; it simply makes it easier for you, since you don't have to solve it, as you're given the "answer" to one particular instance. The underlying problem difficulty, from a mathematical perspective, is unchanged. Having one solution to the Traveling Salesman Problem doesn't make TSP a less difficult mathematical problem; it doesn't suddenly lose its NP-completeness. I'm talking about problem difficulty in general here.
Just because someone or something solved the problem of building integrated, highly functional biological machinery (using a solution that itself requires integrated, highly functional machinery in the form of a cell) does not make the underlying mathematical problem of building self-replicating, autonomous robots any easier. It just means that someone has done the hard work, not that there was no work to be done. Do you now see the difference between what you're saying and what I'm talking about?
If you still disagree, I'd like you to answer a few questions. Let's assume, for the sake of contradiction, that the problem of building life forms is easy and life forms can be built from whatever materials are available, in many situations, by the blind interaction of natural forces on matter. Then we must ask:
1. How often do we expect new life forms to emerge from non-life? (This doesn't include reproduction from the solutions (life forms) we already have; I'm talking about abiogenesis.)
2. Why don't we see new life spontaneously forming around us, such as in sterile environments?
3. Why is carbon-based life the only type of life on earth? If building life is easy, surely other combinations of compounds could work. We can imagine robotic life that uses electric circuitry and silicon as a basis. Is natural selection somehow not operating on ceramic crystals and other compounds that replicate with occasional errors?
4. Why is our solar system so barren of life? If robust self-replicating machines are so easy to stumble upon, why has nature so miserably failed on all other planets in our solar system? Why aren't they teeming with life and the universe teeming with these "easy-to-build" machines?
If you were correct and building life were a trivial task, we would expect life to be found anywhere unintelligent forces were acting on matter over long periods of time. Instead we find barren rock. All the facts would seem to argue against the position you presented.
no-man
April 30, 2011, 07:06 AM PDT
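A small illustration of the point made in the comment above, that problem difficulty is a property of the problem rather than of any one solved instance: the number of candidate tours a brute-force TSP solver must consider grows factorially with the number of cities, whether or not someone hands you one good tour. This is only an illustrative count, not a claim about the best known algorithms:

from math import factorial

for n in (5, 10, 15, 20, 25):
    tours = factorial(n - 1) // 2   # distinct round trips on n cities
    print(n, "cities ->", tours, "candidate tours")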
You might as well argue that having digital pictures displayed on a screen is “easy” because you simply have to press a single button on the computer to make it happen.
That is exactly what I would argue. Just as I would argue it's easy for me to get into my truck and drive to work, something I do on almost a daily basis.
Second, the mathematics disagree with you. If you take a single functional protein of average length, you can see the combinatorial possibilities are enormous. Not all combinations produce function. In fact, research suggests functional sequences are quite rare.
Think of the combinatorial possibilities of life ever existing on this particular planet. Did God have a hard time finding the earth when He was searching for a place suitable for life?
I, however, have no desire to enter theological discussions on this thread.
Theological has nothing to do with it.
Building integrated, highly functional machines is a difficult task.
For whom? For people who find building integrated highly functional machines is a difficult task? It must be an immensely difficult task for a blind man to find his way to the grocery store, wouldn't you think?
Mung
April 29, 2011, 04:49 PM PDT
Heinrich wrote
I was arguing this, or at least that they were simple enough that they’re not that difficult to build. They can’t be that difficult to build – lots of them are built every day, week, month and year without any intelligent agent involved.
First, you're committing an error in logic by assuming something is "easy" simply because it happens often. The question then becomes, what enables biology to repeatedly accomplish such a difficult task autonomously? The answer, of course, is the sophisticated information storage and processing machinery contained in every cell, guiding reproduction and development. The task is accomplished due to the vast quantities of digital and spatial information stored in a cell that directs the developmental outcome. You might as well argue that having digital pictures displayed on a screen is "easy" because you simply have to press a single button on the computer to make it happen.
Second, the mathematics disagree with you. If you take a single functional protein of average length, you can see the combinatorial possibilities are enormous. Not all combinations produce function. In fact, research suggests functional sequences are quite rare. As we move up along the organismal hierarchy, and look at cells, tissues, organs, systems, and body plans, we find that each of these is an organized configuration of the objects from the level below. So the organization and detail of a life form is staggering. Building life-sized self-replicating, autonomous robots is a hard task, yet biology has solved this problem with a level of sophistication that dwarfs our own.
If building life forms is an easy task, why does life only come from other life in our experience? Why don't silicon-based electronic life forms spontaneously emerge around us? Why is carbon-based life the only type we see? Why aren't there new origins of life (coming from non-life) every day? No, we only see life being produced by sophisticated machinery — other life forms. There is a reason for this.
You also wrote
I also don’t think they’re that difficult to evolve, given time, and I haven’t seen any convincing arguments that show that selection isn’t up to the job.
Again, it is good that you disagree on this point, which makes it an objective statement that separates ID from an unguided evolutionary perspective. Since both sides disagree on this point, it is useful for generating predictions that flow from one model and not from the other.
no-man
April 29, 2011, 07:44 AM PDT
Mung asks,
1. Building integrated, highly functional machines is a difficult task. How do you get from this to “life forms are highly complicated machines”?
I don't. Here, "complicated" serves as loose shorthand for "integrated, highly functional," since integrated and highly functional machines (of sufficient functional capability and integration) happen to be complicated. I'm happy to not discuss "complicated machinery", and instead focus on the more salient points of function and integration; however, Heinrich brought up "simple" vs "complex", so I was trying to address the question using his/her own terminology. You also ask:
And does this mean that if God was the creator, that creating life was difficult for Him?
Why bring up discussion of god at this point? Different people have different understandings of god, so your answer would depend on your conception of god. I, however, have no desire to enter theological discussions on this thread.
no-man
April 29, 2011, 07:20 AM PDT
1. Building integrated, highly functional machines is a difficult task.
How do you get from this to "life forms are highly complicated machines"?
Based on this fact, the first premise states that building integrated, functional machines (such as life forms) is a difficult task, due to their non-simplicity.
And does this mean that if God was the creator, creating life was difficult for Him?
Mung
April 29, 2011, 07:06 AM PDT
Are you arguing that life forms are simple (which we know they are not – at least combinatorially speaking),
I was arguing this, or at least that they were simple enough that they're not that difficult to build. They can't be that difficult to build - lots of them are built every day, week, month and year without any intelligent agent involved. I also don't think they're that difficult to evolve, given time, and I haven't seen any convincing arguments that show that selection isn't up to the job.
Heinrich
April 29, 2011, 12:37 AM PDT
Heinrich, I am afraid I don't understand what you're arguing in the first paragraph. Are you arguing that life forms are simple (which we know they are not - at least combinatorially speaking), or are you arguing with the premise itself (premise 3), that unintelligent mechanisms are incapable of producing life forms? If the first, I agree that life forms don't have to be complicated by logical necessity (as far as I can tell)...they just happen to be. We know this empirically, so it isn't really an assumption on my part. Based on this fact, the first premise states that building integrated, functional machines (such as life forms) is a difficult task, due to their non-simplicity. The ID community is open to being shown otherwise. If you are arguing the second point, then that is fine. This is a point that clearly separates the ID model from other models of origins, so I would expect it to be a point of contention. Yet because it is a point of contention, it allows ID to make predictions that other origins models do not. Hence my post.
no-man
April 28, 2011, 06:13 PM PDT
If intelligent causes were only capable of producing the types of artifacts that unintelligent causes were, such as simple things like paperweights, then neither type of mechanism would be able to explain the origin of life forms.
Unless life forms are sufficiently simple that both intelligent and unintelligent processes can create them. Is there any a priori reason to expect this? Doesn't this also mean that ID is being forced to make the assumption that some things are complicated?
Heinrich
April 28, 2011, 12:27 PM PDT
Heinrich asks (concerning the second basic premise, "Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales."):
Is this necessary for there to be intelligent design? Is there any reason why designers didn’t just design and make simple stuff? I’m curious to know what makes this assumption so central.
If intelligent causes were only capable of producing the types of artifacts that unintelligent causes were, such as simple things like paperweights, then neither type of mechanism would be able to explain the origin of life forms. What makes things interesting is the ID claim that although unintelligent mechanisms cannot produce integrated, highly functional machinery, intelligent causes can. So just because intelligent agents can produce simpler effects if they desired to, doesn't mean that unintelligent mechanisms can produce the more functional and integrated machinery we find in biology. There is an asymmetry in the capabilities of the two types of mechanisms, at least according to the current model of ID. But asymmetry doesn't necessarily imply an empty intersection of the two sets.no-man
April 27, 2011, 03:23 PM PDT
PS: Please note, too, the work by Dr Torley and that by Dr Giem which set the context for my own log reduction exercise.
kairosfocus
April 27, 2011, 02:26 PM PDT
NM: The basic Chi metric, reduced form, sets up an easy way to look at various thresholds of complexity. I have highlighted the solar system and the observed cosmos. The modification to explicitly include the judgement of specificity per our notorious semiotic, judging observer, is the same sort of dummy variable as is sometimes put into econometric results to account for circumstances of a war or the like as opposed to more normal times. It makes explicit the issue of judging specificity, with complexity being evaluated on passing a threshold. While it is a commonplace that items with FSCI beyond 500 bits of complexity of known cause are uniformly known to be designed [they are after all beyond the solar system threshold], as witness say the body of text on the Internet, we actually have some results on infinite monkey random text generation experiments and tests over the past decade or so. Wiki reports:
One computer program run by Dan Oliver of Scottsdale, Arizona, according to an article in The New Yorker, came up with a result on August 4, 2004: After the group had worked for 42,162,500,000 billion billion monkey-years, one of the "monkeys" typed, “VALENTINE. Cease toIdor:eFLP0FRjWK78aXzVOwm)-‘;8.t" The first 19 letters of this sequence can be found in "The Two Gentlemen of Verona". Other teams have reproduced 18 characters from "Timon of Athens", 17 from "Troilus and Cressida", and 16 from "Richard II".[20] A website entitled The Monkey Shakespeare Simulator, launched on July 1, 2003, contained a Java applet that simulates a large population of monkeys typing randomly, with the stated intention of seeing how long it takes the virtual monkeys to produce a complete Shakespearean play from beginning to end. For example, it produced this partial line from Henry IV, Part 2, reporting that it took "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters: RUMOUR. Open your ears; 9r"5j5&?OWTY Z0d...
From the citations, I suspect the comparative data sets were collections of Google books [or at least plays from these books] or the like. Such results of up to 24 ASCII characters through random text generation are consistent with the decades-old analysis of Borel [IIRC], who highlighted 1 in 10^50 as in effect an observability threshold. 128^24 = 3.74 * 10^50. A space of about 170 bits' worth of configs is searchable within our scope of resources. Further tests are possible, much as you suggest. And across time they should be done as well. However, the underlying analysis on the thresholds as offered by Dembski [cf. UPB and his take on Seth Lloyd's work] and by Abel, as well as others going back decades, will also be relevant. We have both analytical and empirical reason to infer that functionally specific, information-rich things beyond either 500 bits or 1,000 bits will be comfortably beyond the reach of blind chance and mechanical necessity. GEM of TKI
kairosfocus
April 27, 2011, 02:20 PM PDT
2. Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales.
Is this necessary for there to be intelligent design? Is there any reason why designers didn't just design and make simple stuff? I'm curious to know what makes this assumption so central.
Heinrich
April 27, 2011, 12:55 PM PDT
kairosfocus in 5, I like the metrics you have been developing and refining over the course of time, even if the growing number of them can be a little overwhelming to an outsider. I would, however, like to see experimental results showing how well the log-reduced Chi metric, or any other metric, works in actually distinguishing designed from non-designed items. If I have an idea for a good binary classifier, I begin by going over the math and logic underlying why it should work. Once convinced that the features I am considering are capable of separating items of type A from items of type B, I then gather a dataset (or several) consisting of both types of items, and apply my classifier to measure its accuracy and precision. I don't see why this couldn't be done with the metric you propose. You could generate a dataset consisting of blog posts or Wikipedia entries, and generate an equal-sized set of random entries, both by using simple single-letter sampling as well as bigram and trigram approximations of English text (following Shannon's work). For the specification delta (0,1), you may have a volunteer read each entry and judge whether it makes sense (is functional) or not. I think the result would be an interesting test of your ideas.
no-man
April 27, 2011, 07:16 AM PDT
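For what it's worth, a minimal sketch of the comparison corpus proposed in the comment above: random text produced by single-letter sampling and by character bigram/trigram approximations (in the spirit of Shannon's n-gram models). The sample corpus here is a tiny placeholder; a real test would use blog posts or Wikipedia entries:

import random
from collections import defaultdict

corpus = ("the quick brown fox jumps over the lazy dog and the slow red fox "
          "sleeps under the old oak tree while the dog watches the road ") * 20

def ngram_sample(text, order, length=120, seed=0):
    # Emit `length` characters from an order-`order` character model of `text`.
    rng = random.Random(seed)
    if order == 0:
        # single-letter sampling (letter frequencies of the corpus)
        return "".join(rng.choice(text) for _ in range(length))
    nexts = defaultdict(list)            # context string -> possible next characters
    for i in range(len(text) - order):
        nexts[text[i:i + order]].append(text[i + order])
    context = text[:order]
    out = list(context)
    for _ in range(length - order):
        choices = nexts.get(context) or list(text)   # fall back if context unseen
        ch = rng.choice(choices)
        out.append(ch)
        context = (context + ch)[-order:]
    return "".join(out)

for order in (0, 1, 2):                  # single-letter, bigram, trigram approximations
    print("order", order, ":", ngram_sample(corpus, order))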
TM: The point of the CSI and IC concepts is to test objectively and empirically, and as a result ground, the claim that biology is designed, as opposed to arguing in a circle from an assumption. That's why MG et al worked so hard to try to kick sand up in our eyes on their validity. And the strength of the inferences is why, in the end, she and her circle failed. Turns out, CSI is sound and Schneider's ev -- which she plainly is championing -- is not. (The thread documents this in detail; I am not just spreading ill-founded but persuasive talking points.) Notice how MG has now quietly tip-toed away rather than address issues with her arguments, the log-reduced form of the Chi CSI metric, and the questions on ev. Took some doing to get to that point, but that we are now there is highly significant. GEM of TKI
kairosfocus
April 27, 2011, 02:56 AM PDT
Mung, I think you are absolutely right. I think ID will accomplish more by simply viewing biology as designed and working off that premise than by things like CSI and IC. Also, I am no man. (Witch King gets pwned.)
tragic mishap
April 26, 2011, 06:21 PM PDT
Hi Joseph:
How many nucleotides can necessity and chance string together? That is given a test tube, flask or vat of nucleotides, plus some UV, heat, cold, lightning, etc., what can come of that? Has anyone tried to do such a thing?
I'm sure they have. I'd be very surprised if Stephen Meyer doesn't cover this topic in Signature in the Cell. Cf. http://onlinelibrary.wiley.com/doi/10.1002/anie.197204511/abstract and Essential Cell Biology.
Mung
April 26, 2011, 04:45 PM PDT
F/N: Applying a modified Chi-metric: I nominate a modded, log-reduced Chi metric for plausible thresholds of inferring sufficient complexity AND specificity for inferring to design as best explanation on a relevant gamut:
(a) Chi'_500 = Ip*S - 500, bits beyond the solar system threshold
(b) Chi'_1000 = Ip*S - 1,000, bits beyond the observed cosmos threshold
. . . where Ip is a measure of explicitly or implicitly stored information in the entity and S is a dummy variable taking 1/0 according as [functional] specificity is plausibly inferred on relevant data. [This blends in the trick used in the simplistic, brute-force X-metric mentioned in the just linked.] 500 and 1,000 bits are swamping thresholds for solar system and cosmological scales. For the latter, we are looking at the number of Planck-time quantum states of the observed cosmos being 1 in 10^150 of the implied config space of 1,000 bits. For a solar system with ours as a yardstick, 10^102 Q-states would be an upper limit, and 10^150 or so possibilities for 500 bits would swamp it by 48 orders of magnitude. (Remember, the fastest chemical interactions take about 10^30 Planck-time states and organic reactions tend to be much, much slower than that.) So, the reduced Dembski metric can be further modified to incorporate the judgement of specificity, and non-specificity would lock out being able to surpass the threshold of complex specificity. I submit that a code-based function beyond 1,000 bits, where codes are reasonably specific, would classify. Protein functional fold-ability constraints would classify on the sort of evidence often seen. Functionality based on Wicken wiring diagram organised parts that would be vulnerable to perturbation would also qualify, once the description list of nodes, arcs and interfaces exceeds the relevant thresholds. [In short, I am here alluding to how we reduce and represent a circuit or system drawing or process logic flowchart in a set of suitably structured strings.] So, some quantification is perhaps not so far away as might at first be thought. Your thoughts? GEM of TKI
kairosfocus
April 26, 2011, 04:10 PM PDT
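A minimal sketch of how the modded Chi metric above could be computed, directly transcribing Chi'_500 = Ip*S - 500 and Chi'_1000 = Ip*S - 1,000, with Ip in bits and S the 0/1 specificity judgement. The example Ip values are made up purely to show how the thresholds behave:

def chi_prime(ip_bits, specific, threshold):
    # Chi' = Ip*S - threshold; a positive result means "beyond the chosen threshold"
    s = 1 if specific else 0
    return ip_bits * s - threshold

examples = [
    ("short string, not judged specific", 400, False),
    ("short functional code, specific",   400, True),
    ("long functional code, specific",   1200, True),
]

for label, ip, specific in examples:
    print(label, "| Chi'_500 =", chi_prime(ip, specific, 500),
          "| Chi'_1000 =", chi_prime(ip, specific, 1000))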
NM: Good to see another exercise in open notebook science here at UD, joining several recent threads. A good trend. I would suggest a slight modification to your second principle:
2. Intelligent agents have causal powers that unintelligent causes do not, at least on short enough timescales [and scopes].
That is, I am underscoring the infinite monkeys point. An infinity of resources would -- if it were achievable -- swamp all search scope challenges. But, we are not looking at an infinite scope, hence the significance of the various calculations on the needle in the haystack challenge. In turn, this leads to a modification of your third point:
3. Unintelligent causes are incapable of building machines with certain levels of integration and function given 4.5 billion years [a gamut of ~ 10^80 atoms, and ~ 10^25 s or the like plausible universal bound on temporal and material resources], but intelligent causes are capable.
I note that the recent work of Venter and others shows that it is plausible that life as we observe it could be created in a molecular nanotech lab some generations of tech beyond where we currently are. I'd say Venter's start-up bacterium is a basic demonstration of feasibility. GEM of TKI
kairosfocus
April 26, 2011, 03:44 PM PDT
3. Unintelligent causes are incapable of building machines with certain levels of integration and function given 4.5 billion years, but intelligent causes are capable.
To wit, another $64,000 question: All this talk about information, specified and not, has me searching for an answer to the question: How many nucleotides can necessity and chance string together? That is, given a test tube, flask or vat of nucleotides, plus some UV, heat, cold, lightning, etc., what can come of that? Has anyone tried to do such a thing? After Lincoln and Joyce published the paper on sustained replication, Self-Sustained Replication of an RNA Enzyme, there was an article in Scientific American with one of them (Joyce?), who seems to think that 35 is highly unlikely. And for another $64,000: How many amino acids can necessity and chance string together?
Joseph
April 26, 2011, 02:41 PM PDT
OT:
The Springer journal Genetic Programming and Evolvable Machines is celebrating its first 10 years with a special anniversary issue of articles reviewing the state of GP and considering some of its possible futures. For the month of July (which ends in two days!) the entire issue is available for free download.
http://www.springerlink.com/content/h46r77k291rn/
Mung
April 26, 2011, 02:36 PM PDT
I have long thought that Intelligent Design Theorists ought to spend far more time than they currently do (or at any rate seem to) in looking at design as it currently exists and is practiced. Could you imagine if we found parallels in nature? For example, in software engineering you have something known as design patterns. What if design patterns were discovered in nature? The more rational nature appears to be, the more it appears to be the result of a rational mind.
Mung
April 26, 2011, 02:28 PM PDT