Uncommon Descent Serving The Intelligent Design Community

“Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information”


Here’s our newest paper: “Life’s Conservation Law: Why Darwinian Evolution Cannot Create Biological Information,” by William A. Dembski and Robert J. Marks II, forthcoming chapter in Bruce L. Gordon and William A. Dembski, eds., The Nature of Nature: Examining the Role of Naturalism in Science (Wilmington, Del.: ISI Books, 2009).

Click here for pdf of paper.

1 The Creation of Information
2 Biology’s Information Problem
3 The Darwinian Solution
4 Computational vs. Biological Evolution
5 Active Information
6 Three Conservation of Information Theorems
7 The Law of Conservation of Information
8 Applying LCI to Biology
9 Conclusion: “A Plan for Experimental Verification”

ABSTRACT: Laws of nature are universal in scope, hold with unfailing regularity, and receive support from a wide array of facts and observations. The Law of Conservation of Information (LCI) is such a law. LCI characterizes the information costs that searches incur in outperforming blind search. Searches that operate by Darwinian selection, for instance, often significantly outperform blind search. But when they do, it is because they exploit information supplied by a fitness function—information that is unavailable to blind search. Searches that have a greater probability of success than blind search do not just magically materialize. They form by some process. According to LCI, any such search-forming process must build into the search at least as much information as the search displays in raising the probability of success. More formally, LCI states that raising the probability of success of a search by a factor of q/p (> 1) incurs an information cost of at least log(q/p). LCI shows that information is a commodity that, like money, obeys strict accounting principles. This paper proves three conservation of information theorems: a function-theoretic, a measure-theoretic, and a fitness-theoretic version. These are representative of conservation of information theorems in general. Such theorems provide the theoretical underpinnings for the Law of Conservation of Information. Though not denying Darwinian evolution or even limiting its role in the history of life, the Law of Conservation of Information shows that Darwinian evolution is inherently teleological. Moreover, it shows that this teleology can be measured in precise information-theoretic terms.
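The abstract's log(q/p) accounting can be illustrated numerically. A minimal sketch in Python (the function name and the toy probabilities below are my own, not taken from the paper):

```python
import math

def active_information(p, q):
    """Active information (in bits) of a search that succeeds with
    probability q, relative to a blind (null) search that succeeds
    with probability p: log2(q/p), per the abstract's formula."""
    return math.log2(q / p)

# Toy numbers for illustration only: a blind search over 2**20 equally
# likely states hits a unique target with p = 2**-20 per query; suppose
# an assisted search succeeds half the time, q = 0.5.
p = 2.0 ** -20
q = 0.5
print(active_information(p, q))  # 19.0
```

With these numbers the assisted search is credited with 19 bits of active information, which is the minimum information cost LCI says the search-forming process must have paid upstream.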

Comments
kairosfocus, Your comments nearly always contain very useful information. However, they are also nearly always extremely long and, to use an old expression, "contain everything but the kitchen sink," and because of that they become less useful: many will not read through them, or will get lost dealing with peripheral issues. I would take part of your name here, namely the focus part, and make your comments shorter and to the point. Over time all your points will get made several times, but it is difficult to pick out the most relevant one at the moment. For example, somewhere deep in your post you discuss active information, and that seems to be a matter of misunderstanding here. The post would be much more effective if limited to this one concept. The other items could come out piece by piece if necessary when something gets challenged. You are a valuable resource for us here, but the tendency to quote the entire encyclopedia each time makes you less effective. I personally would be interested in a thorough discussion of active information and LCI and just what these mean in lay terms. Otherwise the thread will go off on its merry way wherever one tries to steer it. Your discussion of active information immediately did away with any confusion of it with meaningful information, which was a potential hijack of the thread.

jerry
May 4, 2009 at 06:46 AM PDT
Mr Joseph, There aren’t any vision systems to search for. Indeed, and that is exactly what has allowed many different vision systems to be discovered. There is no one, perfect vision system planned out for all creatures, nor even one system per niche. The distribution of vision systems makes sense given common descent and nested hierarchies, however.

Nakashima
May 4, 2009 at 06:46 AM PDT
Mark Frank -- A small boulder is dropped at a random location on the slope . . . The active information content in that event would be an interesting thing to ponder. The probabilities of it reaching its resting place shouldn't be much different between chance and design. If you knew the mass of the boulder, the resistance of the slope, the force with which it was set in motion, etc., you should be able to calculate exactly where it stops without regard for chance or design. So the probability as to where that boulder ends up is 1, just as if it were designed, so I guess that means there would be little active information in the event.

tribune7
May 4, 2009 at 06:33 AM PDT
Mark Frank and Joseph: I think this is where we fundamentally differ -- on whether evolution has targets. You say no. Mark thinks that a river finding its lowest place or a rock tumbling down a hill and finding its lowest place is more representative of evolution. Bob and I, on the other hand, see evolutionary processes traversing configuration spaces and locating things like bacterial flagella. These satisfy independent functional specifications. These specifications are not least action principles like those in physics and of the sort that you are putting forward. They are specifications of mechanical systems that require engineering principles to characterize and understand. So how is it that evolutionary processes managed to locate them? Our answer in a nutshell: active information. Our paper develops that answer. Addendum: I need to add this to the previous remark, namely, when a rock finds a particular valley because it rolls down some hills, we might say the hills provide the information for it to do so. Fair enough. The hills, in our terminology, supply an alternative search. But an alternative search for what and in comparison to what probabilistic baseline for blind search? Presumably, the blind search that forms the contrast class to the alternative search is a random search over a flat landscape, any point of which is equiprobable for the rock to land. But what was it about that point that made it salient? In the case of life, independent functional requirements make the points of biological configuration space salient regardless whether the search is blind or directed. In the case of the rock, where it lands by null (blind) or alternative search is a matter of indifference -- unless one adds an independent functional requirement. 
If, for instance, the rock landed precisely where treasure was buried, then the information from the hills landscape would be relevant to design (most likely, in this case, the treasure burier would have chosen the place to bury treasure on the basis of a least action principle). So, even though the concept of specification is not explicit in this paper, it is there implicitly.

William Dembski
May 4, 2009 at 05:41 AM PDT
beelzebub, Can a blind search find a target that doesn't exist, even given a small space to search?

Joseph
May 4, 2009 at 05:12 AM PDT
Mark Frank, Good point -- in biological evolution per the MET there isn't any target beyond "survival". There aren't any bacterial flagella to search for. There aren't any vision systems to search for. There is no way to know if flagella or vision systems are even obtainable given a starting population or populations that never had either. So there isn't any search -- never mind a search for structures that don't exist. Don't you think this poses a bigger question mark for your position?

Joseph
May 4, 2009 at 04:57 AM PDT
PPS: Re MF: the key idea is that many, many configs of organic molecules are possible, but very few of these will perform as a viable biofunctioning cell [multicellular organisms being built from just such cells]. To do that, they have to be very specifically organised, and the parts have to be just as specifically composed [esp. proteins]. To be self-replicating, there has to be a blueprint that stores information on how to build the machines step by step, including how to replicate the blueprint and preserve it from rapid deterioration. And, such has to be both encapsulated and joined to physical implementation machines -- in effect we have highlighted that a living cell has to have in it a computer guiding an automaton. All of this, so constructed as to work and compete in a given ecosystem or pre-biotic environment.

kairosfocus
May 4, 2009 at 03:44 AM PDT
Pardon a footnote: On accounting vs wealth creation -- hopefully relevant. In an accounting system, sums of money move around between accounts in such a way that ASSETS = LIABILITIES (where the latter incorporates owner equity). That is always true, whether we are dealing with a wealth making or a wealth destroying enterprise; even a Ponzi scheme. (The difference between the two is a matter of creative function in a given environment -- as a rule, a major issue of highly intelligent design. [Translating: it is possible to make a lot of money by blind luck, but that is highly improbable.] As a rule, wealth creating enterprises inject very intelligent organisation, which gives a context in which there is a build up of new customer accounts such that on a sustained basis, overall sales less costs of sales and expenses gives rise to a healthy profit, which when accumulated is money-denominated wealth. So, the accounting equation is not violated, but associated with the algebra [and T- accounts and balance sheets and income statements etc are all in effect applications of algebra, with various conventions and generally accepted principles of praxis] is a real world process that is the root of growing wealth, a process that is as a rule highly designed. (On Reaganomics, Thatchernomics etc, I will simply say that from the time when the major Western nations decided to take the brunt of the recession at the turn of the 1980's to break the stagflation spiral of the 1970's [complete with the infamous Phillips curve gone mad as workers built in more and more inflation expectations into wage demands], the world has moved to a much lower inflation, sustained economic growth regime. 
Just as, until those two worthies came along, it was thought that the USSR was a more or less permanent destabilising factor in the world, one armed with a few dozens of thousands of nukes, north of 40 - 50,000 tanks (many aimed at the Fulda Gap) and global ambitions backed up by decades of geostrategic power plays, which at the time were plainly winning the global balance of power. In my native land, that global contest came down to an unofficial civil war. And, once the oil crises of the 1970's hit, economic trend lines became a lot less predictable; indeed trend line based "forecasting" lost its lustre, not only in economics but in practical management. [So, I think a fairer, more balanced reading of the 1980's than has been suggested above is indicated.]) Similarly [but not equivalently], the Dembski-Marks paper is pointing out that while in principle functional information can materialise out of lucky noise, the odds are very much against it. And, taking the now notorious 43-generation search of the 1986 Weasel program as a case in point [whether partitioned search form, which is a valid interpretation, or implicitly latched form makes no practical difference . . . ] we see that odds of 1 in 10^40 or so of hitting the Weasel sentence on one guess fall to near certainty of hitting it in say 100 generations on a suitably latched ratcheting, cumulative search. That is, someone has hit on a way to turn what is credibly practically infeasible into something that is a practical proposition. How? ANS: By injecting a well-tuned search. But, surprise, that search has in it a lot of information on the target and how to get there. So, we have now opened a second information account: the search account. And, it turns out that while we can in principle get to a good search by chance, the odds, in general, are even longer than that of hitting the original target in one guess. So, we now have a search for a search. Which of course can go off on an infinite regress . . .
or else truncates somewhere. Where? ANS 2: In general, by observation, we see that the successful search for a well-tuned search (at whatever level of observable regress ultimately applies) that involves functionally specific and complex information [recall 500 - 1,000 bits is a reasonable universal threshold for complexity . . . the cosmos as a search engine would most likely be stumped to find islands of function if they require at least that much information . . . ] is carried out by intelligent designers. (So, the new information has come from a fresh account, not out of the magic of lucky noise. There is no free lunch here.) But, what are (a) information, (b) functionally specific complex information [FSCI], (c) function, & (d) intelligence?
a --> Information, per the UD glossary [courtesy materialism-leaning Wikipedia cited as admission against presumed interest]: “ . . that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message . . . . In terms of data, it can be defined as a collection of facts [i.e. as represented or sensed in some format] from which conclusions may be drawn [and on which decisions and actions may be taken].” (I think we need to insist that people reckon with the WACs and glossary, instead of recycling long since cogently answered objections and infinite regresses of demands for definitions etc. Also, observe, the just cited definition implicitly terminates on iconic exemplars, and builds in the implication that if something looks sufficiently like that, it is information. [Which brings us to the value and legitimacy of reasoning by key examples and sufficiently similar cases, i.e. of analogy. To blanket reject analogous reasoning is to fall into self-referentially inconsistent, self-refuting, selectively hyperskeptical absurdity. For, we form many key concepts -- learn -- by analogous reasoning.]) b --> FSCI, per same glossary: "complex functional entities that are based on specific target-zone configurations and operations of multiple parts with large configuration spaces equivalent to at least 500 – 1,000 bits; i.e. well beyond the Dembski-type universal probability bound." Onward, TBO in the 1984 TMLO -- the first technical level ID book -- summarise the conclusions of OOL researchers by the early 1980's: “. . . “order” is a statistical concept referring to regularity such as might characterize a series of digits in a number, or the ions of an inorganic crystal. On the other hand, “organization” refers to physical systems and the specific set of spatio-temporal and functional relationships among their parts.
Yockey and Wickens note that informational macromolecules have a low degree of order but a high degree of specified complexity.” [TMLO (FTE, 1984), Ch 8, p. 130.] c --> Function: Here, as TBO summarise, we discuss systems that transform inputs to yield outputs and/or outcomes. In the relevant context, we therefore have parts that are integrated in accord with a pattern across space and time ["spatio-temporal . . . relationships"] -- an architecture, which must fulfill a certain criterion: it must "work" in some context ["functional relationships"]. So, it must be organised in such a way as to foster the transformation of inputs into outputs, yielding advantageous outcomes. Many examples are observed in the world of technology, and also in biology. In cases of multipart irreducible complexity (for a core, the removal of any one part destroys function) and of complex specification (the arrangements of parts store large quantities of information) of known origin, such entities are designed, and indeed, it is hard to see how such function can reasonably -- per search space challenges -- come about apart from intelligent input. [Notice the termination of the chain of definition on a study of empirical examples and an inductive generalisation therefrom.] d --> intelligence per the same glossary, and courtesy Wiki again: “capacities to reason, to plan, to solve problems, to think abstractly, to comprehend ideas, to use language, and to learn.” (Note, this is in the end defined relative to observed cases and generalisations therefrom. If a new case is sufficiently close on family resemblance criteria, it will be accepted as a new case of intelligence.)
So, we see that as a practical matter, and per probability, a first level search space challenge can be solved by using intelligence to cut down the effective search space. To do so requires creation of a well-tuned search algorithm, which is a higher order instance of functionally specific complex information. So, the search space problem has not gone away, just moved up a level. We either face an infinite regress, or an appeal to increasingly improbable lucky noise, or else we can simply revert to what we observe: such chains of higher order searches tend to terminate in the work of an observed intelligence, with high reliability on empirical investigation. [E.g. None of the evolutionary computing algorithms are credibly listed as originating in lucky noise.] And, we may define the concept of active information as the information gap between what is achievable on a random one-step search algorithm, and what is achievable on a well-tuned search. The increment in information that makes the search practicable in the real world is the injected active information, which can be quantified, per accounting principles [noting that we are dismissing lucky noise as empirically incredible]. So, it turns out that such active information is as a general rule traceable to intelligence; lucky noise not being a credible source once we are past a threshold of relevant complexity. [Evolutionary materialists: you can knock down the above by simply providing a good counter-example . . . ] All of which brings us back to the significance of identified signs of intelligence, and that active information joins the list of such signs -- indeed, it brings out the force of CSI/FSCI and IC, as well as linguistic textual information and algorithmically functional information. GEM of TKI PS: Odds of 1 in 10^40 come down to 1 of 2^133 choices.
So, if we have a functionally anchored, uniquely functional state requiring 133 bits to store, it is not a feasible search on the span of the earth's resources. Using the same squaring trick as was used with the UPB, we see that 266 bits sets up a second order search of 10^80 cells, so it is very unlikely that we would find something as "common" as 10^40 functional configs in that space. (That is, since there probably have not been that many proteins formed on earth since its origin, we have a credibly insurmountable search for proteins of length sufficient to absorb that many bits. That looks like about 62 AAs. OOL on earth is in trouble, and the origin of novel body plans -- which require a lot of functionally integrated proteins to work -- is in trouble.)

kairosfocus
May 4, 2009 at 03:33 AM PDT
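As a quick check on the arithmetic in the comment above (the figures are kairosfocus's; the code is only a sketch verifying the unit conversions):

```python
import math

# The comment equates odds of 1 in 10^40 with roughly 2^133 choices:
# log2(10^40) = 40 * log2(10) ~ 132.9, so ~133 bits checks out.
print(round(40 * math.log2(10), 1))   # 132.9

# Doubling to 266 bits ("squaring trick") gives a space of
# 2^266 cells, and log10(2^266) ~ 80.1, matching the 10^80 figure.
print(round(266 * math.log10(2), 1))  # 80.1
```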
I have loads of questions about this paper but here is a basic one. The paper is based on the idea of a target and a function which tries to find that target. But in the case of biology the "target" and the function are the same thing. Put simply: the function is "will survive" and the target is "will survive". I will try to demonstrate this with an analogy. Imagine a different "search space". A steep but uneven slope. A small boulder is dropped at a random location on the slope (all positions are equally likely). But of course the boulder will bounce and tend to fall downwards. You could treat this as the boulder exploring a search space for a target. The (continuous) search space is the entire area of the slope. The target is the bottom of the slope. This area is tiny compared to the area of the slope. The proportion is p. However, there is a rather high probability that the boulder will end up at the bottom of the slope (not quite 1, it might become lodged somewhere) because of the laws of gravity. Call this probability that the boulder will reach the bottom q. Does it make sense to talk of a separate fitness function - gravitational attraction - which is used to search for lower positions? Does the LCI imply that the law of gravity has an information content of log(q/p)? Compare this to the situation where you are looking for a particular buried treasure somewhere in the slope. You are then given the information that with probability q the treasure is at the bottom of the slope. Now it makes sense to differentiate the search function and the target and to talk about the additional information that you have been given. I submit that the case of evolution is much closer to the first than the second case.

Mark Frank
May 4, 2009 at 02:44 AM PDT
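Mark Frank's boulder analogy can be put into rough numbers. A toy illustration (the probabilities below are invented for the example, not derived from any physics):

```python
import math

# Suppose the bottom strip is 1% of the slope's area, so a uniformly
# random resting place hits it with p = 0.01. Gravity makes the boulder
# reach the bottom with, say, q = 0.95 (it might lodge partway down).
# On the LCI reading, the "gravity search" would then be credited with
# log2(q/p) bits of active information relative to the blind baseline.
p, q = 0.01, 0.95
print(round(math.log2(q / p), 2))  # 6.57
```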
A final point for the night: You and Dr. Marks state that "searches, in successfully locating a target, cannot give out more information than they take in." Yet blind searches can successfully locate a target without any information input at all. They can even do this rather quickly when the search space is small. If so, is the LCI really a law?

beelzebub
May 4, 2009 at 12:52 AM PDT
Dr. Dembski, Since you didn't respond to it, are you ceding my point about the incommensurability of active information and Shannon information? Moving on, you argue that the LCI is not tautologous because information input occurs in a higher-order search space than information output, so that p and q are different for the input and output when calculating the active information. I thought I would try to apply this idea to the two scenarios you mention in your paper: an OOL experiment, and evolution itself. In doing so, I ran into trouble right away. Fitness landscapes are what distinguish Darwinian evolution from a blind search. To apply the concept of active information to evolution, then, we must compare the probability of finding a target using blind search to the probability of finding it under a specified fitness regime. One problem with this is that Darwinian evolution does not specify targets in advance. In fact, the target space is determined by the fitness regime. If you change the fitness regime, the target space changes. Second, the fitness regime is determined by the physical environment(s) in which evolution takes place. For a complete accounting of the active information in the environment, we need to account for the active information in the search that "found" the environment. How can this be done without knowing the search space of all possible universes, plus the search algorithm that was used to "find" ours, plus the size of the target space within the space of all possible universes? Third, the active information of a Darwinian "search" is not independent of the target. A particular fitness regime therefore contains very little active information with respect to some targets, and a huge amount with respect to others. Turning to the OOL example, you and Dr. 
Marks write that "Tracking and measuring active information to verify intelligent design is readily achieved experimentally," and you propose the idea of measuring the active information of the chemicals used in OOL experiments and comparing it to the active information of the target molecule(s). In trying to apply the idea of active information here, I ran into more problems: what does a "blind search" for purified chemicals look like? What is its probability of success? Do I also need to calculate the active information of the glass beakers used to carry out the experiment? What, again, about the active information of the universe in which the experiment takes place? What does a blind search through the space of all possible universes look like? I'm skeptical that all of this is "readily achieved experimentally."

beelzebub
May 4, 2009 at 12:43 AM PDT
Previous comment error. S/B ...assigning information content to...

Alan Fox
May 3, 2009 at 11:56 PM PDT
Would Dr Dembski have time to comment on whether his concept of "active information" has any equivalence to "meaningful information" as discussed upthread? Does he agree about the difficulty of assigning information quantitatively to, for example, DNA sequences without a priori knowledge of their potential functionality?

Alan Fox
May 3, 2009 at 11:54 PM PDT
{Correction above} ^ ...something more real and measurable about "information" than there is about "chance."

Frost122585
May 3, 2009 at 11:02 PM PDT
And Bill if I could invoke here a little Kant with his synthetic vs analytic modes of judgment and reasoning and how it can be used to shed light on the relevance of the two mechanisms. First I note that the argument for chance mutation is inherently weak because it synthetically applies arrangement to already existent systems. In other words it applies a synthetic explanation of chance to the variables within a model. Now take the position of information as the primary explanation for novelty- information merely takes what is already there, then critically analyzes it - and puts the data into a comprehensible model. So the invocation of chance as a mechanism is inherently synthetic- the God of their dreams - hence my favorite slang for the new atheists, "chance worshipers." Now albeit the law of conservation of information is also synthetic- and hence the inference to necessary influx of novelty from "elsewhere" is also synthetic- but the fundamental backbone of the theory rests on pure empirical experience. That is the synthetic argument of the conservation of information is rooted in mathematical rationalization- hence it is valid and sound as 2+2=4. Not perfect but for all practical purposes strongly cogent. So the bottom line is that the Darwinian Evo model is an obviously inherently atheistically (or, to be scientific and philosophical, opposed to theological) "anti-teleology" driven "synthetic" explanation for life's origin. So my point is that the DE model is not about the data but the interpretation of it. We are weighing here informational based construction vs random chance based construction. My argument is that information is much closer to an analytic judgment than chance is. I feel like appealing to Locke here- as if to say there is something "more real" and "measurable" about the conception of chance than there is of information.
I conclude that the ID model of informatics is apparently much more scientifically sound than that of the DE model.

Frost122585
May 3, 2009 at 10:50 PM PDT
Bill you wrote, "the Law of Conservation of Information shows that Darwinian evolution is inherently teleological." To nit pick though- this is actually a contradictory statement. What is meant here is that Evolutionary theory must be inherently teleological and hence a neo-Darwinian viewpoint is fundamentally flawed in respect to the law of conservation of information. In other words you are positing and pitting information against chance as the mechanical explanation for the origin of novelty. No?

Frost122585
May 3, 2009 at 10:15 PM PDT
Writing under a supposition does not mean accepting it.
Fair enough to take an opposing point of view, but it didn't become clear. Arguments at UD have often appeared arbitrary and sometimes contradictory during recent months (e.g. the Hitler-Darwin discussion, dispensing with and then reinstating the EF). I doubt that it is currently possible for readers who come here for the first time to understand what ID is about. I am afraid that in its current state UD isn't
Serving the Intelligent Design Community
sparc
May 3, 2009 at 09:23 PM PDT
Beelzebub: The three conservation of information theorems proven in the paper are elementary but they are not trivial, as you suggest -- that should be evident from the fact that the third of these theorems proves and then extends the standard no free lunch theorem. As it is, the active information input occurs in a higher-order search space, while the active information output occurs in the original search space.

William Dembski
May 3, 2009 at 05:55 PM PDT
In the comment above, the "less than or equal sign" got eaten by WordPress despite showing up correctly in the preview window. The tautologous statement should read as follows: log(q/p) <= log(q/p)

beelzebub
May 3, 2009 at 05:15 PM PDT
On another topic, commenter DiPietro at AtBC has pointed out an apparent circularity in your paper. I elaborate on this below. You write:
Active information is to informational accounting what the balance sheet is to financial accounting. Just as the balance sheet keeps track of credits and debits, so active information keeps track of inputs and outputs of information, making sure that they receive their proper due.
You define active information as log(q/p), where p and q are the probabilities of success of the null search and the alternate search, respectively. But if the active information of the "output" is defined as log(q/p) and the active information of the "input" is defined as log(q/p), where p and q refer to the probabilities of success of the same null search and alternate search, respectively, then p is the same for input and output, and so is q. The LCI then reduces to this tautology: log(q/p) <= log(q/p) This seems like a fatal problem. Later in the paper you seem to offer a way out in your discussion of plans for experimental verification of the LCI:
Tracking and measuring active information to verify intelligent design is readily achieved experimentally. Consider, for instance, that whenever origin-of-life researchers use chemicals from a chemical supply house, they take for granted information-intensive processes that isolate and purify chemicals. These processes typically have no analogue in realistic prebiotic conditions. Moreover, the amount of information these processes (implemented by smart chemists) impart to the chemicals can be calculated. This is especially true for polymers, whose sequential arrangement of certain molecular bases parallels the coded information that is the focus of Shannon's theory of communication.
The problem is, you seem to be equivocating on the word "information". The LCI applies to active information, but here you are referring to the Shannon information of polymers. Nowhere in the paper do you show that active information and Shannon information are equivalent or commensurable. If you try to fix this by calculating the active information of the polymers, rather than the Shannon information, then you run into the fatal tautology problem elucidated above. Could you comment?

beelzebub
May 3, 2009 at 05:11 PM PDT
Dr. Dembski, I understand what it is to assume something for the sake of argument, and I do it myself quite often. I'm just surprised that you didn't state that the assumption was contrary to your own position, when a short disclaimer would have made this clear, e.g.:
The authors remain skeptical that Darwinian evolution explains the full diversity of life on earth. However, we show in this paper that if it does, it is necessarily teleological.
beelzebub
May 3, 2009 at 04:07 PM PDT
Beelzebub: This paper was written under the supposition that common descent holds and that natural selection is the principal mechanism behind it. Writing under a supposition does not mean accepting it. My own views of the truth of the matter are clearly spelled out in THE DESIGN OF LIFE (www.thedesignoflife.com). In particular, I think that irreducible complexity at the molecular level (especially in the origin of DNA and protein synthesis) provides compelling evidence for discontinuity in the history of life.

William Dembski
May 3, 2009
May
05
May
3
03
2009
02:28 PM
2
02
28
PM
PDT
Dr. Dembski, As far as I can see, nowhere in the paper do you and Dr. Marks express skepticism regarding the ability of Darwinian evolution to account for the diversity of life. Rather, you seem to grant that Darwin was right that random mutation and natural selection are sufficiently powerful, provided that fitness has a teleological origin. That seems like a huge departure from your former indictment of Darwinian theory as flawed and unsupported by the evidence. Have you in fact shifted your position? How has the ID camp reacted to your paper?

beelzebub
May 3, 2009 at 11:47 AM PDT
Nakashima: The relevant m is the number of steps it takes Dawkins's algorithm to converge with high probability on the target (i.e., METHINKS*IT*IS*LIKE*A*WEASEL). That m is less than 100.

William Dembski
May 3, 2009 at 10:26 AM PDT
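For readers who want to check the convergence claim empirically, here is a minimal sketch of a Weasel-style run. Dawkins never published his exact parameters, so the population size (100) and per-letter mutation rate (0.05) are assumptions; the elitist step (keep the parent when no child does better) makes fitness monotone, and under these assumptions the target is typically reached in on the order of a hundred generations.

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "
POP_SIZE, MUT_RATE = 100, 0.05   # assumed parameters, not Dawkins's own

def fitness(s):
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    """Copy s, replacing each character with a random one at rate MUT_RATE."""
    return "".join(random.choice(ALPHABET) if random.random() < MUT_RATE else c
                   for c in s)

def weasel(seed=0):
    """Run until the target is hit; return the generation count (or None)."""
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    for gen in range(1, 1001):
        children = [mutate(parent) for _ in range(POP_SIZE)]
        best = max(children, key=fitness)
        if fitness(best) >= fitness(parent):   # elitism: never lose ground
            parent = best
        if parent == TARGET:
            return gen
    return None

print("converged in", weasel(), "generations")
```

The fitness function here is exactly the "information supplied by a fitness function" that the paper's accounting targets: without it, the same mutation machinery is blind search over a 27^28 space.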
Allen, You are correct; my apologies. Now to the point: reducibility, as in, can biological information be reduced to matter, energy, chance, and necessity? IOW, all YOU have to do to refute Dembski/Marks, and ALL of ID, is to demonstrate such reducibility. However, given the paper by Lincoln and Joyce on sustained RNA replication, the reducibility argument is in serious trouble. It would also help your position if you actually knew what it is that makes organisms what they are. Where is that information? And can it be altered in such a way as to account for the diversity of living organisms? For example, with eyes, PAX6 can be transferred from mice to fruit flies, but the fruit flies develop fruit-fly eyes. And even though we know a great deal more about eyes and vision systems than Darwin did, the "evidence" for their evolution is still the same. Doesn't that make you wonder, even just a little, that your position isn't up to the task?

Joseph
May 3, 2009 at 9:28 AM PDT
Allen, I think maybe we could come up with a general context for biological meaning that is independent of specific organisms. It would involve protein shape-space and would be theoretical binding, rather than context-dependent in the manner you suggest. Hopefully, once we can accurately model structure and binding for proteins and other biomolecules, we could also predict any function a protein might have from its structure. This would include protein-protein binding. We can already do this to a limited extent. I get these references from Behe's Edge of Evolution; they refer to research on the binding profiles of antibodies generated by the immune system.

Perelson, A. S., and Oster, G. F. 1979. J. Theor. Biol. 81:645-70.
Segel, L. A., and Perelson, A. S. 1989. Immunol. Lett. 22:91-99.
De Boer, R. J., and Perelson, A. S. 1993. Proc. Biol. Sci. 252:171-75.
Smith, D. J., Forrest, S., Hightower, R. R., and Perelson, A. S. 1997. J. Theor. Biol. 189:141-50.

tragic mishap
May 3, 2009 at 9:28 AM PDT
Ha! I just now got to the section of the article titled "Entropy" (p. 26). Dr. Dembski, are you suggesting that a constant influx of information provided by intelligence can counteract the effects of the second law? Or perhaps not directly counteract, but blunt, those effects?

"It seems, then, that information as characterized by the Law of Conservation of Information may be regarded as inverse to entropy: increased information indicates an increased capacity for conducting successful search whereas increased entropy indicates a decreased capacity for doing the work necessary to conduct a search."

An analogy would be a computer that is slowly losing flops, with an intelligent programmer constantly increasing the information content of the search algorithms to compensate.

tragic mishap
May 3, 2009 at 8:59 AM PDT
Dr. Dembski, Hmmm.

m = 10^40 (maximal number of queries)
old p = 10^-40
1 - old p = close to 1, but still less than 1
(1 - old p)^m ≈ e^-1 ≈ 0.37
new p = 1 - (1 - old p)^m ≈ 0.63

Then q/new p is going to be of order 1 also. Very different result than the previous calculation. Unless there is a new q also? Getting on a plane now; will look at the thread again in about 8 hours. Thanks again for your reply.

Nakashima
May 3, 2009 at 8:33 AM PDT
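Nakashima's numbers can be checked directly, with one caveat: in double precision, 1 - 10^-40 rounds to exactly 1.0, so evaluating 1 - (1 - p)^m naively gives 0 rather than the true value 1 - e^-1 ≈ 0.632. A numerically stable sketch uses log1p/expm1:

```python
import math

def success_prob(p, m):
    """P(at least one success in m independent queries) = 1 - (1 - p)^m,
    evaluated stably for tiny p via log1p/expm1."""
    return -math.expm1(m * math.log1p(-p))

p, m = 1e-40, 1e40

naive = 1.0 - (1.0 - p) ** m   # (1 - 1e-40) rounds to 1.0, so this is 0.0
stable = success_prob(p, m)    # ~0.6321, i.e. 1 - 1/e

print(naive, stable)
```

With p·m = 1 the answer lands at 1 - 1/e, which is why the boosted probability in this exchange is of order 1 rather than close to 0 or 1.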
Nakashima: the new p for the Cartesian product becomes 1 - (1 - p)^m (for m = 1 this is just p), the latter p being the old p.

William Dembski
May 3, 2009 at 8:03 AM PDT
Dr. Dembski, Thank you for such a quick response! I'm afraid I have to ask you to unpack it a little for me. Can you spell out the new definitions of p and q? Especially given that the queries resulting from the evolutionary algorithm are contingent on previous queries, I'm not sure how to normalize them into a single query. Perhaps I have misunderstood.

Nakashima
May 3, 2009 at 7:38 AM PDT