Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 11: Borel’s Infinite Monkeys analysis and the significance of the log reduced Chi metric, Chi_500 = I*S – 500


 (Series)

Emile Borel, 1932

Emile Borel (1871 – 1956) was a distinguished French mathematician — the son of a Protestant minister, from France’s Protestant minority — and a founder of measure theory in mathematics. He was also a significant contributor to modern probability theory, so Knobloch observed of his approach that:

>>Borel published more than fifty papers between 1905 and 1950 on the calculus of probability. They were mainly motivated or influenced by Poincaré, Bertrand, Reichenbach, and Keynes. However, he took for the most part an opposed view because of his realistic attitude toward mathematics. He stressed the important and practical value of probability theory. He emphasized the applications to the different sociological, biological, physical, and mathematical sciences. He preferred to elucidate these applications instead of looking for an axiomatization of probability theory. Its essential peculiarities were for him unpredictability, indeterminism, and discontinuity. Nevertheless, he was interested in a clarification of the probability concept. [Emile Borel as a probabilist, in The probabilist revolution Vol 1 (Cambridge Mass., 1987), 215-233. Cited, Mac Tutor History of Mathematics Archive, Borel Biography.]>>

Among other things, he is credited as the worker who introduced a serious mathematical analysis of the so-called Infinite Monkeys theorem (more on this in a moment).

So, it is unsurprising that Abel, in his recent universal plausibility metric paper, observed  that:

Emile Borel’s limit of cosmic probabilistic resources [c. 1913?] was only 10^50 [23] (pg. 28-30). Borel based this probability bound in part on the product of the number of observable stars (10^9) times the number of possible human observations that could be made on those stars (10^20).

This, of course, has since been considerably expanded, thanks to the breakthroughs in astronomy occasioned by the Mt Wilson 100-inch telescope under Hubble in the 1920s. However, it does underscore how centrally important the issue of available resources is in rendering a given — logically and physically strictly possible but utterly improbable — potential chance-based event reasonably observable.

We may therefore now introduce Wikipedia as a hostile witness, testifying against known ideological interest, in its article on the Infinite Monkeys theorem:

In one of the forms in which probabilists now know this theorem, with its “dactylographic” [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel‘s 1913 article “Mécanique Statistique et Irréversibilité” (Statistical mechanics and irreversibility),[3] and in his book “Le Hasard” in 1914. His “monkeys” are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel’s image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[4]

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Let us emphasise that last part, as it is so easy to overlook in the heat of the ongoing debates over origins and the significance of the idea that we can infer to design on noticing certain empirical signs:

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Why is that?

Because of the nature of sampling from a large space of possible configurations. That is, we face a needle-in-the-haystack challenge.

For, there are only so many resources available in a realistic situation, and only so many observations can therefore be actualised in the time available. As a result, one who is confined to a blind, probabilistic, random search process will soon enough run into the issue that:

a: IF there is a narrow and atypical set of possible outcomes T, that

b: may be described by some definite specification Z (one that does not boil down to listing the set T or the like), and

c: which comprises a set of possibilities E1, E2, . . . En, drawn from

d: a much larger set of possible outcomes, W, THEN:

e: IF, further, we do see some Ei from T, THEN also

f: Ei is not plausibly a chance occurrence.

The reason for this is not hard to spot: when a sufficiently small, chance-based, blind sample is taken from a set of possibilities W (a configuration space), the likeliest outcome is that what is typical of the bulk of the possibilities will be chosen, not what is atypical. And this is the foundation-stone of the statistical form of the second law of thermodynamics.
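To put a toy number on that needle-in-the-haystack point, here is a rough sketch of my own (the phrase and the 27-symbol alphabet are illustrative assumptions, not figures from the post): the chance of one blind draw hitting a single, independently specified 28-character sentence.

```python
from math import log10, log2

phrase   = "THIS IS A SPECIFIED SENTENCE"   # 28 characters, illustrative only
alphabet = 27                               # 26 capital letters plus the space

configs = alphabet ** len(phrase)           # size of the configuration space W
print(f"|W| ~ 10^{log10(configs):.0f} possible strings")      # ~ 10^40
print(f"~ {log2(configs):.0f} bits to specify one of them")   # ~ 133 bits
print(f"chance of one blind draw hitting it: 1 in 10^{log10(configs):.0f}")
```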

Hence, Borel’s remark as summarised by Wikipedia:

Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

In recent months, here at UD, we have described this in terms of searching for a needle in a vast haystack [corrective u/d follows]:

let us work back from how it takes ~ 10^30 Planck time states for the fastest chemical reactions, and use this as a yardstick, i.e. in 10^17 s, our solar system’s 10^57 atoms would undergo ~ 10^87 “chemical time” states, about as fast as anything involving atoms could happen. That is 1 in 10^63 of 10^150. So, let’s do an illustrative haystack calculation:

Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up about 10^57 cubic metres. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, so a light year is about 1/1,000 of that distance. If we were to superpose such a notional haystack, 1,000 light years on a side, on the zone of space centred on the sun, and leave in all stars, planets, comets, rocks, etc., and take a random sample equal in size to one straw, then by absolutely overwhelming odds we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
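The orders of magnitude in that clip can be checked with a few lines of arithmetic. This is my own back-of-envelope sketch; the Planck time and the round figures for the time window, atom count and reaction speed are the assumptions stated above.

```python
from math import log10

planck_time    = 5.39e-44   # seconds
t_window       = 1e17       # seconds, the time window used in the clip
atoms_solar    = 1e57       # atoms in the solar system, as assumed above
planck_per_rxn = 1e30       # Planck-time states per fastest chemical reaction

chem_states = (t_window / planck_time / planck_per_rxn) * atoms_solar
print(f"chemical-time states ~ 10^{log10(chem_states):.0f}")       # ~ 10^87

# The haystack: 10^63 one-gram straws at roughly the density of water.
volume_m3 = 1e63 / 1e6                  # grams to cubic metres at 10^6 g per m^3
side_m    = volume_m3 ** (1 / 3)
print(f"cube side ~ 10^{log10(side_m):.0f} m, ~{side_m / 9.46e15:,.0f} light years")
```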

With this in mind, we may now look at the Dembski Chi metric, and reduce it to a simpler, more practically applicable form:

m: In 2005, Dembski provided a fairly complex formula that we can quote and simplify:

χ = – log2[10^120 · ϕS(T) · P(T|H)], where χ is “chi” and ϕ is “phi”

n:  To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from.)

o: So, since 10^120 ~ 2^398, we may do some algebra, as log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, where D2 = ϕS(T) and p = P(T|H)

Chi = Ip – (398 + K2), where Ip = – log2(p) and K2 = log2(D2)

p: But since 398 + K2 tends to at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! (if you want, 1,000 bits would be a limit for the observable cosmos)] and

q: as we can define a dummy variable for specificity, S, where S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi_500 =  Ip*S – 500, in bits beyond a “complex enough” threshold

(If S = 0, Chi = – 500, and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: A string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [notice the independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive.)

r: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was intelligently designed. (For instance, no-one would dream of asserting seriously that the English text of this post is a matter of chance occurrence giving rise to a lucky configuration, a point that was well-understood by that Bible-thumping redneck fundy — NOT! — Cicero in 50 BC.)

s: The metric may be directly applied to biological cases:

t: Using Durston’s Fits values — functionally specific bits — from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity (so that S = 1), we may apply the simplified Chi_500 metric of bits beyond the threshold (a worked sketch follows below, after item v):

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

u: And this raises the controversial possibility that biological examples such as DNA — which in a living cell stores far more than 500 bits of information — may be designed to carry out particular functions in the cell and the wider organism.

v: Therefore, we have at least one possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [FSCO/I].
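To make the reduced metric concrete, here is a minimal worked sketch (my own illustration, not drawn from Dembski’s or Durston’s papers). It implements Chi_500 = Ip*S - 500, checks the 10^120 ~ 2^398 conversion used at step o, and applies the metric to the 501-coin example of step q and the Durston Fits values of step t.

```python
from math import log2

# Check the conversion used at step o: 10^120 is about 2^398.
print(log2(10) * 120)          # ~398.6

def chi_500(ip_bits: float, s: int) -> float:
    """Reduced metric: bits beyond the 500-bit solar-system threshold (Chi_500 = Ip*S - 500)."""
    return ip_bits * s - 500

# Step q's coin example: 501 coins tossed at random carry ~501 bits of raw
# contingency but no independent specification, so S = 0 and Chi_500 stays at -500.
print(chi_500(501, s=0))       # -500.0

# The same 501 bits arranged to spell an ASCII message are specific (S = 1),
# so the metric goes just past the threshold.
print(chi_500(501, s=1))       # 1.0

# Step t's Durston Fits values (functionality on specific sequences taken as S = 1).
for protein, fits in {"RecA": 832, "SecY": 688, "Corona S2": 1285}.items():
    print(f"{protein}: {fits} fits -> {chi_500(fits, s=1):.0f} bits beyond the threshold")
# RecA: 332, SecY: 188, Corona S2: 785 -- matching the figures quoted above.
```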

But, but, but . . . isn’t “natural selection” precisely NOT a chance based process, so doesn’t the ability to reproduce in environments and adapt to new niches then dominate the population make nonsense of such a calculation?

NO.

Why is that?

Because of the actual claimed source of variation (which is often masked by the emphasis on “selection”) and the scope of innovations required to originate functionally effective body plans, as opposed to varying same — starting with the very first one, i.e. Origin of Life, OOL.

But that’s Hoyle’s fallacy!

Advice: when going up against a Nobel-equivalent prize-holder whose field requires expertise in mathematics and thermodynamics, one would be well advised to examine carefully the underpinnings of what is being said, not just the rhetorical flourish about tornadoes in junkyards in Seattle assembling 747 Jumbo Jets.

More specifically, the key concept of Darwinian evolution [we need not detain ourselves too much on debates over mutations as the way variations manifest themselves], is that:

CHANCE VARIATION (CV) + NATURAL “SELECTION” (NS) –> DESCENT WITH (UNLIMITED) MODIFICATION (DWM), i.e. “EVOLUTION.”

CV + NS –> DWM, aka Evolution

If we look at NS, this boils down to differential reproductive success in environments leading to elimination of the relatively unfit.

That is, NS is a culling-out process, a subtract-er of information, not the claimed source of information.

That leaves only CV, i.e. blind chance, manifested in various ways. (And of course, in anticipation of some of the usual side-tracks, we must note that the Darwinian view, as modified though the genetic mutations concept and population genetics to describe how population fractions shift, is the dominant view in the field.)

There are of course some empirical cases in point, but in all these cases, what is observed is fairly minor variations within a given body plan, not the relevant issue: the spontaneous emergence of such a complex, functionally specific and tightly integrated body plan, which must be viable from the zygote on up.

To cover that gap, we have a well-known metaphorical image — an analogy, the Darwinian Tree of Life. This boils down to implying that there is a vast contiguous continent of functionally possible variations of life forms, so that we may see a smooth incremental development across that vast fitness landscape, once we had an original life form capable of self-replication.

What is the evidence for that?

Actually, nil.

The fossil record, the only direct empirical evidence of the remote past, is notoriously that of sudden appearances of novel forms, stasis (with some variability within the form obviously), and disappearance and/or continuation into the modern world.

If by contrast the tree of life framework were the observed reality, we would see a fossil record DOMINATED by transitional forms, not the few strained examples that are so often triumphalistically presented in textbooks and museums.

Similarly, it is notorious that fairly minor variations in the embryological development process are easily fatal. No surprise, if we have a highly complex, deeply interwoven interactive system, chance disturbances are overwhelmingly going to be disruptive.

Likewise, complex, functionally specific hardware is not designed and developed by small, chance based functional increments to an existing simple form.

Hoyle’s challenge of overwhelming improbability does not begin with the assembly of a Jumbo jet by chance, it begins with the assembly of say an indicating instrument on its cockpit instrument panel.

The D’Arsonval galvanometer movement commonly used in indicating instruments: an adaptation of a motor, running against a spiral spring (to give deflection proportional to the input current across the magnetic field), with an attached needle moving across a scale. Such an instrument, historically, was often adapted for measuring all sorts of quantities on a panel.

(Indeed, it would be utterly unlikely for a large box of mixed nuts and bolts, by chance shaking, to bring together a matching nut and bolt and screw them together tightly; that being the first step to assembling the instrument by chance.)

Further to this, it would be bad enough to try to get together the text strings for a Hello World program (let’s leave off the implementing machinery and software that make it work) by chance. To then incrementally create an operating system from it, each small step along the way being functional, would be a bizarrely operationally impossible super-task.

So, the real challenge is that those who have put forth the tree of life, continent-of-function type approach have got to show empirically that their step-by-step path up the slopes of Mt Improbable is observable, at least in reasonable model cases. And, they need to show that chance variations on a Hello World will, within reasonable plausibility, lead to such a stepwise development that transforms the Hello World into something fundamentally different.

In short, we have excellent reason to infer that — absent empirical demonstration otherwise — functionally specific, complex, integrated organisation arises in clusters that are atypical of the general run of the vastly larger set of physically possible configurations of components. And the strongest pointer that this is plainly so for life forms as well is the detailed, complex, step by step, information-controlled nature of the processes in the cell that use information stored in DNA to make proteins. Let’s call Wiki as a hostile witness again, courtesy two key diagrams:

I: Overview:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA

II: Focusing on the Ribosome in action for protein synthesis:

The Ribosome, assembling a protein step by step based on the instructions in the mRNA “control tape” (the AA chain is then folded and put to work)

Clay animation video [added Dec 4]:

[youtube OEJ0GWAoSYY]

More detailed animation [added Dec 4]:

[vimeo 31830891]

This sort of elaborate, tightly controlled, instruction based step by step process is itself a strong sign that this sort of outcome is unlikely by chance variations.

(And, attempts to deny the obvious, that we are looking at digital information at work in algorithmic, step by step processes, are themselves a sign that there is a controlling a priori at work that must lock out the very evidence before our eyes in order to succeed. The above is not intended to persuade such objectors; they are plainly not open to evidence, so we can only note how their position reduces to patent absurdity in the face of evidence and move on.)

But, isn’t the insertion of a dummy variable S into the Chi_500 metric little more than question-begging?

Again, NO.

Let us consider a simple form of the per-aspect explanatory filter approach:

The per aspect design inference explanatory filter

 

You will observe two key decision nodes, where the first default is that the aspect of the object, phenomenon or process being studied is rooted in a natural, lawlike regularity that under similar conditions will produce similar outcomes, i.e. there is a reliable law of nature at work, leading to low contingency of outcomes. A dropped, heavy object near earth’s surface will reliably fall at an initial acceleration g = 9.8 m/s^2. That lawlike behaviour with low contingency can be empirically investigated and would eliminate design as a reasonable explanation.

Second, we see some situations where there is a high degree of contingency of possible outcomes under similar initial circumstances. This is the more interesting case, and in our experience it has two candidate mechanisms: chance, or choice. The default for S under these circumstances is 0. That is, the presumption is that chance is an adequate explanation, unless there is a good — empirical and/or analytical — reason to think otherwise. In short, on investigation of the dynamics of volcanoes and our experience with them, rooted in direct observations, the complexity of a Mt Pinatubo is explained partly on natural laws and partly on chance variations; there is no need to infer to choice to explain its structure.

But, if the observed configurations of highly contingent elements come from a narrow and atypical zone T not credibly reachable based on the search resources available, then we would be objectively warranted to infer to choice. For instance, a chance-based text string of length equal to this post would overwhelmingly be gibberish, so we are entitled to note the functional specificity at work in the post, and assign S = 1 here.

So, the dummy variable S is not a matter of question-begging, never mind the usual dismissive talking points.

I is of course an information measure based on standard approaches, through the sort of probabilistic calculations Hartley and Shannon used, or by a direct observation of the state-structure of a system [e.g. on/off switches naturally encode one bit each].
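As a small illustration of both routes just mentioned (a sketch of my own, with toy figures), the Hartley/Shannon measure Ip = -log2(p) gives 1 bit for a two-state switch and 7 bits for one of 128 equiprobable ASCII characters, while a direct state-structure count simply multiplies bits per element by the number of elements:

```python
from math import log2

def info_bits(p: float) -> float:
    """Ip = -log2(p): information, in bits, carried by an outcome of probability p."""
    return -log2(p)

print(info_bits(1/2))       # one on/off switch or fair coin: 1.0 bit
print(info_bits(1/128))     # one of 128 equiprobable ASCII characters: 7.0 bits

# Direct state-structure count: n independent two-state elements store n bits,
# e.g. a bank of 32 on/off switches encodes 32 bits.
print(32 * info_bits(1/2))  # 32.0
```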

And, where an entity is not a direct information-storing object, we may reduce it to a mesh of nodes and arcs, then investigate how much variation can be allowed while still retaining adequate function; i.e. a key and lock can be reduced to a bit measure of implied information, and a sculpture like that at Mt Rushmore can similarly be analysed, given the specificity of portraiture.

The 500 is a threshold, related to the limits of the search resources of our solar system, and if we want more, we can easily move up to the 1,000 bit threshold for our observed cosmos.

On needle in a haystack grounds, or monkeys strumming at the keyboards grounds, if we are dealing with functionally specific, complex information beyond these thresholds, the best explanation for seeing such is design.

And, that is abundantly verified by the contents of say the Library of Congress (26 million works) or the Internet, or the product across time of the Computer programming industry.

But, what about Genetic Algorithms etc, don’t they prove that such FSCI can come about by cumulative progress based on trial and error rewarded by success?

Not really.

As a rule, such are about generalised hill-climbing within islands of function characterised by intelligently designed fitness functions with well-behaved trends, and controlled variation within equally intelligently designed search algorithms. They start within a target zone T, by design, and proceed to adapt incrementally based on built-in, designed algorithms.

If such a GA were to emerge from a Hello World by incremental chance variations that worked as programs in their own right every step of the way, that would be a different story, but for excellent reason we can safely include GAs in the set of cases where FSCI comes about by choice, not chance.
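For concreteness, here is a toy hill-climber of the kind just described. It is entirely my own sketch, not drawn from any particular GA in the literature; the target phrase and alphabet are arbitrary. The point it illustrates is the one made above: the fitness function, its smooth gradient and the variation-and-selection loop are all supplied up front, so the search begins and stays inside an island of function.

```python
import random

TARGET   = "FUNCTIONAL"                      # illustrative target phrase
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s: str) -> int:
    # Designer-supplied fitness: partial matches are rewarded, giving a smooth slope.
    return sum(a == b for a, b in zip(s, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)   # start inside the island
while fitness(current) < len(TARGET):
    i = random.randrange(len(TARGET))
    candidate = current[:i] + random.choice(ALPHABET) + current[i + 1:]
    if fitness(candidate) >= fitness(current):               # keep non-worse variants
        current = candidate

print(current)   # converges to TARGET because the gradient was built in
```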

So, we can see what the Chi_500 expression means, and how it is a reasonable and empirically supported tool for measuring complex specified information, especially where the specification is functionally based.

And, we can see the basis for what it is doing, and why one is justified to use it, despite many commonly encountered objections. END

________

F/N, Jan 22: In response to a renewed controversy tangential to another blog thread, I have redirected discussion here. As a point of reference for background information, I append a clip from the thread:

. . . [If you wish to find] basic background on info theory and similar background from serious sources, then go to the linked thread . . . And BTW, Shannon’s original 1948 paper is still a good early stop-off on this. I just did a web search and see it is surprisingly hard to get a good simple free online 101 on info theory for the non mathematically sophisticated; to my astonishment the section A of my always linked note clipped from above is by comparison a fairly useful first intro. I like this intro at the next level here, this is similar, this is nice and short while introducing notation, this is a short book in effect, this is a longer one, and I suggest the Marks lecture on evo informatics here as a useful contextualisation. Qualitative outline here. I note as well Perry Marshall’s related exchange here, to save going over long since adequately answered talking points, such as asserting that DNA in the context of genes is not coded information expressed in a string of 4-state per position G/C/A/T monomers. The one good thing is, I found the Jaynes 1957 paper online, now added to my vault, no cloud without a silver lining.

If you are genuinely puzzled on practical heuristics, I suggest a look at the geoglyphs example already linked. This genetic discussion may help on the basic ideas, but of course the issues Durston et al raised in 2007 are not delved into.

(I must note that an industry-full of complex praxis is going to be hard to reduce to an in-a-nutshell summary. However, we are quite familiar with information at work, and with how we routinely measure it, as in the familiar: “this Word file is 235 k bytes.” That such a file is exceedingly functionally specific can be seen by the experiment of opening one up in an inspection package that will access the raw text symbols of the file. A lot of it will look like repetitive nonsense, but if you clip off such, sometimes just one header character, the file will be corrupted and will not open as a Word file. When we have a great many parts that must be right and in the right pattern for something to work in a given context like this, we are dealing with functionally specific, complex organisation and associated information, FSCO/I for short.)

The point of the main post above is that once we have this, and are past 500 bits or 1000 bits, it is not credible that such can arise by blind chance and mechanical necessity. But of course, intelligence routinely produces such, like comments in this thread. Objectors can answer all of this quite simply, by producing a case where such chance and necessity — without intelligent action by the back door — produces such FSCO/I. If they could do this, the heart would be cut out of design theory. But, year after year, thread after thread, here and elsewhere, this simple challenge is not being met. Borel, as discussed above, points out the basic reason why.

Comments
I may be wrong, but I don't believe I intended to use the word censored. At least not in the official sense of the word. What I see every day is proclamations from the rank and file that ID does not discuss the identity or attributes of the designer. I get this when I ask for some conceptual framework for design, some demonstration that design is possible by a non-omniscient being. I have noticed that several regular posters (plus Behe) have said they believe the designer is God. If true, that certainly answers my questions. But that isn't the common position of posters here. I see several major conceptual problems with ID. The first is that ID advocates calculate probabilities based on the length of entire coding sequences rather than on the length of mutational changes occurring from one generation to the next. Worst case scenario, it would be the change apparent from nearest cousin species. I notice gpuccio uses this better metric. I object to his assumptions, but at least he has avoided the common error of using entire sequences. The second major conceptual problem is assuming that "intelligence" can somehow navigate a search space better than an genetic algorithm. This may be true for naive algorithms, but Koonin and Shapiro have highlighted the fact that evolution uses very sophisticated algorithms, with modes of variation that can leap across the valleys of function. These kinds of variation have been modeled in sophisticated industrial GAs. They work, and they are getting better. They can, for example, beat most expert human checker players after starting with no knowledge of the game other than the rules. And these are in their infancy. They will only get more sophisticated. The folding game has been mentioned, and I admit it is possible for humans to beat early generation GAs at some things. But the GAs will be improved. It's primarily a matter of tweaking the sources of variation. But unless you have feedback regarding differential reproductive success, you have no ability to design or tweak living things. The difficulty in even knowing that life is possible is inherent in the skepticism about origin of life research. The fact that ID advocates are skeptical about the success of such research indicates that knowing how to assemble simple replicators (or even knowing it is possible) would require something beyond intelligence. It would require a level of intelligence that we attribute to deities. So it is my opinion that ID stands as a coherent idea if it admits (as Behe does) that the the designer is God. If it does not admit this, and does not demonstrate, at least in principle, that design by finite beings is unlikely, then it is vacuous. I realize I have raised other questions about my position, but the post is already too long.Petrushka
January 27, 2012, 11:01 AM PDT
Petrushka, Have you not just changed the subject, completely retreating from your clearly worded and then repeated assertion that the question you asked is somehow censored?
Actually it is essential to know the capabilities of the agent that you are claiming to have created and maintained life.
By this statement you indicate again that you lack even a basic comprehension of the concept you are attempting to argue against. It is transparent that regardless of any attempts to explain it to you, you will cast ID as what it is not, because your argument against is founded on your misunderstanding of it. Do you realize that it is possible to formulate arguments against an accurate understanding of ID? It's possible. Others have done it. It just takes a little more work.
You already know and admit this because you demand it of evolution.
Yes, I demand that evolution explain exactly what it claims to explain. Should I not? Why not? What does ID have to do with the answer to that question? Pointing the finger at ID doesn't make the question go away.
Since evolution is increasingly able to fill in details of how large changes occur and how new structures are invented
It follows that if it can fill in details then it can fill in a detail. The converse is true: if it cannot fill in a detail, it cannot be said to fill in "details." In light of your above statement, is it unreasonable to ask how evolution 'fills in a detail of how a large change occurs or how a new structure is invented?' You did use the word "detail," and probably wish you hadn't.ScottAndrews2
January 27, 2012, 08:48 AM PDT
Petrushka, We know the capabilities of unknown designers by studying the design they left behind. And we know the capabilities of "evolution" by what we observe and test. Unfortunately there isn't anything that says accumulations of random mutations can construct anything of note. And that is why it is vacuous. Thanks fer playin'...Joe
January 27, 2012, 08:25 AM PDT
Actually it is essential to know the capabilities of the agent that you are claiming to have created and maintained life. You already know and admit this because you demand it of evolution. Since evolution is increasingly able to fill in details of how large changes occur and how new structures are invented, it is silly to maintain a science fiction fantasy that aliens or whatever come to earth every million years or so to drop in a new protein domain. I note the change of tone in recent weeks regarding Koonin and Shapiro. Suddenly Dembski has noticed that they are not supporting intervention and not supporting foresight. They are describing how variation and selection can build complex new things. Until ID can describe a process that implements foresight and does not require cut and try, it is vacuous.Petrushka
January 27, 2012, 07:44 AM PDT
Nearly every thread has someone asserting that ID says nothing about the attributes or behavior of the designer, and must not be asked to do so.
The first part of that sentence is mostly correct, with the exception of the "I" in ID. The second part - "must not be asked?" Or what? William Dembski will float through my bedroom window at night and haunt me with his arms stretched out like in his old Wikipedia picture? I think you have personally asked the question about a million times. Have your posts been deleted? You can ask as many times you want. But your own words quoted above indicate that you already know the answer. It's actually a very simple answer. Why you would want to ask a question over and over when the answer is very simple and you provide it yourself in the same sentence as the question is beyond me. Both the question and your wording of it suggest that you don't understand what you are asking about and are willfully determined not to, so why even ask? But let's put it to the test anyway and see if I get censored: Someone, please tell me what ID tells us about the attributes and behavior of the designer? (Assuming, as one should not, that ID refers specifically to the design of one thing and/or one specific designer.)ScottAndrews2
January 27, 2012, 06:58 AM PDT
I think I’ll hold to the view that an increment is an increment, whatever its nature, type, or source.
The English word covers both meanings, so that's not inaccurate. In this case the word has a specific meaning in a certain context and a drastically different meaning in another. Even then I wouldn't split hairs, but when we're talking specifically about how biological evolution operates and someone compares the increments of that evolution to the addition of new functions in software the hair-splitting is called for. If someone asks "How does evolution make bird wings" someone else might correctly point out that evolution is about genetic changes, not specifically how entire new functions get created and added. That's why it's astounding that, just to make a point, someone arguing the other side would directly compare not the result of evolution but its actual mechanical process to the adding of entirely new functions.ScottAndrews2
January 27, 2012, 06:01 AM PDT
Petrushka,
But the very word design is a metaphor or an analogy.
I read your posts with interest because I think they exhibit rationale and good thinking. However, honestly, I think this phrase shows a big weakness in your argumentation. IMO, you are locking yourself up from understanding the unique role of choice contingent causality in nature. A whole lot of reality cannot be adequately understood without it.Eugene S
January 27, 2012, 05:27 AM PDT
Onlookers: I simply note that increments face a key threshold barrier, complexity in the context of specificity. If the "increments" in question are functionally specific per a reasonable objective warrant, and are beyond 500 - 1,000 bits, blind chance and mechanical necessity cannot credibly -- per empirical observation -- account for such. Design can. Most significant software or editorial changes to works of consequence pass that threshold. Similarly, OOL requires at least 100,000 bits of prescriptive info de novo in a code, and novel body plans 10 - 100 millions. If that is the "step size" of the "increments," that is tantamount to saying: materialist magic. The mocking nom de net is showing a sock puppet character this morning. KFkairosfocus
January 27, 2012, 04:46 AM PDT
P: Re: I see no 747 aircraft in the biological world. I see no CPUs, no software. We see birds and bats that put the 747 to pale, we see ribosomes that are NC machine factories executing digitally coded prescriptive, algorithmic info strings, we see the mRNA and DNA tapes that store digitally coded data. Your response is amazing, utterly and inadvertently revealing! GEM of TKIkairosfocus
January 27, 2012, 04:40 AM PDT
SA well said. P, nope, that old "analogies break down" red herring does not hack it in this context. SA is pointing to the key dis-analogy between what Darwin's mechanisms [as updated] would have to do to have probabilistically meaningful steps, and what we know algorithms and code to implement do. Please respond on-point.KFkairosfocus
January 27, 2012, 04:37 AM PDT
Onlookers, watching the exchange between GP and Ch, is enough to tell me that it will be all but impossible for Ch to acknowledge some fairly obvious things: 1 --> There are three well known causal patterns in the empirical world, chance necessity and/or agency. Way back in 1970, in fact Monod wrote a book that in the English version, bore the title "Chance and Necessity" as in, design need not apply. This last echoes a discussion that goes back to Plato in The Laws Bk X, on the three causal factors. 2 --> Each of these has characteristic signs, and the three may be present in a situation, so per aspect we can tease out relevant factors. Necessity is marked by lawlike necessity tracing to blind mechanical forces. 3 --> A dropped heavy object near earth falls at 9.8N/kg initial acceleration, reflecting gravity, a force of mechanical necessity. This is reliable and of low contingency. That's how we identified a natural law at work, and an underlying force of gravitation. 4 --> If the object happens to be a fair die, the outcome of falling has another aspect, high contingency: it tends to tumble and come to read a value from 1 - 6, in accord with the uncorrelated forces and trends at work, and hitting on the eight corners and twelve edges, so leading to in effect a chance outcome. 5 --> Anything that has that sort of high contingency, statistically distributed occurrence in accord with a random model is similarly chance driven. In experiments, there is usually an irreducible scatter due to chance effects, which as to be filtered off to identify mechanical necessity acting. Already, two prongs of the ID explanatory filter are routinely in action in scientific contexts! 6 --> Now, if we look at how say a blocks/plots-treatments-controls experiment is done, we see that we have ANOVA at work, and we are looking to identify the effects of the ART-ificial intervention of manipulating a treatment in blocks and degrees. We want to detect the effects of designed inputs, chance scatter and underlying laws of necessity. Again, routine in science. 7 --> More generally ART often leaves characteristic traces, such as functionally specific complex organisation and associated information. The text of posts in this thread is sufficient to show this -- dots are organised in ways that bear info, which is functionally specific and complex. 8 --> Routinely, we do not infer to chance causing the highly contingent outcome, but design. And, it has been shown over and over, that the resources of the solar system or the observed cosmos are inadequate to explain a specific, complex outcome on chance and necessity without design. 9 --> In short, as has been pointed out over and over and willfully ignored or brushed aside:
a: Origin of Life has to explain the rise of the language, codes, algorithms and machines involved in a von Neumann self replicator joined to a metabolising automaton
b: absent such, no root for the darwinian tree of life
c: in addition, this is the first body plan, and it shows, abundantly, how the resources of chance and necessity on the gamut of solar system or observed cosmos are hopelessly inadequate.
d: best explanation for OOL, design, i.e. design is on the table here, and of course again at the level of a cosmos that is fine tuned for life.
e: despite howls and objections, design is a credible candidate possible explanation and must be reckoned with in the context of explaining OOL and OO body plans etc, as such exhibit FSCO/I which is well known to be produced by design. And ONLY observed to be done by design, GAs being a case in point.
f: So, to arbitrarily impose the Lewontinian a priori materialism objection is to beg the question.
g: When we must explain more complex body plans, we face much higher levels of FSCO/I involved, i.e. we now face having to account for de novo origin of organ systems and life forms, where it is only the chance variation in CV + NS --> DWM, aka Evolution, that can be a source of information.
h: This is predictably objected to, but the point is that the selection part is in effect that some inferior varieties die off in competition with the superior ones, so INFO IS SUBTRACTED by NS.
i: but believe it or not, some still want to assert -- happens at UD all the time -- that subtraction equals addition. Own-goal. Oops.
j: So, we have to explain millions of bits worth of functionally specific complex info, on chance variation, in a context where until the millions of bits are in place, the systems are non-functional.
k: the bird lung is emblematic: a bellows lung works, and a one way flow lung with sacs works too, but intermediates do not work and are lethal in minutes.
l: So, on observed -- actually observed -- cases, how do we get from the one to the other, or how do they arise independently in the first place, by CV + DRS, i.e. differential reproductive success --> DWM (descent with mod, unlimited)?
m: Failing this, what cases of such observed macroevo giving rise to body plan components like this, do we have?
n: failing such, what do we have that empirically shows CV + complex functional selection [not by inspection for hitting an easy-reach target like three-letter words] off superior performance + replication with incremental change --> FSCO/I, where we are not playing around within an existing island of function?
o: NOTHING
p: In short, we have every reason to see that FSCO/I -- rhetorical objections notwithstanding -- is a good, empirically reliable sign of design
10 --> Here comes the "how dare you appeal to agency" rebuttal. ANSWERED. 11 --> So, now we have a good sign that reliably points to cases of design, and we have circumstances that point to non-human designers and non-dinosaur designers, etc. So, why not simply take that seriously? 12 --> All of this has been pointed out, explained and even demonstrated over and over again, but there is a clear roadblock. It is ideological, not scientific: a priori materialism. 13 --> So, we are back full circle to where Johnson was in 1997 when he pointed out what has gone wrong with origins science thusly:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
14 --> "the materialism comes first . . ." 15 --> So until the proud tower collapses in ignominy, we will keep on seeing the sort of arguments in a materialist circle that will not listen to evidence, that we see year after year here at UD. 16 --> But, in the meanwhile, let us insist: until you show the capability of darwinian mechanisms to achieve body plan origin level results, the whole is an ideological enterprise once we move beyond things as minor as finch beak sizes or the difference between North American Elk and European red deer. Given the interbreeding in New Zealand, are they still classified as separate species? GEM of TKIkairosfocus
January 27, 2012, 04:33 AM PDT
champignon: Maybe my assumption is right. Maybe my readers don't understand the difference between: "1. gpuccio has decided that feature X could not have evolved." and: "1. gpuccio has made detailed arguments to demonstrate that no convincing explanation has ever been given of how feature X could have evolved. He has many time pointed to those arguments and to those posts. And some specific reader of mine has never commented on that. Moreover, if that specific reader about whom I am, according to some, making assumptions (maybe justified), were kind enough to read what I have written, he would probably (but sometimes I am too optimistic)understand that an explicit calculation of the probabilities of a random event in relation to the probabilistic resources of the system that is supposed to have generated it is the basis to scientifically analyze any model implying the random generation of that event; and that the explicit proposal of adding some explicit necessity mechanism, like NS, to the model can be quantitatively integrated in the model, and still allow quantitative evaluations of the global probabilities of the final event, as gpuccio has shown in those long, and evidently not read by some, posts. So, you go on with your statements, I will go on with my assumptions. We live in a free world (more or less).gpuccio
January 27, 2012, 03:49 AM PDT
Ch I have already taken your argument apart step by step, pointing out the strawmen. KFkairosfocus
January 27, 2012, 03:48 AM PDT
34.1.3.1.11 Petrushka You wanted examples. Today the Western culture is becoming increasingly homosexual. We are all drifting towards hell. Instead of tolerance to this phenomenon (for want of a better word), schools and universities should educate people that this is psychiatric disorder rather than the norm. Take a medical dictionary that was printed say in the 1950-s and compare (I may be wrong as regards the dates because I don't know maybe the situation in the US was already bad then, so I guess you want a foreign reference). I hope you will see there quite adequate descriptions of the case as perversion, a medical case. I ask you, is it possible to openly discuss this issue in class in any US university today without fear of the consequences (at least in the form of a public disclaimer in boldface on the door to your university room)?Eugene S
January 27, 2012, 02:24 AM PDT
Petrushka: As you should know, I have never said that. Just the opposite. It is perfectly correct, however, that it is not necessary to know anything about those things to make a design inference.gpuccio
January 26, 2012, 10:46 PM PDT
You make it sound like some sort of weird hooded order that meets in catacombs. Are you sure you’ve read the FAQ?
It's not hidden. It's out in the open. Nearly every thread has someone asserting that ID says nothing about the attributes or behavior of the designer, and must not be asked to do so.Petrushka
January 26, 2012, 07:07 PM PDT
Thank you. I think I'll hold to the view that an increment is an increment, whatever its nature, type, or source. If a new gene is added to a genome, the complement of genes has undergone an increment. If a new function arises, the repertoire of functions has been incremented. And so on. Of course, incremental change is decidedly not the ONLY thing going on, and it doesn't define evolution. But, it seems to me, it can be an important component of evolution.Bydand
January 26, 2012, 03:40 PM PDT
Ok. It was asserted that evolution, a process of selected incremental changes, can evolve protein sequences, and in that context it was mentioned that changes to software are also incremental. In this context it cannot be mistaken that incremental changes to one are being compared to incremental changes to the other. Evolution is a change in frequencies of alleles in the gene pool of a population. One could say that it in practice it is genetic variation and selection, but if I say that someone will correct me with this definition. But it doesn't matter. Although evolution is commonly cited to explain why elephants have trunks or why spiders make webs, evolution is (supposedly) not about explaining such functions. It is about the propagation of specific genes, which supposedly, maybe add up over time to those functions. Genes or alleles are the primary increments of evolution. (Other factors, such as environment, may play a role. People grow taller with better nutrition, but elephants don't grow trunks because of better nutrition.) If giraffes descended from tapir-like creatures then the differences between them are the result of a number of incremental genetic changes. How or why they add up that way is an open question, or at least that's what people say when you point out that selection and variation are insufficient. But that's beside the point. Genetic changes are the increment. Any noticeable "incremental" change in software such as a new feature or even a bug fix consists at the very least of the addition of complete complex instructions, and usually more elaborate functions. It may be that the occasional bug fix requires fixing a one-keystroke typo, but one does not develop software by changing single characters. Forget randomness. Even if you know exactly what you're doing you can't write or enhance software by taking existing software and replacing, deleting, or adding a single character. In a nutshell: The incremental changes of biological evolution are genes, not functions. The incremental changes of software are complete instructions and new functions. Ironically, if I say that biological evolution is about the appearance of new functions, I am certain to be corrected. I have said it, and I've been corrected. So if someone arguing for the power of biological evolution to discover new function equates or even compares the incremental changes of such evolution to the appearance of entirely new functions in software either - they think that new functions are the increments of change in biological evolution, which means that they don't understand it well enough to argue for it - they think that software developers write new software by poking at individual bytes - we don't even write plain text that way so I find that hard to believe - or they are begging the question in the most egregious manner, using the assertion that biological evolution produces new functions like software development as evidence of itself.ScottAndrews2
January 26, 2012, 02:05 PM PDT
gpuccio, You are assuming a remarkable stupidity on the part of your readers. Your argument boils down to this:
1. gpuccio has decided that feature X could not have evolved.
2. The probability that the predefined function of X could have been found by a blind search is very low [as everyone, including 'Darwinists', has always agreed].
3. Therefore, X could not have evolved.
In case the weakness of that argument escapes you, let me elaborate a bit. How to use dFSCI to determine whether feature X could have evolved, according to gpuccio:
1. Compute the 'quantitative functional complexity' of X, otherwise known as the negative log base 2 of the probability of finding the target zone of X using a blind search. Nobody thinks that blind search is responsible for X, but do the computation anyway. If X is so incredibly simple that it actually could have been found by a blind search, the QFC will be low, so drop the claim that it was designed.
2. Look at X and decide that it could not have evolved.
3. Redesignate the QFC as 'dFSCI' and conclude that X did not evolve.
In other words, if you think that X didn't evolve, and if X is not so simple that even a blind search could have found it, then conclude that X didn't evolve. The actual QFC number means nothing, unless you were stupid enough to claim that an extremely simple feature must have been designed. Then, and only then, the number would tell you to drop your claim of design, as if you weren't smart enough to figure that out without the calculation. So a low QFC number can cause you to drop your claim of design, but a high QFC number can't tell you that something was designed unless you have already concluded that it could not have evolved. The number itself means absolutely nothing in that case.champignon
January 26, 2012, 01:52 PM PDT
I have the same problem with design that designers have with evolution. How did it start? Who was the first designer who foresaw the possibility of life?
Fair enough. How does that advance evolutionary theory?
I can understand why this question is forbidden in the ID movement. But it strikes me as a kind of censorship.
You make it sound like some sort of weird hooded order that meets in catacombs. Are you sure you've read the FAQ? That's like saying that discussion of the effects of weightlessness on humans is forbidden in astronomy. It's not forbidden. It's not even completely irrelevant. But if you persistently assert that it's an astronomical question than people will rightfully ask if you know what astronomy is.ScottAndrews2
January 26, 2012, 01:24 PM PDT
then please enlighten me - I seem to be missing the thrust of your argument.Bydand
January 26, 2012, 01:22 PM PDT
Actually, drug companies do use directed evolution to find new molecules. Computers might be cheaper, but they are going to use GAs, even if some humans have a temporary advantage. Humans used to beat computers at chess and checkers. I'm confused by your interpretation of "utility." I'm not thinking of simple targets like protein design. I'm thinking of differential reproductive success, most of which is determined by variations in regulatory networks. But it's also possible that the most beautifully structured protein may not be the most useful. It depends on context, and for living things, the context is reproductive success. I have the same problem with design that designers have with evolution. How did it start? Who was the first designer who foresaw the possibility of life? I can understand why this question is forbidden in the ID movement. But it strikes me as a kind of censorship.Petrushka
January 26, 2012, 12:58 PM PDT
They are blind, but they grope the nearby space efficiently.
No one questions the gropability of nearby space. It's been groped before. But your own repeated question - how do we know that the spaces aren't connected - indicates that you don't know that they are. That's the whole question. 'Outlining the concept of an algorithm' just doesn't mean anything. People do that all the time where I work. Then they jump to another project and leave someone else with the concept they outlined.ScottAndrews2
January 26, 2012, 12:52 PM PDT
champignon: dFSCI, as you compute it, considers only blind search. It does not consider evolution. There's a very easy way to see this: come up with a formula for the probability of hitting a target by blind search. Express it in bits by taking the negative log base 2. What do you get? Exactly the same formula you presented for computing dFSCI. By considering only blind search, you are assuming that the probability of evolution is zero. But that's the very question we're trying to answer! I can only restate what I have said, and you seem not to understand: "Point 1 is wrong, because to affirm dFSCI we must have considered all known necessity explanations, and found them lacking." I have considered all known necessity explanations and found them lacking. If you had the patience to read my posts in the other thread, we could maybe discuss, and not only waste our time. I understand it is a deep concept for you, but what I am saying is that the computation of dFSCI must be accompanied by an analysis of known necessity explanations. If those explanations do not exist, or are found to be wrong, then the random origin remains the only alternative explanation, and it can be falsified by dFSCI. But perhaps I am asking too much from your understanding: such complex concepts... But then the "quantitative functional complexity" part doesn't do anything. All the work is done by the purported "empirical falsification of proposed necessity mechanisms". The dFSCI number is irrelevant. Have you lost your mind? The falsification of necessity mechanisms rules out necessity mechanisms. dFSCI falsifies the random origin explanation. Again, what is wrong with you? How many times must I say trivial things? I don't pretend that you agree, but why not understand what I am saying? But the only thing you are quantifying is the probability of hitting a predefined target using blind search. Evolution is not a blind search, and it does not seek a predefined target. What I am quantifying is the probability that what happened happened in a random way. Again, it is not difficult. The neo darwinian algorithm relies on RV to generate new information. If you want to call it a blind search, be my guest. The problem of the predefined target is simply nonsense. I have answered it two or three times in the last few days, referring you to the previous answers, and you still repeat it like a mantra. If you want to waste time, it's your choice. I am here to discuss with people who are able to discuss, and want to do it. Nobody in the world thinks that the ribosome or the eye are the products of "simple RV", without selection. You are answering a question that nobody is asking. dFSCI changes nothing. I am referring to those results for which there is no necessity explanation. Like the ribosome, the eye, and basic protein domains. So we know they could not originate by simple RV, and that there is no credible necessity explanation for them. Therefore we infer design. Can you see how useful dFSCI is? And here is post 40.2 without the typos. I hope you are happier now: "champignon: dFSCI is a reliable indicator of design. What you seem to forget is that affirming that an object exhibits dFSCI, and therefore allows a design inference, implies, as clearly stated in my definition, that no known algorithm exists that can explain the observed function, not even coupled to reasonable random events.
That’s why evaluating dFSCI and making the design inference is more complex, and complete, than simply calculating the target space/search space ratio. It includes also a detailed analysis of any explicitluy proposed necessity explanation of what we observe. Therefore, if correctly done, the evaluation of dFSCI allows the design inference, and answers your objections, because affirming dFSCI equals to say: we know no credible way the observed function could have evolved. As already said, I have analyzed in detail the credibility of the neo darwinian algorithm, including its necessity component, and found it completely lacking. Therefore, my belief that protein domains exhibit true dFSCI, and allow a design inference, is well supported."gpuccio
January 26, 2012 at 12:43 PM PDT
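For readers following the numbers in the exchange above: the calculation both parties refer to takes the probability of hitting a functional target by a single blind draw, i.e. the ratio of target space to search space, and expresses it in bits by taking the negative log base 2. Below is a minimal sketch in Python using purely illustrative placeholder values for the alphabet size, sequence length, and target-space size; none of these are measured biological quantities.

```python
import math

# Illustrative placeholders only -- not measured biological values.
alphabet_size = 20              # e.g. 20 amino acids for a protein sequence
sequence_length = 150           # hypothetical length of the functional sequence
functional_sequences = 10**20   # assumed size of the target (functional) space

search_space = alphabet_size ** sequence_length      # all possible sequences
p_blind_hit = functional_sequences / search_space    # chance of one blind draw landing in the target

# Functional complexity in bits: the negative log base 2 of that probability.
functional_bits = -math.log2(p_blind_hit)

print(f"Search space:    {search_space:.3e} sequences")
print(f"P(blind hit):    {p_blind_hit:.3e}")
print(f"Functional bits: {functional_bits:.1f}")
```

As the thread itself shows, the disagreement is not over this arithmetic, which both sides accept, but over whether the blind-search probability is the relevant one once selection and other proposed mechanisms are taken into account.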
I realize that my tone is switching to cranky, so I'm going to have to drop this soon.
If modern living things have efficient methods for locating function (as Shapiro asserts) it is because search processes have been refined over billions of years.
So you say, begging the question. Wait - what? You didn't know that 'modern living things [us?] have efficient methods for locating function' until Shapiro said so? Apparently there is no bottom to this.
The most efficient way to design biological molecules will always be found in chemistry itself.
Okay, so why do they have a team of gamers working on protein inhibitors for the Spanish flu? Why don't they just use chemistry instead? Excuse me, chemistry, may we please have some protein inhibitors for the Spanish flu? Thank you.
The folding game doesn’t even address the most important design problem, that of utility. And it doesn’t address the most common and powerful kind of evolution, that of regulation.
The utilities they are targeting are relatively simple. But you are setting aside the massive point that they are accomplishing what GAs could not. This is your repeated assertion that evolution is superior to intelligence (except that evolution is intelligence - whatever, it's everything) put to the test, and intelligence is winning. This is a real-life demonstration of the opposite of what you keep saying. Why do I have no doubt that you'll keep saying it anyway?ScottAndrews2
January 26, 2012 at 12:27 PM PDT
Just to make myself clear, what causes you to doubt that evolution has found efficient search algorithms? Shapiro has outlined his conception of such algorithms. They involve chemistry, not magic. They do not have foresight. They simply employ observable kinds of mutations and genomic change that maximize the potential for finding useful stuff. They are blind, but they grope the nearby space efficiently. Which is why the ID community seems to have awakened to the fact that Shapiro may not be the ally they thought he was.Petrushka
January 26, 2012 at 12:20 PM PDT
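Petrushka's distinction between a blind global search and mechanisms that "grope the nearby space" can be pictured with a small sketch (Python, with a hypothetical 20-letter amino-acid alphabet and a made-up sequence length). It models nothing about real mutational machinery; it only illustrates the difference between drawing a sequence from scratch and stepping to a one-mutation neighbour of a sequence that already exists.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def random_sequence(length):
    """A single blind draw from the whole sequence space."""
    return "".join(random.choice(AMINO_ACIDS) for _ in range(length))

def point_mutant(seq):
    """A step in the 'nearby space': change one position of an existing sequence."""
    pos = random.randrange(len(seq))
    new_residue = random.choice(AMINO_ACIDS.replace(seq[pos], ""))
    return seq[:pos] + new_residue + seq[pos + 1:]

# A blind global draw starts from nothing; a point mutant stays one step away
# from a sequence that already exists.
parent = random_sequence(150)
neighbour = point_mutant(parent)
print(sum(a != b for a, b in zip(parent, neighbour)), "position(s) differ")
```

Whether such neighbourhood steps can accumulate into the structures discussed above is, of course, exactly what the two sides dispute.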
When we present the problem of designing a protein as a cipher it follows that our brains do not process the problem particularly well. Few or none of us are wired to process numbers that way. But just because a problem poses the same complexity as a cipher does not mean that it must be processed as such.
Sure there is a spin. You assign magical attributes to a never observed entity, and biologists assign attributes to evolution. When the behavior of living things is studied in sufficient detail, the attributes are found. As with Lenski and Thornton. If modern living things have efficient methods for locating function (as Shapiro asserts) it is because search processes have been refined over billions of years. As for the folding game, it still takes enormous amounts of time to do something chemistry does in the blink of an eye. The most efficient way to design biological molecules will always be found in chemistry itself. The folding game doesn't even address the most important design problem, that of utility. And it doesn't address the most common and powerful kind of evolution, that of regulation.Petrushka
January 26, 2012 at 12:05 PM PDT
I’m aware that analogies have limits.
No one is saying that you are not. You are saying that comparing the increments of evolution to new features in software is within those limits. It's your understanding of the things you compare, shown by the very fact that you compare them, that I'm questioning, not whether you understand analogies.
So why not give up the design analogies, and critique evolution based on whether it posits any chemistry that cannot happen?
The comparisons between certain features within biology and those in human-designed systems are not analogies. Let's say I agree with you. Here goes: Evolution does not posit any chemistry that is proven impossible. And I'm 75% sure I actually mean it, even if I'm saying it for the sake of argument. If everyone agreed with that statement, how would that scientifically advance evolution? Would it not join a very, very long list of things - even contradictory things - that have not been proven impossible?ScottAndrews2
January 26, 2012 at 11:48 AM PDT
I'm aware that analogies have limits. Are you aware of that? Why is the debate over ID littered with references to human-made designs or to human verbal abilities if analogies are forbidden? If ID advocates are willing to forgo all analogies and stick entirely to what chemistry can and cannot do, I'll go that route. But the very word design is a metaphor or an analogy. I see no 747 aircraft in the biological world. I see no CPUs, no software. I see chemistry. So why not give up the design analogies, and critique evolution based on whether it posits any chemistry that cannot happen?Petrushka
January 26, 2012 at 11:18 AM PDT
Surely “incremental change” means change by addition or accretion. It carries no baggage that I can see of the size or type of each addition.
Then I'm afraid you don't understand biological evolution either. I'm not trying to paint myself as an expert. I'm not even a biologist. But I do know that even the vaguest definitions of evolution, entirely separated from mechanics, describe exactly what the increments of change are.ScottAndrews2
January 26, 2012 at 11:08 AM PDT