
ID Foundations, 11: Borel’s Infinite Monkeys analysis and the significance of the log reduced Chi metric, Chi_500 = I*S – 500


 (Series)

Emile Borel, 1932

Emile Borel (1871 – 1956) was a distinguished French mathematician. The son of a minister, he came from France’s Protestant minority, and he was a founder of measure theory in mathematics. He was also a significant contributor to modern probability theory, and so Knobloch observed of his approach that:

>>Borel published more than fifty papers between 1905 and 1950 on the calculus of probability. They were mainly motivated or influenced by Poincaré, Bertrand, Reichenbach, and Keynes. However, he took for the most part an opposed view because of his realistic attitude toward mathematics. He stressed the important and practical value of probability theory. He emphasized the applications to the different sociological, biological, physical, and mathematical sciences. He preferred to elucidate these applications instead of looking for an axiomatization of probability theory. Its essential peculiarities were for him unpredictability, indeterminism, and discontinuity. Nevertheless, he was interested in a clarification of the probability concept. [Emile Borel as a probabilist, in The probabilist revolution Vol 1 (Cambridge Mass., 1987), 215-233. Cited, Mac Tutor History of Mathematics Archive, Borel Biography.]>>

Among other things, he is credited as the worker who introduced a serious mathematical analysis of the so-called Infinite Monkeys theorem (more on this in a moment).

So, it is unsurprising that Abel, in his recent universal plausibility metric paper, observed that:

Emile Borel’s limit of cosmic probabilistic resources [c. 1913?] was only 10^50 [[23] (pg. 28-30)]. Borel based this probability bound in part on the product of the number of observable stars (10^9) times the number of possible human observations that could be made on those stars (10^20).

This figure has, of course, expanded a bit since the breakthroughs in astronomy occasioned by the Mt Wilson 100-inch telescope under Hubble in the 1920s. However, it does underscore how centrally important the issue of available resources is in rendering a given — logically and physically strictly possible but utterly improbable — potential chance-based event reasonably observable.

We may therefore now introduce Wikipedia as a hostile witness, testifying against known ideological interest, in its article on the Infinite Monkeys theorem:

In one of the forms in which probabilists now know this theorem, with its “dactylographic” [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel‘s 1913 article “Mécanique Statistique et Irréversibilité” (Statistical mechanics and irreversibility),[3] and in his book “Le Hasard” in 1914. His “monkeys” are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel’s image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[4]

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Let us emphasise that last part, as it is so easy to overlook in the heat of the ongoing debates over origins and the significance of the idea that we can infer to design on noticing certain empirical signs:

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Why is that?

Because of the nature of sampling from a large space of possible configurations. That is, we face a needle-in-the-haystack challenge.

For, there are only so many resources available in a realistic situation, and so only so many observations can be actualised in the time available. As a result, one who is confined to a blind, probabilistic, random search process will soon enough run into the following issue:

a: IF there is a narrow and atypical set of possible outcomes T, which

b: may be described by some definite specification Z (one that does not boil down to listing the members of T or the like), and

c: which comprises a set of possibilities E1, E2, . . . , En drawn from

d: a much larger set of possible outcomes, W, THEN:

e: IF, further, we do see some Ei from T, THEN also

f: Ei is not plausibly a chance occurrence.

The reason for this is not hard to spot: when a sufficiently small, chance-based, blind sample is taken from a set of possibilities, W — a configuration space — the likeliest outcome is that what is typical of the bulk of the possibilities will be picked up, not what is atypical. And this is the foundation-stone of the statistical form of the second law of thermodynamics.

Hence, Borel’s remark as summarised by Wikipedia:

Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.
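To put rough numbers on Borel’s picture, here is a minimal Python sketch; the keyboard size, typing rate, time span, and target length below are my own illustrative assumptions, not Borel’s figures. Even one short, specific 100-character passage stays hopelessly out of reach:

```python
# Rough illustration of Borel's point. All figures here are illustrative
# assumptions (keyboard size, typing speed, target length), not Borel's own.
keys          = 50                       # symbols on the typewriter
target_len    = 100                      # characters in one short target passage
monkeys       = 1_000_000
chars_per_sec = 10
seconds       = 10 * 3600 * 365 * 100    # ten hours a day, for a century

p_one_window = (1 / keys) ** target_len              # one 100-character attempt matching
attempts     = monkeys * chars_per_sec * seconds     # ~number of character windows typed

print(f"P(one window matches) ~ {p_one_window:.1e}")            # ~1e-170
print(f"windows typed         ~ {attempts:.1e}")                # ~1e16
print(f"expected matches      ~ {p_one_window * attempts:.1e}")  # ~1e-154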

In recent months, here at UD, we have described this in terms of searching for a needle in a vast haystack [a corrective update follows]:

let us work back from how it takes ~ 10^30 Planck-time states for the fastest chemical reactions, and use this as a yardstick: in 10^17 s, our solar system’s 10^57 atoms would undergo ~ 10^87 “chemical time” states, about as fast as anything involving atoms could happen. That is only 1 in 10^63 of the ~ 10^150 configurations possible for just 500 bits. So, let’s do an illustrative haystack calculation:

Let us take a straw as weighing about a gram and having a density comparable to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up about 10^57 cubic metres. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, so a light year is about 1/1,000 of that distance. If we were to superpose such a notional 1,000-light-year-on-the-side haystack on the zone of space centred on the sun, leave in all the stars, planets, comets, rocks, etc., and take a random sample equal in size to one straw, then by absolutely overwhelming odds we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
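For readers who want to check the arithmetic, here is a minimal Python sketch of the figures just used; the inputs are the rough, order-of-magnitude values stated above, not precise measurements:

```python
from math import log10

# Checking the order-of-magnitude figures used above.
planck_time_s = 5.4e-44                    # seconds
chem_tick_s   = 1e30 * planck_time_s       # ~10^30 Planck times per fast reaction
atoms_solar   = 1e57                       # rough atom count of the solar system
window_s      = 1e17                       # time window in seconds

states = atoms_solar * (window_s / chem_tick_s)
print(f"chemical-time states ~ 10^{int(log10(states))}")                     # ~10^87

configs_500 = 2.0 ** 500                   # configurations of 500 bits
print(f"2^500 ~ 10^{int(log10(configs_500))}")                               # ~10^150
print(f"fraction searchable ~ 1 in 10^{int(log10(configs_500 / states))}")   # ~10^63

# Haystack: 10^63 straws at ~1 g each, density ~1 g/cm^3 -> 10^63 cm^3 = 10^57 m^3
haystack_m3  = 1e63 * 1e-6                 # cm^3 to m^3
side_m       = haystack_m3 ** (1 / 3)
light_year_m = 9.46e15
print(f"cube side ~ {side_m:.1e} m ~ {side_m / light_year_m:,.0f} light years")
```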

With this in mind, we may now look at the Dembski Chi metric, and reduce it to a simpler, more practically applicable form:

m: In 2005, Dembski provided a fairly complex formula that we can quote and simplify:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)], where χ is “chi” and ϕ is “phi”

n: To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that lets pieces of information add up naturally: Ip = – log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from.)
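By way of illustration (a minimal sketch; the coin and ASCII-character probabilities are simply convenient examples):

```python
from math import log2

# Information as a negative log probability: improbable outcomes carry more
# bits, and independent outcomes add.
p_coin = 0.5        # one fair coin toss
p_char = 1 / 128    # one 7-bit ASCII character, all 128 symbols equally likely

print(-log2(p_coin))            # 1.0 bit
print(-log2(p_char))            # 7.0 bits
print(-log2(p_coin * p_char))   # 8.0 bits: probabilities multiply, information adds
```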

o: So, since 10^120 ~ 2^398, we may do some algebra, using log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, where D2 stands for ϕ_S(T) and p for P(T|H)

Chi = Ip – (398 + K2), where log2(D2) = K2

p: But since 398 + K2 tends to at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! (if you want, 1,000 bits would be the corresponding limit for the observable cosmos)], and

q: as we can define a dummy variable for specificity, S, where S = 1 or 0 according to whether the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi_500 =  Ip*S – 500, in bits beyond a “complex enough” threshold

(If S = 0, Chi_500 = – 500; and if Ip is less than 500 bits, Chi_500 will be negative even if S = 1. E.g.: a string of 501 coins tossed at random will have S = 0; but if the coins are arranged to spell out a message in English using the ASCII code [notice the independent specification of a narrow zone of possible configurations, T], Chi_500 will — unsurprisingly — be positive.)
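The reduced metric is simple enough to sketch in a few lines of Python (an illustrative sketch only; the function name and the 7-bits-per-ASCII-character figure are my own assumptions):

```python
def chi_500(ip_bits, s):
    """Chi_500 = Ip*S - 500: functionally specific bits beyond the 500-bit
    solar-system threshold (a positive value supports a design inference)."""
    return ip_bits * s - 500

# 501 coins tossed at random: highly contingent but not independently
# specified, so S = 0 and the metric bottoms out at -500.
print(chi_500(501, 0))    # -500

# The same 501 coins arranged to spell an English message in 7-bit ASCII
# (roughly 71 characters): specified, S = 1, and just past the threshold.
print(chi_500(501, 1))    # 1

# A specified but short string, e.g. ~70 bits: S = 1 yet still negative.
print(chi_500(70, 1))     # -430
```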

r: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was intelligently designed. (For instance, no-one would dream of asserting seriously that the English text of this post is a matter of chance occurrence giving rise to a lucky configuration, a point that was well-understood by that Bible-thumping redneck fundy — NOT! — Cicero in 50 BC.)

s: The metric may be directly applied to biological cases:

t: Using Durston’s fits values — functionally specific bits — from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity (giving S = 1), we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond
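That tabulation can be reproduced directly (a minimal sketch; the protein names, amino-acid counts, and fits values are those quoted from Durston’s Table 1 above):

```python
# Durston fits (functional bits) for three proteins, as quoted from Table 1 above.
durston_fits = {
    "RecA":      {"aa": 242, "fits": 832},
    "SecY":      {"aa": 342, "fits": 688},
    "Corona S2": {"aa": 445, "fits": 1285},
}

THRESHOLD = 500  # solar-system search-resource threshold, in bits
S = 1            # sequences are functionally specific, so the dummy variable is 1

for name, d in durston_fits.items():
    chi = d["fits"] * S - THRESHOLD
    print(f'{name}: {d["aa"]} AA, {d["fits"]} fits, Chi_500 = {chi} bits beyond')
# RecA: 242 AA, 832 fits, Chi_500 = 332 bits beyond
# SecY: 342 AA, 688 fits, Chi_500 = 188 bits beyond
# Corona S2: 445 AA, 1285 fits, Chi_500 = 785 bits beyond
```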

u: And this raises the controversial inference that biological examples such as DNA — which in a living cell is much more complex than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.

v: Therefore, we have at least one possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [FSCO/I].

But, but, but . . . isn’t “natural selection” precisely NOT a chance based process, so doesn’t the ability to reproduce in environments and adapt to new niches then dominate the population make nonsense of such a calculation?

NO.

Why is that?

Because of the actual claimed source of variation (which is often masked by the emphasis on “selection”) and the scope of innovations required to originate functionally effective body plans, as opposed to merely varying existing ones — starting with the very first one, i.e. the origin of life, OOL.

But that’s Hoyle’s fallacy!

Advice: when going up against a Nobel-equivalent prize-holder whose field requires expertise in mathematics and thermodynamics, one would be well advised to examine carefully the underpinnings of what is being said, not just the rhetorical flourish about tornadoes in junkyards in Seattle assembling 747 Jumbo Jets.

More specifically, the key concept of Darwinian evolution [we need not detain ourselves too much on debates over mutations as the way variations manifest themselves] is that:

CHANCE VARIATION (CV) + NATURAL “SELECTION” (NS) –> DESCENT WITH (UNLIMITED) MODIFICATION (DWM), i.e. “EVOLUTION.”

CV + NS –> DWM, aka Evolution

If we look at NS, this boils down to differential reproductive success in environments leading to elimination of the relatively unfit.

That is, NS is a culling-out process, a subtract-er of information, not the claimed source of information.

That leaves only CV, i.e. blind chance, manifested in various ways. (And of course, in anticipation of some of the usual side-tracks, we must note that the Darwinian view, as modified through the genetic mutations concept and population genetics to describe how population fractions shift, is the dominant view in the field.)

There are of course some empirical cases in point, but in all these cases, what is observed is fairly minor variations within a given body plan, not the relevant issue: the spontaneous emergence of such a complex, functionally specific and tightly integrated body plan, which must be viable from the zygote on up.

To cover that gap, we have a well-known metaphorical image — an analogy, the Darwinian Tree of Life. This boils down to implying that there is a vast contiguous continent of functionally possible variations of life forms, so that we may see a smooth incremental development across that vast fitness landscape, once we had an original life form capable of self-replication.

What is the evidence for that?

Actually, nil.

The fossil record, the only direct empirical evidence of the remote past, is notoriously that of sudden appearances of novel forms, stasis (with some variability within the form obviously), and disappearance and/or continuation into the modern world.

If by contrast the tree of life framework were the observed reality, we would see a fossil record DOMINATED by transitional forms, not the few strained examples that are so often triumphalistically presented in textbooks and museums.

Similarly, it is notorious that fairly minor variations in the embryological development process are easily fatal. No surprise, if we have a highly complex, deeply interwoven interactive system, chance disturbances are overwhelmingly going to be disruptive.

Likewise, complex, functionally specific hardware is not designed and developed by small, chance based functional increments to an existing simple form.

Hoyle’s challenge of overwhelming improbability does not begin with the assembly of a Jumbo jet by chance, it begins with the assembly of say an indicating instrument on its cockpit instrument panel.

The D’Arsonval galvanometer movement is commonly used in indicating instruments; it is an adaptation of a motor that runs against a spiral spring (to give a deflection proportional to the input current across the magnetic field) and has an attached needle moving across a scale. Such an instrument, historically, was often adapted for measuring all sorts of quantities on a panel.

(Indeed, it would be utterly unlikely for a large box of mixed nuts and bolts, to by chance shaking, bring together matching nut and bolt and screw them together tightly; the first step to assembling the instrument by chance.)

Further to this, it would be bad enough to try to get together the text strings for a Hello World program (let’s leave off the implementing machinery and software that make it work) by chance. To then incrementally create an operating system from it, each small step along the way being functional, would be a bizarre, operationally impossible super-task.

So, the real challenge is that those who have put forth the tree of life, continent-of-function type approach have got to show empirically that their step by step path up the slopes of Mt Improbable is observable, at least in reasonable model cases. And they need to show, in effect, that chance variations on a Hello World will lead, within reasonable plausibility, to a stepwise development that transforms the Hello World into something fundamentally different.

In short, we have excellent reason to infer that — absent empirical demonstration otherwise — functionally specific, complex, integrated organisation arises in clusters that are atypical of the general run of the vastly larger set of physically possible configurations of components. And the strongest pointer that this is plainly so for life forms as well is the detailed, complex, step by step, information-controlled nature of the processes in the cell that use information stored in DNA to make proteins. Let’s call Wiki as a hostile witness again, courtesy two key diagrams:

I: Overview:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA

II: Focusing on the Ribosome in action for protein synthesis:

The Ribosome, assembling a protein step by step based on the instructions in the mRNA “control tape” (the AA chain is then folded and put to work)

Clay animation video [added Dec 4]:

[youtube OEJ0GWAoSYY]

More detailed animation [added Dec 4]:

[vimeo 31830891]

This sort of elaborate, tightly controlled, instruction based step by step process is itself a strong sign that this sort of outcome is unlikely by chance variations.

(And attempts to deny the obvious — that we are looking at digital information at work in algorithmic, step by step processes — are themselves a sign that there is a controlling a priori at work that must lock out the very evidence before our eyes in order to succeed. The above is not intended to persuade such objectors; they are plainly not open to evidence, so we can only note how their position reduces to patent absurdity in the face of evidence and move on.)

But, isn’t the insertion of a dummy variable S into the Chi_500 metric little more than question-begging?

Again, NO.

Let us consider a simple form of the per-aspect explanatory filter approach:

The per aspect design inference explanatory filter

 

You will observe two key decision nodes, where the first default is that the aspect of the object, phenomenon or process being studied is rooted in a natural, lawlike regularity that under similar conditions will produce similar outcomes, i.e. there is a reliable law of nature at work, leading to low contingency of outcomes. A dropped heavy object near earth’s surface will reliably fall at an initial acceleration of g, 9.8 m/s^2. That lawlike behaviour with low contingency can be empirically investigated and would eliminate design as a reasonable explanation.

Second, we see some situations where there is a high degree of contingency of possible outcomes under similar initial circumstances. This is the more interesting case, and in our experience it has two candidate mechanisms: chance, or choice. The default for S under these circumstances is 0. That is, the presumption is that chance is an adequate explanation, unless there is a good — empirical and/or analytical — reason to think otherwise. In short, on investigation of the dynamics of volcanoes and our experience with them, rooted in direct observations, the complexity of a Mt Pinatubo is explained partly on natural laws and partly on chance variations; there is no need to infer to choice to explain its structure.

But if the observed configuration of highly contingent elements comes from a narrow and atypical zone T not credibly reachable on the search resources available, then we would be objectively warranted to infer to choice. For instance, a chance-based text string of length equal to this post would overwhelmingly be gibberish, so we are entitled to note the functional specificity at work in the post, and assign S = 1 here.

So, the dummy variable S is not a matter of question-begging, never mind the usual dismissive talking points.

I is of course an information measure based on standard approaches, through the sort of probabilistic calculations Hartley and Shannon used, or by a direct observation of the state-structure of a system [e.g. on/off switches naturally encode one bit each].

And, where an entity is not a direct information storing object, we may reduce it to a mesh of nodes and arcs, then investigate how much variation can be allowed and still retain adequate function, i.e. a key and lock can be reduced to a bit measure of implied information, and a sculpture like at Mt Rushmore can similarly be analysed, given the specificity of portraiture.

The 500 is a threshold, related to the limits of the search resources of our solar system, and if we want more, we can easily move up to the 1,000 bit threshold for our observed cosmos.

On needle in a haystack grounds, or monkeys strumming at the keyboards grounds, if we are dealing with functionally specific, complex information beyond these thresholds, the best explanation for seeing such is design.

And, that is abundantly verified by the contents of say the Library of Congress (26 million works) or the Internet, or the product across time of the Computer programming industry.

But, what about Genetic Algorithms etc, don’t they prove that such FSCI can come about by cumulative progress based on trial and error rewarded by success?

Not really.

As a rule, such algorithms carry out generalised hill-climbing within islands of function characterised by intelligently designed fitness functions with well-behaved trends, and by controlled variation within equally intelligently designed search algorithms. They start within a target zone T, by design, and proceed to adapt incrementally based on built-in, designed algorithms.

If such a GA were to emerge from a Hello World by incremental chance variations that worked as programs in their own right every step of the way, that would be a different story, but for excellent reason we can safely include GAs in the set of cases where FSCI comes about by choice, not chance.

So, we can see what the Chi_500 expression means, and how it is a reasonable and empirically supported tool for measuring complex specified information, especially where the specification is functionally based.

And, we can see the basis for what it is doing, and why one is justified to use it, despite many commonly encountered objections. END

________

F/N, Jan 22: In response to a renewed controversy tangential to another blog thread, I have redirected discussion here. As a point of reference for background information, I append a clip from the thread:

. . . [If you wish to find] basic background on info theory and similar background from serious sources, then go to the linked thread . . . And BTW, Shannon’s original 1948 paper is still a good early stop-off on this. I just did a web search and see it is surprisingly hard to get a good simple free online 101 on info theory for the non mathematically sophisticated; to my astonishment the section A of my always linked note clipped from above is by comparison a fairly useful first intro. I like this intro at the next level here, this is similar, this is nice and short while introducing notation, this is a short book in effect, this is a longer one, and I suggest the Marks lecture on evo informatics here as a useful contextualisation. Qualitative outline here. I note as well Perry Marshall’s related exchange here, to save going over long since adequately answered talking points, such as asserting that DNA in the context of genes is not coded information expressed in a string of 4-state per position G/C/A/T monomers. The one good thing is, I found the Jaynes 1957 paper online, now added to my vault, no cloud without a silver lining.

If you are genuinely puzzled on practical heuristics, I suggest a look at the geoglyphs example already linked. This genetic discussion may help on the basic ideas, but of course the issues Durston et al raised in 2007 are not delved on.

(I must note that an industry-full of complex praxis is going to be hard to reduce to a nutshell summary. However, we are quite familiar with information at work, and with how we routinely measure it, as in the familiar: “this Word file is 235 k bytes.” That such a file is exceedingly functionally specific can be seen by the experiment of opening one up in an inspection package that will access the raw text symbols for the file. A lot of it will look like repetitive nonsense, but if you clip off such, sometimes just one header character, the file will be corrupted and will not open as a Word file. When we have a great many parts that must be right and in the right pattern for something to work in a given context like this, we are dealing with functionally specific, complex organisation and associated information, FSCO/I for short.)

The point of the main post above is that once we have this, and are past 500 bits or 1000 bits, it is not credible that such can arise by blind chance and mechanical necessity. But of course, intelligence routinely produces such, like comments in this thread. Objectors can answer all of this quite simply, by producing a case where such chance and necessity — without intelligent action by the back door — produces such FSCO/I. If they could do this, the heart would be cut out of design theory. But, year after year, thread after thread, here and elsewhere, this simple challenge is not being met. Borel, as discussed above, points out the basic reason why.

Comments
gpuccio:
Well, my emotional reaction about that are quite different. Those disclaimers were for me one of the meanest things I have ever witnessed.
It seems to me that the meanness or otherwise of the Lehigh disclaimer could be better judged by placing it here:
Department Position on Evolution and "Intelligent Design" The faculty in the Department of Biological Sciences is committed to the highest standards of scientific integrity and academic function. This commitment carries with it unwavering support for academic freedom and the free exchange of ideas. It also demands the utmost respect for the scientific method, integrity in the conduct of research, and recognition that the validity of any scientific model comes only as a result of rational hypothesis testing, sound experimentation, and findings that can be replicated by others. The department faculty, then, are unequivocal in their support of evolutionary theory, which has its roots in the seminal work of Charles Darwin and has been supported by findings accumulated over 140 years. The sole dissenter from this position, Prof. Michael Behe, is a well-known proponent of "intelligent design." While we respect Prof. Behe's right to express his views, they are his alone and are in no way endorsed by the department. It is our collective position that intelligent design has no basis in science, has not been tested experimentally, and should not be regarded as scientific.
PaulT
January 25, 2012 at 06:50 PM PDT
Not everything is compatible with evolution, a functional sequence space that cannot be connected incrementally is not compatible with evolution. You guys are on the right track. I just happen to think you are wrong in the characterization of the space.Petrushka
January 25, 2012 at 04:36 PM PDT
Yet again, begging the question by comparing the hypothetical to the observed, and in spectacular fashion. And creating a false choice - accept or reject both chemistry and evolution. Why use examples that make the exact opposite of your point? It's not the examples that anyone objects to. They never have even the vaguest relation to the origin of anything. It's the bizarre extrapolation, imagining that the "evolution" of 4500 bases to 218 bases, losing the function of coding for proteins in the process, can tell us where the 4500 bases and the function they had came from. Or where anything came from, or why, or how. It never ceases to amaze me how absolutely anything and everything is a confirmation of evolution, even evolving something into a fraction of itself and losing its function. No wonder there's a mountain of evidence. I don't think it's possible to swim against this current.ScottAndrews2
January 25, 2012 at 04:14 PM PDT
I see nothing common sensical about favoring a science fiction fantasy over observable phenomena. I see nothing sensible about postulating a designer who magically plucks several hundred bit cipher keys out of thin air. If you want a Nobel prize, demonstrate how the hypothetical designer overcomes the big numbers. Produce a theory of design that does not require any subset of evolution. Alternatively, demonstrate that Thornton is wrong.Petrushka
January 25, 2012 at 04:13 PM PDT
Evolution works by transitioning through successive functional intermediates, not by exhaustively sampling the search space. The important question is how well-connected the functional space is, not the ratio of target zone to search space.
I'd certainly like to see someone respond to The cipher key analogy. If functional sequences are truly isolated they are mathematically equivalent to cipher keys of equivalent length. I know of no theory that provides intelligence of any finite power to break cipher keys of lengths equivalent to coding sequences. How does the designer do it? In the Lenski experiment, evolution did it with brute force, trying every combination. But of course functions were connectable. My question would be, what evidence is there that function is not connected?Petrushka
January 25, 2012 at 03:44 PM PDT
KF, As I have already explained, "X has high dFSCI" does not mean "X could not have evolved". All that "X has high dFSCI" means is that "the predefined function of X could not be found in a reasonable time by a completely random blind search." Evolution doesn't look for single predefined functions, and it doesn't proceed by blind search. Thus dFSCI tells us nothing about whether X could have evolved. Before the introduction of dFSCI, the question was "Could X have evolved, or is it designed?" After the introduction of dFSCI, the question is "Could X have evolved, or is it designed?" dFSCI has contributed nothing to the discussion. It is an irrelevant metric.champignon
January 25, 2012 at 03:27 PM PDT
That's what Prigogine would have said. He called Darwin the greatest chemist in the world in his Nobel lecture, if I remember rightly. However, IMO that is an overstatement. As others have pointed out since Prigogine's time, when one bets against evolution, they bet for statistics and common sense. Prigogine's theory fails to explain the emergence of control. It may seem that Prigogine's or any other self-organisation theory does away with the mystery of life. But that is only the first impression.Eugene S
January 25, 2012 at 03:26 PM PDT
It is very much similar to accusing coloured people in the sixties of being paranoic for believing they were not treated fairly.
what exactly would you teach that is currently forbidden? Please be specific. Give us a three or four sentences statement of things that are currently not allowed, but which need to be said.Petrushka
January 25, 2012 at 03:16 PM PDT
Petrushka, I'll bet even more money that if you re-read my post, you'll find I never bet against learning anything. But what do you mean by "progress?" You suspect, as many do, that all these things come from darwinian evolution. If they did, then learning more about how they did is progress. If they did not, then it is impossible to "progress" toward learning how they did. Whenever you use the word "progress" in that sense you reveal that to you, the future of scientific discovery is a foregone conclusion. Somehow you magically already know where it's going to lead. That's the only way that you could call a step in any direction "progress."ScottAndrews2
January 25, 2012 at 03:11 PM PDT
Ch: You are leaving out some crucial steps, and so setting up a strawman that you then knock over. This, in spite of great pains taken to be clear. GP has nothing to retract, and is quite correct:
1 --> Function can be objectively identified, in relevant cases.
2 --> So can complexity.
3 --> So can configurational specificity of function.
4 --> So can digital codes.
5 --> So, then, can dFSCI, which is in fact commonly observed, e.g. posts in this blog.
6 --> It is observed that some functions can be reached by chance, e.g. the random text generation cases up to 24 ASCII characters.
7 --> However, as just indicated, these tend to be simple in the relevant sense, well within the sort of limits that have been identified for our solar system or the observed cosmos. Practically and simply, 500 - 1,000 bits.
8 --> Digitally coded, functionally specific, complex information is actually commonly observed, e.g. posts in this blog, the Internet, libraries etc. In many of these cases we separately know the cause of origin.
9 --> In all these known cases, the cause of dFSCI is intelligence. There are no credible counter-examples. (The GA case is not a counter-example, for reasons pointed out above and elsewhere, over and over again.)
10 --> On fairly standard analysis, we can see why cases of such dFSCI will come from narrow zones T in much wider spaces of the possible combinations of elements. So much so that, on needle-in-haystack grounds, to get to zones T by chance-based random samples of the domain W will be maximally improbable. (Hill-climbing algorithms and processes that operate within islands of function, as just outlined, will be irrelevant, e.g. GAs.)
11 --> In short, dFSCI is a strong INDUCTIVE sign of design, and can be taken as a reliable sign of design, subject to the usual provisionality of inductive inferences in science and generally.
12 --> To overturn this, all that is needed is to provide a solid counter-example. Just as, to overturn the laws of thermodynamics, all that is needed is a solid counter-example; and just as, right now, it looks possible -- not yet final -- that Einstein's c-as-universal-speed-limit postulate just might have met its counter-example.
13 --> So, your objection above is a strawman error. GEM of TKI
kairosfocus
January 25, 2012 at 02:56 PM PDT
Regarding science education in schools - I don't know what everyone else's experience was. There was some discussion of the scientific method, but nothing nearly on the level of what you described. Not even in the ballpark. 90% of science was memorizing stuff related to science. As for my specific, strongly-worded charges against science educators in this country: Appeal to authority - it's stated over and over that most scientists are certain that this is where biological diversity came from. It's not wrong to state that if it's true, but that's what they lead with. It tells students up front that they don't need to critically analyze any of the weak evidence to follow, because really smart people already did that for them. Exaggeration and misinformation - Jonathan Wells has this covered quite well in his examinations and findings from textbook contents. Here are some specifics. What if his take is completely slanted? They're still teaching Miller-Urey, Archaeopteryx, Haeckel's embryos, and the peppered moths. One can't help but wonder why they can't find something better to fill those pages with. What about an explicit directive to withhold evidence and avoid critical examination? It sounds too twisted to be true. But What does the NCSE say on their very own web site? Some text from proposed Oklahoma legislation, as quoted directly in the NCSE's own press release:
The bill also provides that teachers "may use supplemental textbooks and instructional materials to help students understand, analyze, critique, and review scientific theories in an objective manner." This bill does not propose that schools teach creationism or intelligent design, rather, it is the intent to foster an environment of critical thinking in schools including a scientific critique of the theory of evolution.
What's astounding is not that the NCSE calls this "anti-evolution." It's that they don't even see the need to say why they do. They simply imply that the wrongness of analyzing and critiquing a scientific theory is self-evident. If critique of a scientific theory is evidently anti-science, then in what context would any contradictory evidence be raised? In what context may a science teacher state that no one knows how a single protein domain might have evolved? What you have reasonably acknowledged as accurate they would brand unmentionable. Could that information affect a student's perception of the theory? It might. It should. But students are to be intentionally and carefully kept ignorant of such knowledge unless they seek it outside of the classroom. That is an explicit directive to withhold evidence and avoid critical examination. Here is there published list of supporters, beginning with the AAAS, publishers of Science. That's right, the publishers of Science fund efforts to oppose permitting critique of consensus science in the classroom. I realize the apparent contradiction as I accuse educators of withholding information while citing legislation that proposes sharing it. But look at where the opposition comes from. I'm not aware that even educators themselves are specifically opposed as a group to meaningful teaching of science. But the opposition to it is real, and is supported by associations of mainstream scientists. The NCSE does not have the power to dictate the standards of science education, but it is funded to speak on behalf of mainstream science and does so with the consent of the community. That they don't win every battle does not minimize how screwed-up it is that they are fighting it. The concept of consensus loses validity when the community seeks to ensure that students are not taught the value of questioning that consensus, and even enters legal battles to ensure that they don't. By your own standards, which I certainly agree with, such people should be sent to British schools for remedial education rather than influencing the science education of others. As a disclaimer of my own, I'm fully on board with not teaching ID in public schools. I don't think it's ready. I do think that idea of design would fare better in the minds of many students if they were bombarded with less propaganda to the effect that darwinian evolution is beyond questioning and encouraged to do more than skim over what looks like confirming evidence.ScottAndrews2
January 25, 2012 at 02:32 PM PDT
gpuccio,
You seem to imply that my reasoning is: dFSCI exists, therefore biological information is designed. It’s not that way.
You wrote this earlier in the thread:
dFSCI is an empirical concept. The reasoning goes this way, in order: a) We look at things that we know are designed (directly, because we have evidence of the design process). And we look for some formal property that can help us infer design when we have not that direct evidence. b) We define dFSCI as such a property. c) We verify that on all sets of objects of which we know if they were designed or not (human artifacts, or certainly non-designed objects) the evaluation of dFSCI gives no false positive, although it gives a lot of false negatives. d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information).
And as recently as October, you were saying things like the following:
IOWs, a purely random system cannot generate dFSCI. A purely random system + NS cannot do that. If you know other necessity mechanisms that can be coupled to a purely random system and behave better, please declare what they are. Design can generate dFSCI (very easy to demonstrate). Therefore, the design inference remains the best explanation for what is observed (dFSCI in biological beings) and cannot be explained in any other way. [Emphases mine]
Which is it? For the record: Do you claim that dFSCI is a reliable indicator of design, or do you retract that claim?champignon
January 25, 2012 at 02:03 PM PDT
Evolution works as a property of chemistry. Spiegleman's monster, having only a few dozen base pairs, evolved. When you bet against evolution you are betting against chemistry.Petrushka
January 25, 2012 at 01:32 PM PDT
If your money is bet that we will never reconstruct all the genomes that ever existed, you are safe. Or find a fossil representing every species that ever existed. If you are betting against progress on these front, you will lose.Petrushka
January 25, 2012 at 01:08 PM PDT
No, I don't think you think I lie to children! I just want you to know how passionately I think you shouldn't :) As for your portrayal of how biology is taught in America: I can't comment (I'm a Brit). But I can't believe it is as bad as you imply. Can you give me an example (for when I get back) of the "appeal to authority, exaggeration, misinformation, and above all, an explicit directive to withhold evidence and avoid critical examination"? Because that is the exact reverse of any proper brief for a science curriculum! Which should be: Appeal to empirical evidence and logical argument Consideration of limitations Accurate measurement Consideration of all the data, and of problem of cherry-picking Critical examination of all conclusions and the generation of alternative explanations for the data. I could almost have typed that off any science education program :)Elizabeth Liddle
January 25, 2012 at 12:58 PM PDT
Google is working on cars that drive themselves. The potential for safety and more efficient traffic flow is huge. Plus we wouldn't need parking lots because cars would drop us off and pick us up. We could use them as a service rather than have our own. But I don't see what you're getting at. This is all stuff we design. None of it is possible without setting a target. Even if evolution could innovate all this stuff, what would you get without setting a target? The best blender ever that can chop anything, rolls over obstacles and climbs vertical surfaces to reach the vegetables, and writes poetry?ScottAndrews2
January 25, 2012 at 12:55 PM PDT
You can also get some really cool stuff if you take some English text and use Google Translator to run it through several languages and then back to English.
That, or should have been the issue. If 'tis nobler in the mind of the bear Radio and slings and outrageous fortune Or to take arms against a sea challenge And by opposing end them? Kufa, to sleep
V: You Beat: Resistance is futile. Let you have not been corrupted as Obi-wan. V: Is there any escape? I do not destroy them: Luke, we recognize its importance. Are beginning to reveal its power. Participation in, and I complete their training. Our collective strength to our Galaxy, we can end this destructive conflict. L: I can not join you. V: If you only knew the dark side of you. WAN group is that what happened to my Father. L: He said to me. He killed you. V: No, I am your father. L: No, no, it does not correspond to reality: it is not possible. V: Search your feelings, you know, that's true. L: No!
ScottAndrews2
January 25, 2012 at 12:42 PM PDT
Ever read James Joyce? I'm not going to argue that programs can do everything humans can do. I will predict that computers will tend to take over more and more tasks that were once the domain of brains. At first they won't do them as well, but eventually they will do them better. Already there are airplanes that can't be flown by humans. I suspect in 50 years humans won't be allowed to drive on highways. It's a losing bet that evolutionary algorithms will not improve and begin managing more and more of our civilization. Of course they may have managed the financial bubble, so they can make the same kind of mistakes that humans make.
Petrushka
January 25, 2012 at 12:41 PM PDT
But I think you are quite wrong in asserting that simple algorithms cannot generate original utterances.
Such as this:
I ate my leotard, that old leotard that was feverishly replenished by hoards of screaming commissioners. Is that thought understandable to you? Can you rise to its occasions? I wonder. Yet a leotard, a commissioner, a single hoard, all are understandable in their own fashion. In that concept lies the appalling truth.
Is that what you had in mind?ScottAndrews2
January 25, 2012 at 12:07 PM PDT
if incremental change is possible, that evolution will work.
I had no idea that this was a sticking point. I'm less than nobody. But for what it's worth, if separate protein domains or genomes are traversable by individual selectable variations then of course evolution would work. I don't think that anyone is saying that it's impossible just for the sake of saying it's impossible. But on the surface it appears implausible. More in-depth, rigorous examinations only reveal in more detail what an evolutionary process has to accomplish, and that hasn't worked in its favor at all. That leads to a reasonable, rational assumption that darwinian evolution is not the answer and hits the ball back into that court where it will stay forever or until some truly astonishing evidence is revealed. My money is on forever.ScottAndrews2
January 25, 2012 at 11:46 AM PDT
Elizabeth, I hope it goes without saying - I don't think you lie to children or anyone else. The evidence I'm aware of supporting and contradicting the extent of darwinian evolution is one thing. This makes up an entirely separate line of evidence. I find it incredible to believe that something supported by an abundance of evidence and not substantially contradicted can only be taught to children and young adults by appeal to authority, exaggeration, misinformation, and above all, an explicit directive to withhold evidence and avoid critical examination. It's damning. It's the kiss of death. It's commendable that you reject it, but most condone it. If they did not, Eugenie Scott wouldn't be able to attain an educational post higher than gym teacher. Those who condone it are complicit, and even if they told me the earth was round I'd want a credible second opinion. For the scientific community to condone such educational goals destroys their credibility with regard to this subject matter. You might question how they have condoned it. Okay, to regard it with ambivalence destroys their credibility. To silently disapprove at a minimum damages their credibility. I express myself in such strong words because I perceive a spirit of complacence. Truth does not hide from the light and try to break all the light bulbs. On top of the insufficiency of confirming evidence and the weight of disconfirming evidence, too many proponents of darwinian evolution behave conspicuously as though they have something to hide and openly wield ignorance as their weapon of choice. That's one heck of dark cloud hanging over it.ScottAndrews2
January 25, 2012 at 11:35 AM PDT
...an example of text that certainly has more than 1000 bits of functional complexity, according to my demonstration, and therefore allows a safe design inference. How do you believe that text was written? By evolution?”
I certainly know that brains embody evolutionary algorithms. It's quite clear when studying animal behavior and learning, which is the subject of my formal training. It's not as clear in the case of language. There was, about 40 or 50 years ago, a rather famous debate between B.F. Skinner and Noam Chomsky on this topic. Chomsky took the anti-evolution stance and convinced the linguistics community. He expounded a version of irreducible complexity that sounds exactly like Behe's. My own position is that they were both right and both wrong. This happens in the infancy of any science. Strong positions are taken by people before a phenomenon is understood. It seems necessary for the formation of testable hypotheses. But I think you are quite wrong in asserting that simple algorithms cannot generate original utterances. I don't think computers can pass an extended Turing Test, but they can certainly churn out syntactically correct sentences and paragraphs. They are quite capable, for example, of exploring a novel environment and generating a description that would pass any test of grammatical correctness. Not Shakespeare, but then neither am I.
Petrushka
January 25, 2012 at 11:29 AM PDT
So you do realize this. Does it not follow that GA searching one space is no indication of whether one could search another?
Of course I realize this. I am completely committed to the concept that evolution is impossible if incremental change is impossible. I think it would be interesting if ID advocates would admit that if incremental change is possible, that evolution will work. There are, of course, many kinds of known genomic change. Koonin and Shapiro have listed many. I've recently seen the number 47 kinds mentioned. You also have to realize that change and selection are not limited to refining function. There are many kinds of sideways change, and many kinds of utility that are not obvious from a narrow utilitarian view. It is not obvious, for example, why dragging around pounds of tail feathers is useful. It seems to attract mates, but it the reason the mates select for tail feathers is obscure. It must also be noted that a simple loss of one function may be useful. Hence the famous loss of function mutations that enhance survival of bacteria exposed to antibiotics. These are some of the reasons that design is a troublesome concept. There is no single dimension of change that is obvious except in retrospect, and even then it may not be obvious. But back on topic, I think there is a reason why Thornton et al are looking for incremental pathways. It's an obvious entailment of evolution that such pathways exist. It's also true that pathways may be obscure.Petrushka
January 25, 2012 at 11:15 AM PDT
"Evolutionary algorithms are the only process known to be able to navigate huge search spaces. The ID movement attributes magical capabilities to “intelligence” — the ability to see function in sequences that are completely disconnected. That is simply magic, and I challenge you to demonstrate a non-magical way this can be done."
Actually huge search spaces can be navigated, and usually more efficiently, by other algorithms and heuristics. I would hazard to predict that any ordered search space will be shown to be navigable by algorithmic methods at higher efficiency than a genetic algorithm. Besides, a successful GA will generally take advantage of existing algorithms and heuristic methods in order to provide trial and error optimizations that are otherwise impractical to test experimentally. This is all the wonderful application of intelligent solutions onto reasonably defined problems in order to arrive at an optimal solution. We do it all the time, but blind forces do not. Algoritms, genetic or otherwise, are both a product and tool of artifice. The "magical" capabilities of intelligence are nothing more than that which we observe by direct experience. Indeed, they seem quite magical, considering that neither a chainsaw nor an iPhone would ever come about without intelligence. It is the observation of highly contingent and specific, functionally purposeful configurations that register positive for design when we see them. What we observe in the chainsaw and in the iPhone are attributes for which non-intelligent causes are instinctually ruled out, because they are inadequate. It is those attributes which defy material explanations, absent the question-begging invocation of material processes as a cause for intelligence. Of course, we should welcome with open arms attempts to identify non-intelligent mechanisms for producing chainsaws, iPhones, satellites, and airplanes. However barring that, we note that beings in possession of certain attributes, namely the innate ability to model abstract concepts and their relationships with one another, and then express those concepts concretely, by whittling away excess matter, and adding some here and there. This ability begs an explanation, and so far material processes have been utterly impotent to do so. Reason demands that we consider intelligence a unique force in shaping matter for purpose, with foresight -- so that when we observe functional purpose in concert with very low probability, we can infer design. If material processes are ever vindicated as a causally sufficient mechanism for producing sophisticated engineering, we'll have no need of a design inference.material.infantacy
January 25, 2012 at 11:09 AM PDT
That's not a plug btw, although I'd be delighted if you bought it. It just happens that I wrote it out of my utter commitment not to lie to my child.Elizabeth Liddle
January 25, 2012 at 10:13 AM PDT
Obviously I do not think that children should be taught bogus facts. It is because I refuse to lie to children that I wrote this: http://www.amazon.com/Pip-Edge-Heaven-Elizabeth-Liddle/dp/0802852572Elizabeth Liddle
January 25, 2012 at 10:05 AM PDT
It is right that children are taught the consensus view, and that it is the consensus view, but that all such views are provisional.
I agree in a sense. The underlying problem is that the consensus view is unwarranted, not that it is being taught. But it seems clear cut that many educators esteem maintaining that consensus view even if it means going out of their way to withhold relevant knowledge. There are two separate issues. One is that such behavior is despicable and unethical. It's one thing to withhold knowledge, another to do so under the pretense of education. Anyone who believes that students should be presented only the evidence that will enable them reach one specific conclusion, and that thinking about contrary evidence is harmful has no place in a classroom, let alone writing policy. (This is what makes me cross.) It's unfortunate that when my son goes to school, I must forewarn him that although teachers usually aim to educate, at times they will deliberately mislead. Without a doubt, telling a child that 'most scientists' believe something and withholding available contradictory evidence is deliberately misleading. The silver lining is that it prepares him for the real world where he must learn to carefully weigh propaganda regardless of where it comes from. But that is no excuse. The second issue is, should not anyone and everyone renew their skepticism of a consensus view that is protected with lies and half-truths told to children? It seems so obvious that it shouldn't need saying, but one does not depend upon deception to teach what is true or provisionally accepted. If someone tells us a story and backs it up with bogus facts, the normal reaction is to disbelieve the story and question their motivation, not to comb through it for accurate details and make excuses for the rest. Why should this be any different?ScottAndrews2
January 25, 2012 at 09:46 AM PDT
Elizabeth:
It is IDists who have concluded that because Darwinian theory fails, ID is the default conclusion.
That is false and a blatant misrepresentation of the facts. So here it is AGAIN: Newton's First Rule AND the explanatory filter mandate that before a design inference be considered, the more simple explanations must be eliminated, i.e. necessity and chance. So once we do that we can consider a design inference. Ya see, there is also the design criteria that has to be met.
Darwinians do NOT make the symmetrical claim that because ID fails, Darwinian theory is the default conclusion.
Hellooo- Newton's First Rule, the explanatory filter- helloooo- Darwinians don't make that claim because they are never in any position to make that claim. We go through you, you don't go through us-> Newton's First Rule and the EF. Get a grip already.Joe
January 25, 2012 at 09:46 AM PDT
Thanks, gpuccio :) I would agree with them re the theory of gravity actually :) The theory of gravity isn't really a theory anyway, just a law, i.e. a very good mathematical predictive model. It doesn't explain anything, it just is. Darwinian evolution really is an explanatory theory, as are all the bits and pieces that form part of the current (and continuously evolving) version. But obviously, I don't agree that it is a "fact". See you in a few days :)Elizabeth Liddle
January 25, 2012 at 09:24 AM PDT
Elizabeth: I am not so concerned about what students are taught. They should certainly be taught good philosophy of science and methodology, but I don't believe that happens. Regarding the issue of iD. I would say they should be taught the consensus view (neo darwinism), and that minority radically different views, including ID, exist.gpuccio
January 25, 2012 at 09:23 AM PDT