Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 11: Borel’s Infinite Monkeys analysis and the significance of the log reduced Chi metric, Chi_500 = I*S – 500



Emile Borel, 1932

Emile Borel (1871 – 1956) was a distinguished French mathematician who — the son of a Protestant minister — came from France’s Protestant minority, and was a founder of measure theory. He was also a significant contributor to modern probability theory, so Knobloch observed of his approach that:

>>Borel published more than fifty papers between 1905 and 1950 on the calculus of probability. They were mainly motivated or influenced by Poincaré, Bertrand, Reichenbach, and Keynes. However, he took for the most part an opposed view because of his realistic attitude toward mathematics. He stressed the important and practical value of probability theory. He emphasized the applications to the different sociological, biological, physical, and mathematical sciences. He preferred to elucidate these applications instead of looking for an axiomatization of probability theory. Its essential peculiarities were for him unpredictability, indeterminism, and discontinuity. Nevertheless, he was interested in a clarification of the probability concept. [Emile Borel as a probabilist, in The probabilist revolution Vol 1 (Cambridge Mass., 1987), 215-233. Cited, Mac Tutor History of Mathematics Archive, Borel Biography.]>>

Among other things, he is credited with introducing a serious mathematical analysis of the so-called Infinite Monkeys theorem (more on this in a moment).

So, it is unsurprising that Abel, in his recent universal plausibility metric paper, observed  that:

Emile Borel’s limit of cosmic probabilistic resources [c. 1913?] was only 10^50 [[23] (pg. 28-30)]. Borel based this probability bound in part on the product of the number of observable stars (10^9) times the number of possible human observations that could be made on those stars (10^20).

This figure has, of course, expanded since the breakthroughs in astronomy occasioned by the Mt Wilson 100-inch telescope under Hubble in the 1920s. However, it does underscore how centrally important available probabilistic resources are to rendering a given — logically and physically strictly possible but utterly improbable — potential chance-based event reasonably observable.

We may therefore now introduce Wikipedia as a hostile witness, testifying against known ideological interest, in its article on the Infinite Monkeys theorem:

In one of the forms in which probabilists now know this theorem, with its “dactylographic” [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel‘s 1913 article “Mécanique Statistique et Irréversibilité” (Statistical mechanics and irreversibility),[3] and in his book “Le Hasard” in 1914. His “monkeys” are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel’s image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[4]

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Let us emphasise that last part, as it is so easy to overlook in the heat of the ongoing debates over origins and the significance of the idea that we can infer to design on noticing certain empirical signs:

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Why is that?

Because of the nature of sampling from a large space of possible configurations. That is, we face a needle-in-the-haystack challenge.

For, there are only so many resources available in a realistic situation, and only so many observations can therefore be actualised in the time available. As a result, if one is confined to a blind probabilistic, random search process, s/he will soon enough run into the issue that:

a: IF there is a narrow and atypical set of possible outcomes, T, that

b: may be described by some definite specification Z (that does not boil down to listing the set T or the like), and

c: which comprises a set of possibilities E1, E2, . . . En, drawn from

d: a much larger set of possible outcomes, W, THEN:

e: IF, further, we do see some Ei from T, THEN also

f: Ei is not plausibly a chance occurrence.

The reason for this is not hard to spot: when a sufficiently small, chance-based, blind sample is taken from a set of possibilities, W — a configuration space — the likeliest outcome is that what is typical of the bulk of the possibilities will be chosen, not what is atypical. And this is the foundation-stone of the statistical form of the second law of thermodynamics.
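
To put a rough number on that intuition, here is a minimal Python sketch (the 10^50 zone size and the 10^87 sample budget are illustrative assumptions; the 2^500 configuration space matches the threshold developed below):

```python
import math

# Illustrative numbers only: a 500-bit configuration space W, a generously
# large "special zone" T of 10^50 configurations, and a budget of 10^87
# blind draws (the solar system "chemical time" state count used below).
W = 2.0 ** 500        # ~3.27e150 possible configurations
T = 1e50              # assumed size of the narrow, atypical zone
samples = 1e87        # assumed number of blind samples available

p_per_draw = T / W                      # chance one blind draw lands in T
expected_hits = p_per_draw * samples    # expected hits over the whole search

print(f"P(one draw lands in T) ~ 10^{math.log10(p_per_draw):.0f}")
print(f"Expected hits in 10^87 draws ~ 10^{math.log10(expected_hits):.0f}")
# Roughly 1 chance in 10^101 per draw and ~10^-14 expected hits overall:
# the blind sample overwhelmingly reflects the bulk of W, not the zone T.
```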

Hence, Borel’s remark as summarised by Wikipedia:

Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

In recent months, here at UD, we have described this in terms of searching for a needle in a vast haystack [corrective u/d follows]:

let us work back from how it takes ~ 10^30 Planck time states for the fastest chemical reactions, and use this as a yardstick, i.e. in 10^17 s, our solar system’s 10^57 atoms would undergo ~ 10^87 “chemical time” states, about as fast as anything involving atoms could happen. That is 1 in 10^63 of 10^150. So, let’s do an illustrative haystack calculation:

 Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up as many cubic metres. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, or about 1/1,000 of that distance across. If we were to superpose such a notional 1,000 light years on the side haystack on the zone of space centred on the sun, and leave in all stars, planets, comets, rocks, etc, and take a random sample equal in size to one straw, by absolutely overwhelming odds, we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
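
The arithmetic behind that illustration is easy to re-check (a quick sketch; it uses the same one-gram straw and water-like density simplifications as the clip above, and previews the 500- and 1,000-bit thresholds used below):

```python
# Quick check of the haystack figures above (straw ~ 1 g, density ~ water).
grams = 1e63                    # haystack mass in grams, i.e. 10^57 tonnes
volume_m3 = grams / 1e6         # 1 tonne of water ~ 1 m^3, so ~1e57 m^3
side_m = volume_m3 ** (1 / 3)   # side of a cubical stack, ~1e19 m

light_year_m = 9.46e15
print(f"side ~ {side_m:.2e} m ~ {side_m / light_year_m:,.0f} light years")
# ~1,057 light years, i.e. the roughly 1,000 light-year cube described above.

# The 500- and 1,000-bit thresholds used below, versus the ~10^150
# Planck-time states often quoted for the observable cosmos:
print(f"2^500 ~ {2.0 ** 500:.2e}; 2^1000 ~ 10^{1000 * 0.30103:.0f}")
# 2^500 ~ 3.27e150; 2^1000 ~ 10^301
```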

With this in mind, we may now look at the Dembski Chi metric, and reduce it to a simpler, more practically applicable form:

m: In 2005, Dembski provided a fairly complex formula, which we can quote and simplify:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)], where χ is “chi” and ϕ is “phi”.

n:  To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from.)

o: So, since 10^120 ~ 2^398, we may do some algebra, using log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p):

Chi = – log2(2^398 * D2 * p), in bits, where D2 = ϕ_S(T) and p = P(T|H)

Chi = Ip – (398 + K2), where Ip = – log2(p) and K2 = log2(D2)

p: But since 398 + K2 tends to at most 500 bits on the gamut of our solar system [our practical universe, for chemical interactions! (if you want, 1,000 bits would be a limit for the observable cosmos)] and

q: as we can define a dummy variable for specificity, S, where S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi_500 =  Ip*S – 500, in bits beyond a “complex enough” threshold

(If S = 0, Chi = – 500; and if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [notice the independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive. A short worked sketch follows after point v below.)

r: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was intelligently designed. (For instance, no-one would dream of asserting seriously that the English text of this post is a matter of chance occurrence giving rise to a lucky configuration, a point that was well-understood by that Bible-thumping redneck fundy — NOT! — Cicero in 50 BC.)

s: The metric may be directly applied to biological cases:

t: Using Durston’s fits values — functionally specific bits — from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity (so S = 1), we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

u: And this raises the controversial suggestion that biological examples such as DNA — which in a living cell carries far more than 500 bits of functionally specific information — may be designed to carry out particular functions in the cell and the wider organism.

v: Therefore, we have at least one possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [FSCO/I].
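
Before moving on, here is the short worked sketch promised above, applying the reduced metric to the 501-coin example of point q and the Durston values of point t (the helper function and its name are illustrative, not a standard library):

```python
def chi_500(ip_bits, s, threshold=500):
    """Reduced Dembski metric: Chi_500 = Ip*S - threshold, in bits beyond."""
    return ip_bits * s - threshold

# 501 coins tossed at random: 501 bits of raw information, but S = 0
print(chi_500(501, 0))   # -500 -> no design inference

# 501 coins deliberately arranged to spell an ASCII message: S = 1
print(chi_500(501, 1))   # 1 bit beyond the 500-bit threshold

# Durston fits values from point t, taking observed function as S = 1:
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(name, chi_500(fits, 1), "bits beyond")
# RecA 332, SecY 188, Corona S2 785 -- matching the figures quoted above
```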

But, but, but . . . isn’t “natural selection” precisely NOT a chance based process, so doesn’t the ability to reproduce in environments and adapt to new niches then dominate the population make nonsense of such a calculation?

NO.

Why is that?

Because of the actual claimed source of variation (which is often masked by the emphasis on “selection”) and the scope of innovations required to originate functionally effective body plans, as opposed to varying same — starting with the very first one, i.e. Origin of Life, OOL.

But that’s Hoyle’s fallacy!

Advice: when going up against a Nobel-equivalent prize-holder whose field requires expertise in mathematics and thermodynamics, one would be well advised to examine carefully the underpinnings of what is being said, not just the rhetorical flourish about tornadoes in junkyards in Seattle assembling 747 Jumbo Jets.

More specifically, the key concept of Darwinian evolution [we need not detain ourselves too much on debates over mutations as the way variations manifest themselves], is that:

CHANCE VARIATION (CV) + NATURAL “SELECTION” (NS) –> DESCENT WITH (UNLIMITED) MODIFICATION (DWM), i.e. “EVOLUTION.”

CV + NS –> DWM, aka Evolution

If we look at NS, this boils down to differential reproductive success in environments leading to elimination of the relatively unfit.

That is, NS is a culling-out process, a subtract-er of information, not the claimed source of information.

That leaves only CV, i.e. blind chance, manifested in various ways. (And of course, in anticipation of some of the usual side-tracks, we must note that the Darwinian view, as modified through the genetic mutations concept and population genetics to describe how population fractions shift, is the dominant view in the field.)

There are of course some empirical cases in point, but in all these cases what is observed is fairly minor variation within a given body plan, not the relevant issue: the spontaneous emergence of such a complex, functionally specific and tightly integrated body plan, which must be viable from the zygote on up.

To cover that gap, we have a well-known metaphorical image — an analogy, the Darwinian Tree of Life. This boils down to implying that there is a vast contiguous continent of functionally possible variations of life forms, so that we may see a smooth incremental development across that vast fitness landscape, once we had an original life form capable of self-replication.

What is the evidence for that?

Actually, nil.

The fossil record, the only direct empirical evidence of the remote past, is notoriously that of sudden appearances of novel forms, stasis (with some variability within the form obviously), and disappearance and/or continuation into the modern world.

If by contrast the tree of life framework were the observed reality, we would see a fossil record DOMINATED by transitional forms, not the few strained examples that are so often triumphalistically presented in textbooks and museums.

Similarly, it is notorious that fairly minor variations in the embryological development process are easily fatal. No surprise, if we have a highly complex, deeply interwoven interactive system, chance disturbances are overwhelmingly going to be disruptive.

Likewise, complex, functionally specific hardware is not designed and developed by small, chance based functional increments to an existing simple form.

Hoyle’s challenge of overwhelming improbability does not begin with the assembly of a Jumbo jet by chance, it begins with the assembly of say an indicating instrument on its cockpit instrument panel.

The D’Arsonval galvanometer movement commonly used in indicating instruments: an adaptation of a motor that runs against a spiral spring (to give proportionality of deflection to input current across the magnetic field), with an attached needle moving across a scale. Such an instrument, historically, was often adapted for measuring all sorts of quantities on a panel.

(Indeed, it would be utterly unlikely for a large box of mixed nuts and bolts, shaken by chance, to bring together a matching nut and bolt and screw them together tightly — the first step to assembling the instrument by chance.)

Further to this, it would be bad enough to try to get together the text strings for a Hello World program (let’s leave off the implementing machinery and software that make it work) by chance. To then incrementally create an operating system from it, each small step along the way being functional, would be a bizarrely, operationally impossible super-task.

So, the real challenge is that those who have put forth the tree of life, continent-of-function type approach have got to show empirically that their step by step path up the slopes of Mt Improbable is observable, at least in reasonable model cases. And they need to show that, in effect, chance variations on a Hello World will lead, within reasonable plausibility, to a stepwise development that transforms the Hello World into something fundamentally different.

In short, we have excellent reason to infer that — absent empirical demonstration otherwise — functionally specific, complex, integrated organisation arises in clusters that are atypical of the general run of the vastly larger set of physically possible configurations of components. And the strongest pointer that this is plainly so for life forms as well is the detailed, complex, step by step, information-controlled nature of the processes in the cell that use information stored in DNA to make proteins. Let’s call Wiki as a hostile witness again, courtesy two key diagrams:

I: Overview:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA

II: Focusing on the Ribosome in action for protein synthesis:

The Ribosome, assembling a protein step by step based on the instructions in the mRNA “control tape” (the AA chain is then folded and put to work)

Clay animation video [added Dec 4]:

[youtube OEJ0GWAoSYY]

More detailed animation [added Dec 4]:

[vimeo 31830891]

This sort of elaborate, tightly controlled, instruction-based, step by step process is itself a strong sign that such an outcome is unlikely to arise by chance variation.

(And, attempts to deny the obvious — that we are looking at digital information at work in algorithmic, step by step processes — are themselves a sign that there is a controlling a priori at work that must lock out the very evidence before our eyes in order to succeed. The above is not intended to persuade such objectors; they are plainly not open to evidence, so we can only note how their position reduces to patent absurdity in the face of evidence and move on.)

But, isn’t the insertion of a dummy variable S into the Chi_500 metric little more than question-begging?

Again, NO.

Let us consider a simple form of the per-aspect explanatory filter approach:

The per aspect design inference explanatory filter

 

You will observe two key decision nodes, where the first default is that the aspect of the object, phenomenon or process being studied is rooted in a natural, lawlike regularity that under similar conditions will produce similar outcomes, i.e. there is a reliable law of nature at work, leading to low contingency of outcomes. A dropped, heavy object near earth’s surface will reliably fall with an initial acceleration g, 9.8 m/s^2. That lawlike behaviour with low contingency can be empirically investigated and would eliminate design as a reasonable explanation.

Second, we see some situations where there is a high degree of contingency of possible outcomes under similar initial circumstances. This is the more interesting case, and in our experience it has two candidate mechanisms: chance, or choice. The default for S under these circumstances is 0. That is, the presumption is that chance is an adequate explanation, unless there is a good — empirical and/or analytical — reason to think otherwise. In short, on investigation of the dynamics of volcanoes and our experience with them, rooted in direct observations, the complexity of a Mt Pinatubo is explained partly on natural laws and partly on chance variations; there is no need to infer to choice to explain its structure.

But, if the observed configurations of highly contingent elements were from a narrow and atypical zone T not credibly reachable based on the search resources available, then we would be objectively warranted to infer to choice. For instance, a chance-based text string of length equal to this post would overwhelmingly be gibberish, so we are entitled to note the functional specificity at work in the post, and assign S = 1 here.
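
In rough code terms, the per-aspect filter just described looks something like this (a simplified sketch of the decision logic only; the function and parameter names are illustrative, not a standard formalism):

```python
def explanatory_filter(low_contingency, ip_bits, independently_specified,
                       threshold=500):
    """Per-aspect filter sketch: law first, chance as default, then design."""
    if low_contingency:
        # Lawlike regularity: similar initial conditions, similar outcomes.
        return "law (mechanical necessity)"
    s = 1 if independently_specified else 0   # the dummy specificity variable S
    if ip_bits * s - threshold > 0:           # i.e. Chi_500 = Ip*S - 500 > 0
        return "design (choice)"
    return "chance (the default for high contingency)"

print(explanatory_filter(True, 0, False))     # dropped heavy object: law
print(explanatory_filter(False, 501, False))  # 501 random coins: chance
print(explanatory_filter(False, 832, True))   # RecA-style case above: design
```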

So, the dummy variable S is not a matter of question-begging, never mind the usual dismissive talking points.

I is of course an information measure based on standard approaches, through the sort of probabilistic calculations Hartley and Shannon used, or by a direct observation of the state-structure of a system [e.g. on/off switches naturally encode one bit each].

And, where an entity is not a direct information-storing object, we may reduce it to a mesh of nodes and arcs, then investigate how much variation can be allowed while still retaining adequate function; i.e. a key and lock can be reduced to a bit measure of implied information, and a sculpture like that at Mt Rushmore can similarly be analysed, given the specificity of portraiture.
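
For instance, a rough implied-information estimate for a key and lock can be made from the number of pin positions and cut depths (the six-pin, ten-depth figures below are illustrative assumptions, not measurements of any real lock):

```python
import math

pins = 6          # assumed number of pin positions cut into the key
depths = 10       # assumed number of distinct cut depths per position
configs = depths ** pins            # 10^6 possible keyings
implied_bits = math.log2(configs)   # ~19.9 bits of implied information

print(f"{configs:,} possible keyings ~ {implied_bits:.1f} bits")
# A Mt Rushmore portrait would be handled the same way: count the node/arc
# degrees of freedom, then ask how much variation still preserves the likeness.
```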

The 500 is a threshold, related to the limits of the search resources of our solar system, and if we want more, we can easily move up to the 1,000 bit threshold for our observed cosmos.

On needle-in-a-haystack grounds, or monkeys-strumming-at-keyboards grounds, if we are dealing with functionally specific, complex information beyond these thresholds, the best explanation for seeing such is design.

And, that is abundantly verified by the contents of say the Library of Congress (26 million works) or the Internet, or the product across time of the Computer programming industry.

But, what about Genetic Algorithms etc, don’t they prove that such FSCI can come about by cumulative progress based on trial and error rewarded by success?

Not really.

As a rule, such are about generalised hill-climbing within islands of function characterised by intelligently designed fitness functions with well-behaved trends, and controlled variation within equally intelligently designed search algorithms. They start within a target zone T, by design, and proceed to adapt incrementally based on built-in, designed algorithms.

If such a GA were to emerge from a Hello World by incremental chance variations that worked as programs in their own right every step of the way, that would be a different story, but for excellent reason we can safely include GAs in the set of cases where FSCI comes about by choice, not chance.
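
A toy hill-climber makes the point concrete (a deliberately minimal sketch: the smooth fitness “hill”, the starting point inside the island of function, and the controlled variation are all supplied up front by the programmer):

```python
import random

random.seed(1)

def designed_fitness(x):
    """A smooth, single-peaked fitness 'hill' chosen by the programmer."""
    return -(x - 3.0) ** 2          # peak (maximum fitness) at x = 3

x = 1.0                             # start already inside the island of function
for _ in range(200):
    candidate = x + random.gauss(0.0, 0.1)       # small, controlled variation
    if designed_fitness(candidate) >= designed_fitness(x):
        x = candidate                            # climb the designed hill

print(round(x, 2))   # ends up near 3.0 -- but the peak, the start point and the
                     # variation scheme were all supplied by design, up front.
```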

So, we can see what the Chi_500 expression means, and how it is a reasonable and empirically supported tool for measuring complex specified information, especially where the specification is functionally based.

And, we can see the basis for what it is doing, and why one is justified to use it, despite many commonly encountered objections. END

________

F/N, Jan 22: In response to a renewed controversy tangential to another blog thread, I have redirected discussion here. As a point of reference for background information, I append a clip from the thread:

. . . [If you wish to find] basic background on info theory and similar background from serious sources, then go to the linked thread . . . And BTW, Shannon’s original 1948 paper is still a good early stop-off on this. I just did a web search and see it is surprisingly hard to get a good simple free online 101 on info theory for the non mathematically sophisticated; to my astonishment the section A of my always linked note clipped from above is by comparison a fairly useful first intro. I like this intro at the next level here, this is similar, this is nice and short while introducing notation, this is a short book in effect, this is a longer one, and I suggest the Marks lecture on evo informatics here as a useful contextualisation. Qualitative outline here. I note as well Perry Marshall’s related exchange here, to save going over long since adequately answered talking points, such as asserting that DNA in the context of genes is not coded information expressed in a string of 4-state per position G/C/A/T monomers. The one good thing is, I found the Jaynes 1957 paper online, now added to my vault, no cloud without a silver lining.

If you are genuinely puzzled on practical heuristics, I suggest a look at the geoglyphs example already linked. This genetic discussion may help on the basic ideas, but of course the issues Durston et al raised in 2007 are not delved on.

(I must note that an industry-full of complex praxis is going to be hard to reduce to a nutshell summary. However, we are quite familiar with information at work, and how we routinely measure it, as in the familiar: “this Word file is 235 k bytes.” That such a file is exceedingly functionally specific can be seen by the experiment of opening one up in an inspection package that will access the raw text symbols of the file. A lot of it will look like repetitive nonsense, but if you clip off such, sometimes just one header character, the file will be corrupted and will not open as a Word file. When we have a great many parts that must be right and in the right pattern for something to work in a given context like this, we are dealing with functionally specific, complex organisation and associated information, FSCO/I for short.)

The point of the main post above is that once we have this, and are past 500 bits or 1000 bits, it is not credible that such can arise by blind chance and mechanical necessity. But of course, intelligence routinely produces such, like comments in this thread. Objectors can answer all of this quite simply, by producing a case where such chance and necessity — without intelligent action by the back door — produces such FSCO/I. If they could do this, the heart would be cut out of design theory. But, year after year, thread after thread, here and elsewhere, this simple challenge is not being met. Borel, as discussed above, points out the basic reason why.

Comments
Can I take it then that you have no way of telling whether any mutation (once it has happened) was the result of a stochastic process or a directed process? I'm quite willing to discuss NS once we've come to some agreement on this. Bydand
And I can't help it that you don't like the answers I provided. Joe
Well, Joe, I see you're determined not to answer a straight question. I was hoping for more. Perhaps someone else could tell us how to determine what sort of process produced a given mutation? Bydand
How does NS "direct" seeing that whatever works good enough survives to reproduce? As I remember evos are unable to present any positive evidence for their "theory". As for how can we tell- again it all comes down to origins. Other than that we will have to wait until someone unravels the internal programming. Joe
Ah, yes - Spetner and the NREH, where he tells us that mutations are a response to "signals" from the environment. Darwinists say that mutations are stochastic, and NS "directs" the result using information from the environment. As I remember it, Spetner was unable to present any positive evidence for his hypothesis in his book. Be that as it may, I'm telling you I do not know, post facto, how to determine whether or not any mutation was the result of a stochastic process, or of a directed one. So your representation of "my position" is incorrect. You, OTOH, seem very certain that both types of process are operating. I ask again - how do you know, and can you tell the result of one from the result of the other? For instance, all those mutations reported in Lenski's long-term experiments with E coli - were they the result of stochastics, or of directed processes? If only one cell out of a whole bunch of similar cells all enjoying the same environment exhibits a mutation, would this not count against such a mutation being the result of a signal from the environment? Don't get me wrong, Joe - I'm really interested in the science behind this Bydand
Well Bydand- your position sez they are all random yet no one can tell me how that was determined. But anyway "Not By Chance" by Dr Lee Spetner- it came out in 1997- he goes over this random mutation canard. Joe
Thanks, Joe! And can we tell which genetic variation is directed and which random? If so, how? Bydand
I will begin with the end: what you say in post 51.2.2.1 is correct, and I don’t understand how you may still have doubts that I think exactly that, because it was the starting point of all my discussion. The only thing I would clarify is that there is no necessity that the variants be lethal. Neutral variants too cannot bridge those ravines. For two reasons. If they are neutral functional variants of the starting sequence, they simply cannot reach the new island. If they are neutral because they happen in some inactivated gene, then they can certainly reach any possible state, in theory, but empirically it is simply too unlikely that they may reach a functional state.
OK, but it is that last point that is precisely at issue. It is not "simply" too unlikely at all. That's exactly the claim that needs to be backed up, as does the claim that there are no selectable intermediates. In other words, once you have a duplicate, there is no ravine (no lethal loss of function incurred by deactivating variation), and you have no way of knowing whether the ravine is level, or includes upward steps. Or, if you have, you have not presented them.
Confusion again about the methodology of probabilistic inference. A t test, or any equivalent method of statistical analysis, is only a mathematical tool we apply to a definite model. Usually, the model is a two-group comparison for a difference in means, according to the scheme of Fisher’s hypothesis testing.
Well, I don't know what your point was in that case. An independent samples t-test tests the null hypothesis that two samples were drawn from the same population, on the assumption that that population has a particular probability distribution. Establishing the probability distributions in your data is key to figuring out the probabilities of certain observations under the null. You can't leave that part out, and both variant generation and variant selection are stochastic processes with probability distributions that need to be ascertained if you are going to make any conclusions about the probability of your observed data under your null.
As you well know, the only role of our t test, or other statistical analysis, is to reject (or not) the null hypothesis, which gives some methodological support to our alternative hypothesis. And we reject our null if we are convinced that our data are not well explained by a random model (because they are too unlikely under that assumption).
Right until your last sentence. That's the part that I dispute - indeed, I cannot parse it. I don't know what "well explained by a random model" means, and that is precisely my objection - your null is not well characterised. Sure, we reject the null if our data are unlikely under our null, but to do that we have to know what exactly our null is. "A random model" does not tell us that. The null you are interested in is, in fact, the null hypothesis that evolutionary processes are responsible for the observed data. So to model that null you have to model evolutionary processes. And to do that, for any given biological feature, you have to have either far more data than you actually have, or you have to estimate the probability distributions of those data (for instance, the probability distribution of certain environments favoring certain sequences at certain intermediate times between a posited duplication event and a posited observed different protein). Those are the probability functions you don't present, and can't even accommodate in Fisherian testing - you'd need some kind of Bayesian analysis.
Yes, it was “vague” :) And what you are trying to evade is a quantitative explanation of how protein domains could have emerged.
I'm not trying to evade it at all! I don't know how protein domains emerged, although there is, I understand, some literature on the subject. As I've said, I can think of very few (if any - maybe one or two possible cases) of naturally observed biological features for which we have a quantitative, or even qualitative explanation, and may never have - there are far too many known unknowns as well as unknown unknowns. What we have instead is a theory that explains patterns in the data (those nested hierarchies), as well as mechanisms that can be shown, in the lab and in the field, to produce the predicted effects, both of adaptation, and speciation. Sure there are huge puzzles - the mechanisms of variance production, the evolution of sexual reproduction, the origin of the first Darwinian-capable self-replicators, the origins of some of the most conserved sequences (hox genes, protein domains). But we won't solve them by saying: this is improbable, therefore it didn't evolve. We attempt to solve them by saying: if this evolved, how did it do so? In other words by treating evolutionary theory not as the null but as H1. Or, more commonly, by comparing alternative explanatory theories. ID doesn't work as H1 unless you characterise the evolutionary null, and neither Dembski nor you attempt to do this. This is because ID is not, in fact, an explanation at all. It is a default.
But I am modeling the theory. I have modeled the random system, and I have also shown how a selectable intermediate can be added to the random model, still having a quantitative evaluation. If more intermediates are known, they can be considered in the same way.
Well, I'd like to see your model, but from what you have told us, it doesn't seem to be a model of evolutionary theory! Is it implemented in code somewhere? What is it written in?
Indeed, even that single intermediate I have assumed does not exist. So, if you want to affirm that basic protein domains evolved by gradual variation bearing differential reproduction, be my guest. Show one example of such a transition. Show what the intermediates were, or could have been. Explain why they were selectable. And then, with real intermediates on the table, we will be able to apply my method to compute if the system is credible.
But why should those putative intermediates still exist? And how can we show that they were "selectable" without knowing the environment in which the population who bore them lived? As I keep saying, you can't model selection without modeling the environment, which includes not only the external environment, but the population itself (and its current allele frequencies) and the genetic environment. It's a dynamic, non-linear system, and trying to model it to explain the origins of protein domains is a bit like trying to explain the ice age by modeling the weather in Thule on some Friday several thousand years ago. In other words, it can't be done. But that doesn't justify the conclusion that "it didn't happen, therefore ID". That's why, if you want to research ID, it needs to be researched as a positive hypothesis, not simply as the default conclusion in the absence of an evolutionary one. Which was why I was interested in the front-loading thread, although I don't think Genomicus' hypothesis works. Anyway, nice to talk to you, even if we never agree on this :) Gotta run. Lizzie Elizabeth Liddle
I gather that you believe that genetic variation is not a stochastic process – that it is directed in some way.
At least some, if not most, but not all- random stuff still happens.
And you also believe that Sanford’s claim of looming genetic meltdown is credible, and backed by good experimental data.
Nope- his claim only pertains to stochastic processes. If living organisms are designed and evolved by design then his claim is moot. Joe
So, Joe... I gather that you believe that genetic variation is not a stochastic process - that it is directed in some way. And you also believe that Sanford's claim of looming genetic meltdown is credible, and backed by good experimental data. I'm a bit befuddled, then, as to just what or who, in your view, is directing this genetic entropy, and why. Is it an inimical intelligent designer? Or are only beneficial mutations "directed"? Bydand
Elizabeth: While I really feel I owe champignon really nothing, and therefore will not answer him any more, your case is completely different. You are sincere and intelligent. So I feel I owe you some final clarifications, and I will give them. But if still you believe that what I say has no sense, I would really leave it to that. I have no problem with the simple fact that you think differently. And I thank you for giving me the chance to express, and detail, my ideas. I will begin with the end: what you say in post 51.2.2.1 is correct, and I don't understand how you may still have doubts that I think exactly that, because it was the starting point of all my discussion. The only thing I would clarify is that there is no necessity that the variants be lethal. Neutral variants too cannot bridge those ravines. For two reasons. If they are neutral functional variants of the starting sequence, they simply cannot reach the new island. If they are neutral because they happen in some inactivated gene, then they can certainly reach any possible state, in theory, but emoirically it is simplly too unlikely that they may reach a functional state. Indeed, I have modeled the emergence of protein domains, specifying that no inmtermediate is known for them, and that therefore their emergence should be modeled at present as a mere effect of RV. However, I have hypothesized how the global probability of a new functional domain could be affected (at most) by the demonstration of a single, fully selectable intermediate with optimal properties. That intermediate indeed is not known, and I do believe that it does not exist, but it was very important for me to show that the existence of selectable intermediates can, if demonstrated, be including in the modeling of global probabilities of a transition, if we reason correctly on the cause effect relationship we have empirically found (or assumed). It seems a very simple, and correct, reasoning to me, but if you don't agree, no problem. Certainly, the reasons why you have said you don't agree, up to now, make no sense for me. Well, I can only disagree, and say that from where I am standing it really does seem as thought it is you who are confused. Or at any rate it is not clear what you are saying. Certainly we can draw conclusions about cause and effects from probability distributes, and we do so every time we conduct a t test. Not only that, but when we model certain causes and effects, we can model them as probability distributions. I gave a clear example: a variant sequence that leads to light colouring on a moth affects the probability distribution of deaths by predation. Confusion again about the methodology of probabilistic inference. A t test, or any equivalent method of statistical analysis, is only a mathemathical tool we apply to a definite model. Usually, the model is a two groups comparison for a difference in means, according to the scheme of Fisher's hypothesis testing. That means, as you well know, that we have a null hypothesis, and we have an alternative hypothesis. The null hypothesis is that what we observe is well explained by random factors (usually random variation due to sampling). But the alterbative hypothesis is a necessity hypothesis: we hypothesize that some specific and explicit cause is acting, with a definite logical explanatory model. As you well know, the only role of our t test, or other statistical analysis, is to reject (or not) the null hypothesis, which gives some methodological support to our alternative hypothesis. 
And we reject our null if we are convinced that our data are not well explained by a random model (because they are too unlikely under that assumption). This is very different from what you say. And it is exactly what I say. Causal relations and probabilistic modeling are two different things, that in the end contribute both to the final explanatory model we propose. And as for evading into “vague” definitions (if that’s what you mean, I’m not sure, given the typos!) well, I’m certainly not trying to evade anything. Yes, it was "vague" :) And what you are trying to evade is a quantitative explanation of how protein domains could have emerged. You may not believe that model is a good fit to the data, but modelling something different, and then showing that your model doesn’t work isn’t going to falsify the theory of evolution because you aren’t modeling the theory of evolution! But I am modeling the theory. I have modeled the random system, and I have also shown how a selectable intermediate can be added to the random modle, still having a quantitative evaluation. If more intermediates are known, they can be considered in the same way. Indeed, even that single intermediate I have assumed does not exist. So, if you want to affirm that basic protein domains evolved by gradual variation bearing differential reproduction, be my guest. Show one example of such a transition. Show what the intermediates were, or could have been. Explain why they were selectable. And then, with real intermediates on the table, we will be able to apply my method to compute if the system is credible. I gave a clear example: a variant sequence that leads to light colouring on a moth affects the probability distribution of deaths by predation. Well, if you want to mopdify my model by saying tyhat the presence of a selectable intermediate will increase the probability of reproduction of (how much? you say), and not of 100% (that was my maximal assumption), that only means that you can do the same computations, with my same method, and the difference in probability between the pure random system and the mioxed system will be less than what I have found. I have simply assumed the most favorable situation for the darwinian algorithm. gpuccio
gpuccio:
I don’t know if you are really interested in a serious discussion (champignon evidently is not).
gpuccio, I engaged your argument directly, using quotes from what you wrote, and showed that according to your own statements, dFSCI cannot tell us whether something could have evolved. If you won't stand behind what you wrote, why should the rest of us take you seriously? And if you disagree with yourself and wish to retract your earlier statements, please be honest and admit it. Show us exactly where you believe your mistakes are and how you wish to correct them. My earlier comment:
gpuccio, By your own admission, dFSCI is useless for ruling out the evolution of a biological feature and inferring design. Earlier in the thread you stressed that dFSCI applies only to purely random processes:
As repeatedly said, I use dFSCI only to model the probabilitites of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately.
But evolution is not a purely random process, as you yourself noted:
b) dFSCI, or CSI, shows me that it could not have come out as the result of pure RV. c) So, some people have proposed an explanation based on a mixed algorithm: RV + NS.
And since no one in the world claims that the eye, the ribosome, the flagellum, the blood clotting cascade, or the brain came about by “pure RV”, dFSCI tells us nothing about whether these evolved or were designed. It answers a question that no one is stupid enough to ask. ["Could these have arisen through pure chance?"] Yet elsewhere you claim that dFSCI is actually an indicator of design:
Indeed, I have shown two kinds of function for dFSCI: being an empirical marker of design, and helping to evaluate the structure function relationship of specific proteins.
That statement is wildly inconsistent with the other two. I feel exactly like eigenstate:
That’s frankly outrageous — dFSCI hardly even rises to the level of ‘prank’ if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out…
You have an obligation to make it clear in future discussions that dFSCI is utterly irrelevant to the “designed or evolved” question. In fact, since dFSCI is useless, why bring it up at all? The only function it seems to serve is as pseudo-scientific window dressing.
And despite your continual references to "post 34 and following" in this thread, you make the same mistakes there as you do in this thread: 1. You assume a predefined target, which evolution never has. 2. You assume blind search, which is not how evolution operates. Your only "concession" is to model two consecutive blind searches instead of one, as if that were enough to turn your caricature into an accurate representation of evolution. 3. Even granting your assumptions, you get the math wrong, as Elizabeth pointed out to you. 4. You make other wild, unsupported assumptions and you pick the numbers used in your example so that - surprise! - the conclusion is design. I hope you'll take up eigenstate's challenge:
I claim you have not, cannot, and will not provide a mathematical model for “RV” in your dFSCI argument that captures the key, essential dynamic of the “RV” in “RV + NS” — incremental variations across iterations with feedback integrations accumulating across those same iterations. This is what makes “Methinks it is like a weasel” impossible per your contrived probabilities and producible in just dozens of iterations with a cumulative feedback loop.
Given your past behavior, I'm not holding my breath. champignon
According to the AVIDA data (small genomes, asexually reproducing population) virtual organisms that can perform complex logic operations evolve from starter organisms that do no more than reproduce (no logic functions).
Given totally unrealistic parameters, mutation rates and everything else. The Sanford paper demonstrates what happens when reality hits. Joe
All human kids get 1/2 from dad and 1/2 from mom. I am pretty sure that is par for the course wrt sexual reproduction.
Yes, but that doesn't mean you are "throwing out" half of each genome. It means that kids get half of each parent genotype (which have a lot in common to start with). And if there's more than one kid, more than half the parental genetic material will get passed into the next generation. And in any case, that material will be found in other members of the population. Only rarely will sequences be completely lost.
But anyway starting with an asexually reproducing population (with a small genome) stochastic processes cannot construct anything more complex-> that is according to the data.
No. According to the AVIDA data (small genomes, asexually reproducing population) virtual organisms that can perform complex logic operations evolve from starter organisms that do no more than reproduce (no logic functions). So yes, those stochastic processes do exactly what you say they can not - enable a population to evolve from "can do no logic functions" to "can do complex logic functions". Elizabeth Liddle
And when you add sexual reproduction you have to throw out 1/2 of each genome- do you do that?
No, you don’t throw out half of each genome.
All human kids get 1/2 from dad and 1/2 from mom. I am pretty sure that is par for the course wrt sexual reproduction. But anyway starting with an asexually reproducing population (with a small genome) stochastic processes cannot construct anything more complex-> that is according to the data. Joe
But the first organisms would have been asexually reproducing with small genomes.
yes.
And when you add sexual reproduction you have to throw out 1/2 of each genome- do you do that?
No, you don't throw out half of each genome. Nor do you even throw out half of each genotype, unless each parental couple only produce one offspring. Obviously that's not how you set it up, otherwise your population would go extinct. And even if some couples do only produce one offspring, you still have samples of their un-passed on genetic sections all over the population. That's why evolution works much faster in sexually reproducing populations (I make mine hermaphroditic though, as it saves time). Elizabeth Liddle
But the first organisms would have been asexually reproducing with small genomes. And when you add sexual reproduction you have to throw out 1/2 of each genome- do you do that? Joe
@gpuccio#50,
I don’t know if you are really interested in a serious discussion (champignon evidently is not). If you are, I invite you too to read my posts here: https://uncommondescent.com.....selection/ (post 34 and following) and comment on them, instead of just saying things that have no meaning.
I've read through that section, more than once now, thank you. That, combined with the key insights gained from comments made by Dr. Liddle and petrushka were the catalyst for "getting it", in terms of your views on dFSCI. I'm terribly disappointed in what I came to realize was the substance of your argument/metric, but I don't think it's a matter of not devoting time to understand it. The disappointment comes from understanding. I was much more interested and hopeful you were on to something when I was confused by what you were saying.
That is both wrong and unfair. I have addressed evolutionary processes in great detail. Read and comment, if you like. And yes, English is not my primary language at all, but I don't believe there is any mystery or misunderstanding in the way I use RV. It means random variation, exactly as you thought.
No, it can't mean "random variation" as I thought, because random variation as I thought spreads those variations ACROSS GENERATIONS. That means the variation spans iterations, and because that variation spans iterations, it can (and does) incorporate accumulative feedback from the environment. As soon as you regard "random variation" as variation across generations in populations that incorporate feedback loops, your probabilistic math goes right out the window. Totally useless. This is precisely what Dawkins was reacting to with the Weasel example. From his book:
I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence
Given the setup of the thought experiment, the "one shot" odds of a random typing on the 26-char keyboard are greater that 10^39 against "Methinks it is like a weasel". But in just a few dozen iterations, based on the VARIATION ACROSS GENERATIONS WITH A CUMULATIVE FEEDBACK LOOP incorporated into it, the target string is produced. It's unimaginable that you are not familiare with Dawkins' Weasel argument for the importance of cumulative feedback loops, and yet, your dFSCI NOWHERE accounts for the effects of feedback loops interstitial with random variations. Every single appeal you've made in dFSCI (and I've read a LOT from you on this now) is "single shuffle" random sampling. If I'm wrong, I stand to be corrected, and invite you to point me to just ONE PLACE where you've applied your probability calculations in a way that incorporates the accumulative feedback across many iterations as those variations take place. If you are unable to do so, then I suggest I've been more than fair, I've been a chump hoodwinked by obtuse arguments here. Shame on me for being a chump, if so, but in that case, you've no basis for complaining about unfair treatment, and have been the benefactor of generous charity in reading your polemics that you in no way have earned, given what they actually entail. Show me one place where you've applied your math across generations where those generations each have their own random changes, and incorporate feedback, and I think we will again have something relevant to evolution and/or design to discuss. Barring that, you're just wasting bandwidth here in committing the tornado-in-a-junkyard fallacy, obscured by vague and cryptic and inchoate prose surrounding it.
Excuse me, in english, I believe, “random variation” means just what it means: variation caused by random events.
Variation across generations with feedback loops -- which is what is entailed by evolutionary models -- produces totally different calculations than "random samples". A single configuration pulled at random from a phase space will easily produce vanishingly small probabilities for that configuration. An ensemble of configurations that iterate and vary over generations with accumulating feedback from the environment can (and does) "climb hills", reducing over those iterations the probabilities to favor and even probable (or inevitable) odds. Everything depends on the inclusion of iterations with feedback, coincident with those variations. It doesn't matter if you think that Joe Q. Public on the street supposes "random variation" is just fine as a term for a "random one-time sample". That isn't cognizant of the evolutionary dynamics, and as such, is just tornado-in-a-junkyard thinking.
The only mechanism not included in RV is NS. As you can verify if you read my posts, I have modeled both RV (using the concept of dFSCI) and NS. Whatever you folks may like to repeat, dFSCI is very useful in modeling the neo-darwinian algorithm.
I do not know of any supporter of Darwinian theory that would recognize evolutionary theory AT ALL in your calculations. You don't include any math for variations ACROSS GENERATIONS. You say "I have modeled both RV...", but you HAVEN'T modeled RV as is denoted by "RV + NS". The "RV" in "RV + NS" spreads incremental variances across generations, with feedback also accumulating across those generations. I claim you have not, cannot, and will not provide a mathematical model for "RV" in your dFSCI argument that captures the key, essential dynamic of the "RV" in "RV + NS" -- incremental variations across iterations with feedback integrations accumulating across those same iterations. This is what makes "Methinks it is like a weasel" impossible per your contrived probabilities and producible in just dozens of iterations with a cumulative feedback loop. The math will tell the tale here. We great thing about this is we don't need to rely on polemics or bluster. All we have to do to succeed in our arguments is show our math so everyone can see it, test it, and judge the results for themselves. If you aren't just wasting my time and the time of so many others here, please let me invite you to SHOW YOUR MATH, and demonstrate with an applied example how you calculate the probabilistic resources you use for your "RV" and the probabilities that actualy obtain from the numerators and denominators you identify. This can and should be worked out, agreeably and objectively, by just having each of us support what we say with the applied maths. eigenstate
I understand too, that these “creatures” replicate asexually.
Yes, that's my understanding. I was quite surprised. I usually mate my critters, because they evolve much faster that way :) Elizabeth Liddle
And BTW, asexually reproducing organisms with small genomes is allegedly how the diversity started. Joe
Whatever I am not convinced by anything evos say- way too many problems with their "models". So if you have anything, anything at all that supports the claim that stochastic processes can construct new, useful multi-protein configurations I'd be very glad to hear of it! Joe
I'll have a go at trying to pin point it: I think what you may be saying is that new protein domains are too "brittle" to have evolved - that, in fitness landscape terms, each is separated from its nearest possible relative by a lethal ravine? And that evolutionist are at a loss to explain how incremental, non-lethal variants could have bridged those ravines? Elizabeth Liddle
@49.1.1.1.2 Well, Joe, as I understand it, the “organisms” in AVIDA have very much smaller genomes than in “real life”, and possibly the population sizes are somewhat limited. I understand too, that these “creatures” replicate asexually. My reading indicates that population geneticists would be very unsurprised if there was a build-up of deleterious mutations in a small population of asexually reproducing organisms with small genomes Is not Sanford a YEC, maintaining that all life was created a few thousand years ago, and is even now heading for genetic meltdown? Did he not, in a book he wrote about genetic entropy, use the biblically-reported long lives of Biblical patriarchs as evidence that the human genome was deteriorating? Would he not expect that faster-reproducing species would be teetering on the brink of extinction by now? Yet bacteria, baboons, and blue whales are, as far as I am aware, genetically healthy. You’d think too, that all those thousands upon thousands of generations of E coli that Lenski’s lab bred in their long-term experiments would have revealed some evidence of genetic entropy if it was such a problem. So far as I am aware, no such thing appeared No, I’m not convinced by the work you cite, there are too many problems with the model. But if you have any more evidence for your stance, (and surely this must be a productive area for peer-reviewed ID science) I’d be very glad to hear of it! Bydand
OK, but tbh I think it is you who are confused:
It’s not the same with NS. In NS, as I have told you many times, there is a specific necessity relation between the function of the varied sequence and replication. Now, forget for a moment your adaptational scenarios of traditional darwinism,
Why should I "forget" the very system we are discussing?! Weirdly, I've pointed out several times that you are "forgetting" the environment-phenotype relationship, and you say you aren't - then you tell me to "forget" it! But OK, for the sake of discussion, I will put it to one side....
and try to think a little in molecular terms and in terms of biochemical activity, exactly what darwinism cannot explain. In terms of molecular activity of an enzyme, you can in most cases trace a specific necessity relationship between that activity and replication. For instance, if DNA polymerase does not work, the cell cannot replicate. If coagulation does not work, the individual often dies. And so on.
Sure. Clearly if a variant sequence is incompatible with life or reproduction, the phenotype dies without issue. Only variants that are compatible with life and reproduction ever make it beyond one individual.
There is absolutely no reason to “draw” a “cause-and-effect” relation from a probability distribution. You are really confused here. Probability distributions describe situations where the cause and effect relation is not known, or not quantitatively analyzable. A definite cause effect relation will give some specific “structure” to data which is not explained by the probability distribution. That’s a good way, as you know, to detect causal effects in data.
Well, I can only disagree, and say that from where I am standing it really does seem as though it is you who are confused. Or at any rate it is not clear what you are saying. Certainly we can draw conclusions about cause and effects from probability distributions, and we do so every time we conduct a t test. Not only that, but when we model certain causes and effects, we can model them as probability distributions. I gave a clear example: a variant sequence that leads to light colouring on a moth affects the probability distribution of deaths by predation. So you will have to clarify what you are saying, because what you have written, on the face of it, makes no sense. And I just don't find the rest of your post clarifies it any further. You seem to have constructed a model that doesn't reflect what anyone thinks actually happens. And as for evading into "vague" definitions (if that's what you mean, I'm not sure, given the typos!) well, I'm certainly not trying to evade anything. I'm trying to tie down those definitions as tightly as I can! And the fact is (and it is a fact) that the theory of evolution posits a model in which stochastic processes result in genetic variations with differential probabilities (again, stochastic) of reproductive success. You may not believe that model is a good fit to the data, but modelling something different, and then showing that your model doesn't work isn't going to falsify the theory of evolution because you aren't modeling the theory of evolution! And no, the theory of evolution is not a "useless scientific object". From it we derive testable hypotheses that are tested daily, and deliver important findings that have real benefits, as well as increasing our understanding of the amazing world we live in. But really, gpuccio, we are not communicating at all here. I know it is frustrating for you, but your posts simply are not making sense to me. I can't actually parse what you are saying. And what you seem to be saying seems to me to be demonstrably not true. Natural selection cannot be anything other than a stochastic process, except, I guess, in the extreme case of variants that are incompatible with life, and they don't get passed on, so are irrelevant to the process. What is it that I'm not seeing? What is it that you are not seeing? Elizabeth Liddle
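Elizabeth's point here -- that causal conclusions are routinely drawn from probability distributions -- can be put in runnable form with her own moth example. The survival probabilities and sample size below are invented purely for illustration, and the two-proportion z-test stands in for the t test she mentions; it is a sketch of the inference pattern, not a model of real predation.

import random
import math

# Toy version of the moth example: a variant (light colouring) does not
# deterministically kill its carrier, it shifts the probability distribution
# of death by predation. The survival probabilities below are invented.
random.seed(1)
N = 1000                                    # moths of each colour
P_SURVIVE = {"dark": 0.70, "light": 0.55}   # hypothetical survival chances

survivors = {colour: sum(random.random() < p for _ in range(N))
             for colour, p in P_SURVIVE.items()}

# Two-proportion z-test (normal approximation): infer a causal difference
# between the two colour morphs from the observed counts alone.
p_dark, p_light = survivors["dark"] / N, survivors["light"] / N
pooled = (survivors["dark"] + survivors["light"]) / (2 * N)
se = math.sqrt(pooled * (1 - pooled) * (2 / N))
z = (p_dark - p_light) / se

print(f"dark survival  = {p_dark:.3f}")
print(f"light survival = {p_light:.3f}")
print(f"z = {z:.2f}  (|z| > 1.96 is roughly the 5% significance threshold)")

With these made-up numbers the difference comes out clearly significant; the inference is probabilistic from start to finish, which is the sense in which "cause and effect" and "probability distribution" are not in opposition.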
There is absolutely no reason to “draw” a “cause-and-effect” relation from a probability distribution.
Quantum effects come to mind. Boyle's Law. Maybe others. Petrushka
Elizabeth: I am afraid we will never agree on that. I do think you are confused. The "necessity" created by the laws of chemistry is exactly the kind of necessity that is the base of random systems, like the tossing of a die: necessity it is, but we can describe it only probabilistically. It's not the same with NS. In NS, as I have told you many times, there is a specific necessity relation between the function of the varied sequence and replication. Now, forget for a moment your adaptational scenarios of traditional darwinism, and try to think a little in molecular terms and in terms of biochemical activity, exactly what darwinism cannot explain. In terms of molecular activity of an enzyme, you can in most cases trace a specific necessity relationship between that activity and replication. For instance, if DNA polymerase does not work, the cell cannot replicate. If coagulation does not work, the individual often dies. And so on. Here, it is not so much the case of evading a predator, but of having the fundamental functions by which the cell, or the multicellular being, survives. Those functions are incredibly sophisticated at molecular level. And we have to explain them. There is absolutely no reason to "draw" a "cause-and-effect" relation from a probability distribution. You are really confused here. Probability distributions describe situations where the cause and effect relation is not known, or not quantitatively analyzable. A definite cause effect relation will give some specific "structure" to data which is not explained by the probability distribution. That's a good way, as you know, to detect causal effects in data. Causal effects that can be described (you can say what is the cause, how it acts, and trace the connection in explicit terms) are not "drawn" from a probability distribution. They just "modify" the observed data. If you superimpose a causal relationship (like a sequence that improves survival) you can still see the final effect as probabilistic if it is mixed with other random effects that you cannot know explicitly. So, I agree with you that, if a sequence has a positive effect on survival, and is selectable, it will tend to propagate in the population in a way that is not completely predictable, because too many other unknown factors contribute to the final effect. But that does not make the known effect of the sequence similar to the other unknown factors. Because we know it, we understand it, we can reason on it, and we can model it deterministically. When I assume that a selectable gene is optimally selected, conserved and propagated, I am not making an error: I am simply considering the "best case" for evolution through NS. Although that perfect scenario will never happen, it is however a threshold of what can happen. In any other, more realistic, scenario, the effect of NS will be weaker. That is a perfectly reasonable procedure. It allows us to compute the effect of NS if it were the strongest possible. And to have quantitative predictions for the behaviour of the system under those conditions. And I am not ignoring other sources of variance. I have considered all possible sources of variance in my probabilistic modeling of RV. Then I have considered the maximum beneficial effect that NS can have in improving the result: you would say, with your strange terminology, "biasing" it in favour of the functional result. So, I really don't understand your objections. I believe that my reasoning is correct.
Your objections are not offering any better way to model your proposed mechanism: as darwinists often do, you evade again into vague definitions, and the result is that you fight against any quantitative analysis, because darwinism itself fears quantitative analysis. If you really think that my reasoning is wrong, then propose how to model a specific system, and how to verify that it can do what you say it has done. We cannot go on believing in the fairy tale of neo darwinism only out of faith and vague definitions. If there is no way to verify or falsify what your so called stochastic system can do or not do in reality, it is a completely useless scientific object. gpuccio
But I don’t see why, even if the origins of life were non-stochastic, subsequent happenings couldn’t be entirely stochastic.
So someone/something went through all the trouble to design living organisms and a place for them and then left it all up to stochastic processes? As I said, that would be like saying the car is designed but motors around via stochastic processes. See also: The effects of low-impact mutations in digital organisms Chase W. Nelson and John C. Sanford Theoretical Biology and Medical Modelling, 2011, 8:9 | doi:10.1186/1742-4682-8-9
Abstract: Background: Avida is a computer program that performs evolution experiments with digital organisms. Previous work has used the program to study the evolutionary origin of complex features, namely logic operations, but has consistently used extremely large mutational fitness effects. The present study uses Avida to better understand the role of low-impact mutations in evolution. Results: When mutational fitness effects were approximately 0.075 or less, no new logic operations evolved, and those that had previously evolved were lost. When fitness effects were approximately 0.2, only half of the operations evolved, reflecting a threshold for selection breakdown. In contrast, when Avida's default fitness effects were used, all operations routinely evolved to high frequencies and fitness increased by an average of 20 million in only 10,000 generations. Conclusions: Avidian organisms evolve new logic operations only when mutations producing them are assigned high-impact fitness effects. Furthermore, purifying selection cannot protect operations with low-impact benefits from mutational deterioration. These results suggest that selection breaks down for low-impact mutations below a certain fitness effect, the selection threshold. Experiments using biologically relevant parameter settings show the tendency for increasing genetic load to lead to loss of biological functionality. An understanding of such genetic deterioration is relevant to human disease, and may be applicable to the control of pathogens by use of lethal mutagenesis.
IOW stochastic processes just don't measure up to the task. Joe
It was as good an answer as I can give - I simply don't know how, or whether, a determination of the stochastic nature of these processes was made. But I don't see why, even if the origins of life were non-stochastic, subsequent happenings couldn't be entirely stochastic. So have you any evidence or data to support your inference; or can you say why that inference is justified? There are, I believe, those who think that although a designer gave life its start, life was then pretty much left to get on with living. Do you think this incorrect? Why? Bydand
Umm, that doesn't answer my question; however, I do have a reason to doubt gene duplications are stochastic - the OoL -> as in the only reason to infer gene duplication is a stochastic process is if living organisms arose from non-living matter via stochastic processes. IOW as we have been saying all along - the origin is what counts. We do not say cars are designed but the way they get around is entirely stochastic. Joe
I don't know, Joe. Have you reason, and data or evidence, that cause you to believe that the processes of duplication and modification were anything but stochastic? Bydand
For clarity GAs are a good model for intelligent design evolution and even front-loaded evolution. But they mean nothing to stochastic evolution. Joe
OK, so for clarity, that means that GA's are a good model for evolution? - (leaving aside for just one second the question of whether or not there is/was an intelligent designer - I mean the actual operation of a GA once in place) Bydand
Yes, it depends on the characteristics used. No, not all sets of items possess characteristics that can be used to place them in a rigorously nested, deep, hierarchy. Living things do, as noted by Linnaeus.
Linnaeus did not posit his nested hierarchy on descent with modification and he used it as evidence for a common design. Evos hijacked his idea, replaced archetype with common ancestor, and called it their own.
Your second sentence is just wrong. Check out any paper on cladistics. In fact there are even online programs where you can run the algorithms yourself.
As for cladistics, again clades are constructed based on shared similarities, meaning they are constructed based on the rules of a nested hierarchy. Then from that, ancestral relationships are assumed. Also a clade is only a nested hierarchy in that all descendants of a common ancestor will consist of and contain all of its defining characteristics, but it isn't a nested hierarchy in that the common ancestor does not consist of nor contain its descendants. But we know characteristics can be lost so there is no reason to assume all descendants will have them. Joe
No. NS is a necessity process, that intervenes on and is modulated by random processes.
But this is where I think you are going wrong! As I have said, I think it is spurious to make the distinction between "random" processes and "necessity" processes. If you are defining "random" as "a system whose behaviour can best be described by some probability distribution", then both the process by which a DNA sequence in an offspring is different from those in a parent, and the process by which the possessors of one sequence tend to leave more offspring than another, are "systems whose behaviour can best be described by some probability distribution". On the other hand, if by "necessity" process, you mean one driven by, say, physical/chemical laws, then both processes are also "necessity" processes. A DNA sequence, when it duplicates itself, is exposed to chemical forces that determine just how it recombines itself into a new slightly different sequence. Similarly, when a light moth on a dark tree is eaten by an owl, that is because of the physical processes that mean that more light is reflected from its wings than from the bark, triggering contrast neurons in the owl's visual cortex. Both parts of the system are both "random" and "necessity" processes; indeed the probability distribution function that describes the behaviour of the "random" part has the form it has because of various "necessity" laws. There are good physical/chemical reasons why some mutations are more likely than others, just as there are highly stochastic processes that govern whether or not a light moth happens to catch the eye of a passing owl. What IS different between the two is that variance producing mechanisms have a less systematic relationship with function than "NS". Variants will tend to be, if not orthogonal to function, no more likely to be more useful than the parent sequence than less, unless the population is extremely badly adapted to start with, and may be more likely to be less than more, if the population is already well adapted. On the other hand, NS, by definition, is the process by which sequences that tend to promote better replication are themselves more often replicated - indeed that's all it means. So NS is a bias in favour of better function, whereas RV is not. In fact you could simply describe evolution as the bias by which RVs that promote better replication are filtered from those that don't.
I have tried to make that clear in my modeling. NS is a necessity process because we can define a cause effect relationship between the variation and its consequences on reproduction. For instance, I have modeled the effect of the selection of a possible intermediate, but to do that I had to make specific assumptions on how the intermediate affected reproduction, and on how it was selected (for instance, assuming that the local function of the intermediates directly improved reproductive success, and that it could be optimally selected). Then, having evaluated the necessity consequences of such a selected intermediate, I computed the difference in the global random system in the two different cases, with and without the contribution of the selectable intermediate. So, you see, a necessity model is always different from a random model, even if we include it in a more general model that is mainly random. The necessity model explicitly defines logical relations of cause and effect, and reasons according to those definitions. The consequences of those relations are evaluated deterministically, even if they can be modulated by other random variables.
Well, if that is what you have done, then you have made a major error (and it's the same one made by Dembski, or one of them), which is that you haven't drawn your "cause-and-effect" NS relations from a probability distribution. In fact, even at its simplest, when you have a single sequence with a specified selection coefficient (as in highly simplistic evolutionary models), what I do, at any rate, is express that as a probability of survival given the sequence, not as: all critters with this sequence survive, all the rest die (see the sketch after this comment). But that's ignoring other huge sources of variance that contribute to the total pdf.
Calling all of that “stochastic” will not help, if you don’t analyze correctly the causal relations.
Indeed. But at least it allows us to pinpoint some of those you have failed to model stochastically :) Elizabeth Liddle
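The sketch referred to above is a minimal haploid Wright-Fisher-style simulation. The population size, selection coefficient, and replicate count are invented illustrative values, and it is not gpuccio's model nor a model of any real population; it only shows what it looks like to treat a selection coefficient as a bias on sampling probabilities rather than as "carriers survive, everyone else dies": even a beneficial variant is usually lost to drift.

import random

# Toy haploid Wright-Fisher run: a new variant with selection coefficient S
# is not guaranteed to spread. Selection enters only as a bias on the
# probability of being sampled into the next generation; drift does the rest.
# N, S, and REPLICATES are arbitrary illustrative values.
random.seed(2)
N = 200            # population size
S = 0.05           # selection coefficient of the new variant
REPLICATES = 1000  # independent introductions of the variant

def fixes(n, s):
    """Follow one new mutant until it is lost or fixed; True if it fixes."""
    count = 1  # a single initial copy of the variant
    while 0 < count < n:
        freq = count / n
        p = freq * (1 + s) / (freq * (1 + s) + (1 - freq))  # biased sampling
        count = sum(random.random() < p for _ in range(n))
    return count == n

fixed = sum(fixes(N, S) for _ in range(REPLICATES))
print(f"observed fixation rate : {fixed / REPLICATES:.3f}")
print(f"rough theory (about 2s): {2 * S:.3f}")

On a typical run only around one introduction in ten ever fixes, despite the variant being unambiguously beneficial; that is the sense in which NS itself is modelled as a stochastic process.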
The real problem with the concept of ID is that while the effects of stochastic change are selectable, they are not predictable. Petrushka
Eigenstate, I think you are conflating rules with laws. They are fundamentally different concepts. There's been a lot of confusion in the scientific literature between the two. Citing those sources does not help solve the problem. The only way out is to stop being sloppy with the definitions. Rules are imposed on top of physicality by intelligent agents. They are not changing this reality. Again, e.g. the rules of chess or badminton do not lead to any change in the physical conditions of an actual chess or badminton tournament. It is so simple that I am sometimes puzzled as to why people do not understand such trivia. Your attempts to label this reasoning 'anthropomorphism' do not remove the problem. cheers. Eugene S
Humans violate many rules, that other types of designers may not want to violate. And the modalities of design implementation are obviously different.
Well, since you don't have any actual sightings of the designer of life at work, you can assign any capabilities and motives you can imagine, can't you?
The similarity between human design and biological design is the design itself:
But the design itself looks exactly like descent with modification. Try introducing your immaterial cause hypothesis at a criminal trial. What's the point of hypothesizing an immaterial cause when you have an observable physical cause? The fact that the detailed history of life has been mostly erased does not support science fiction scenarios. I know you are enamored of your virgin birth protein domains, but they really don't present a problem if you look at the math behind them. Petrushka
Elizabeth: No. NS is a necessity process, that intervenes on and is modulated by random processes. I have tried to make that clear in my modeling. NS is a necessity process because we can define a cause effect relationship between the variation and its consequences on reproduction. For instance, I have modeled the effect of the selection of a possible intermediate, but to do that I had to make specific assumptions on how the intermediate affected reproduction, and on how it was selected (for instance, assuming that the local function of the intermediates directly improved reproductive success, and that it could be optimally selected). Then, having evaluated the necessity consequences of such a selected intermediate, I computed the difference in the global random system in the two different cases, with and without the contribution of the selectable intermediate. So, you see, a necessity model is always different from a random model, even if we include it in a more general model that is mainly random. The necessity model explicitly defines logical relations of cause and effect, and reasons according to those definitions. The consequences of those relations are evaluated deterministically, even if they can be modulated by other random variables. Calling all of that "stochastic" will not help, if you don't analyze correctly the causal relations. gpuccio
OK, just checking! That's fine. In which case, would you also agree that Natural Selection is a random process? (I'd use "stochastic" in both cases, for greater precision in English - it has fewer alternative meanings) Elizabeth Liddle
Elizabeth: Not again, please. I mean what I have always meant, as you should know, always the same thing: a system whose behaviour can best be described by some probability distribution, and not by a necessity model. gpuccio
Petrushka: Humans violate many rules, that other types of designers may not want to violate. And the modalities of design implementation are obviously different. Biological design is not implemented in artificial labs, as humans do, but directly in living things and in the living environment, and probably through a direct interaction between consciousness and matter. That creates different possibilities, and different constraints. The similarity between human design and biological design is the design itself: the input of information into matter from conscious intelligent representations. But the modalities of implementation of that information are obviously different. gpuccio
Excuse me, in English, I believe, “random variation” means just what it means: variation caused by random events.
The trouble, gpuccio, is that in English, "random" can mean a great number of different things. What do you mean by it, in this context? Elizabeth Liddle
Just about anything can be put into a nested hierarchy - it all depends on the criteria used. Descent with modification does not lead to a nested hierarchy based on characteristics, as one would expect a blending of characteristics with dwm, and nested hierarchies do not allow for that.
Yes, it depends on the characteristics used. No, not all sets of items possess characteristics that can be used to place them in a rigorously nested, deep, hierarchy. Living things do, as noted by Linnaeus. Designed things don't. Features of designed things, not surprisingly, are constantly being transferred by designers from one design lineage to another. Which is why even cheap makes of car now have features developed in expensive lineages. Your second sentence is just wrong. Check out any paper on cladistics. In fact there are even online programs where you can run the algorithms yourself. Elizabeth Liddle
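Whether a set of items can be placed in a strictly nested hierarchy is itself something that can be checked mechanically, which may help make the disagreement between Elizabeth and Joe here more concrete. The sketch below assumes binary characters with 0 as the ancestral state, and both the "taxa" and the "car features" are invented toy data; it also ignores everything real cladistics has to deal with (character loss, convergence, homoplasy), so it only illustrates the bare nestedness test: a matrix fits a nested hierarchy exactly when no two characters overlap without one containing the other.

# Toy test of nestedness for binary character matrices. Assumes 0 is the
# ancestral state; the taxa, the car models, and all character columns are
# invented for illustration only.

def character_sets(matrix):
    """For each character column, the set of items that have state 1."""
    n_chars = len(next(iter(matrix.values())))
    return [{item for item, states in matrix.items() if states[c]}
            for c in range(n_chars)]

def fits_nested_hierarchy(matrix):
    """True if every pair of characters is either disjoint or nested."""
    sets = character_sets(matrix)
    for i in range(len(sets)):
        for j in range(i + 1, len(sets)):
            a, b = sets[i], sets[j]
            if a & b and not (a <= b or b <= a):  # overlap, neither contains the other
                return False
    return True

# Characters accumulated along lines of descent: strictly nested.
descent = {
    "frog":   (1, 0, 0, 0),
    "lizard": (1, 1, 0, 0),
    "mouse":  (1, 1, 1, 0),
    "human":  (1, 1, 1, 1),
}

# Designed items freely mixing features across "lineages": not nested.
cars = {
    "economy": (1, 0, 1, 0),
    "luxury":  (1, 1, 0, 1),
    "sports":  (0, 1, 1, 0),
    "pickup":  (1, 0, 0, 1),
}

print("descent matrix nests:", fits_nested_hierarchy(descent))  # True
print("car matrix nests:   ", fits_nested_hierarchy(cars))      # False

Whether real character data for living things passes this kind of test far better than designed artifacts do is exactly the empirical claim being argued over above; the code only shows what the test is, not who is right.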
eigenstate: I don't know if you are really interested in a serious discussion (champignon evidently is not). If you are, I invite you too to read my posts here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/ (post 34 and following) and comment on them, instead of just saying things that have no meaning. You say: Once you understand this, that gpuccio isn't even addressing evolutionary processes at all, that his metric neither addresses nor even attempts to consider evolutionary processes, but only looks at what he calls "RV", dFSCI can be apprehended for what it is and where it fits in the discussion (if anywhere). 'RV' was a stumbling point for me, because that expands to "Random Variation" in my mind, where variation implies *iteration* as in evolutionary processes of inheritance with variation across reproductions. For gpuccio (and I understand English may not be his first/primary language, and I certainly couldn't converse in his native language at this level if the tables were turned!), "RV" is really "Random Combination" or "Random Shuffling". That is both wrong and unfair. I have addressed evolutionary processes in great detail. Read and comment, if you like. And yes, English is not my primary language at all, but I don't believe there is any mystery or misunderstanding in the way I use RV. It means random variation, exactly as you thought. But I really don't understand what you mean when you say, in what I hope is your primary language, that: "variation implies *iteration* as in evolutionary processes of inheritance with variation across reproductions.". Excuse me, in English, I believe, "random variation" means just what it means: variation caused by random events. In biology, the meaning of random variation is very clear: any variation in gene sequence that is caused by random events. I prefer the term to "random mutation" because RM could be identified only with single nucleotide mutations, while RV clearly encompasses all the random mechanisms of sequence variation, including indels, chromosome rearrangements, shuffling, and anything else. The only mechanism not included in RV is NS. As you can verify if you read my posts, I have modeled both RV (using the concept of dFSCI) and NS. Whatever you folks may like to repeat, dFSCI is very useful in modeling the neo darwinian algorithm. So, if you want to discuss, then please address my real points. Otherwise, go on with your unilateral expressions. You are in good company. gpuccio
champignon: I have answered your "arguments". You go on repeating the same things, without adding anything, and without addressing any of my points. I have invited you to comment about my detailed modeling of RV and NS. You haven't. I have nothing more to say to you. I wish you good luck. gpuccio
At the risk of being repetitive - then again, what is not getting repeated? For the sake of argument I'm setting aside the numerous contradictions in such hierarchies. Let's just say that all living things fit neatly into nested hierarchies. It's absurd to cite the hierarchies as evidence while ignoring that pretty much every single life form that makes them up defies explanation by any of the mechanisms cited as the reasons why they are arrangeable in hierarchies. Put another way, darwinian mechanisms predict a hierarchy. They do not predict avian lungs, bats, insect metamorphosis, human intelligence, or really anything else placed in those hierarchies. So merely pointing out that there are nested hierarchies is cherry-picking for confirming evidence. Everything that tells you what you want to hear is confirmation. The contradictory evidence is just a temporary uncertainty, a gap. You've already decided what must fill it, so the contradictions can be safely ignored, and you can assume that whatever fills them will meet your expectations. ScottAndrews2
Just about anything can be put into a nested hierarchy - it all depends on the criteria used. Descent with modification does not lead to a nested hierarchy based on characteristics, as one would expect a blending of characteristics with dwm, and nested hierarchies do not allow for that. Joe
Comparative genomics as well as comparative anatomy can be used as evidence for a common design, which is something we observe and have experience with. What we need is some way to take fish, for example, perform some targeted mutagenesis on fish embryos, and get them to start developing tetrapod characteristics until they eventually become tetrapods - studying the mutation-phenotype effect relationship(s) along the way. And if we cannot perform such a test then what good is the claim of universal common descent, seeing it would not have any practical application at all? Might as well be philosophy... Joe
We also have direct evidence that designers do not create nested hierarchies. ID advocates love to cite humans as creators of dFSCI, but they are reluctant to acknowledge that humans routinely violate the rules of descent with modification. Not just in the creation of mechanical devices. When humans engage in genetic engineering they tend to copy entire genes across phyla, even across kingdoms. We have evidence of horizontal gene transfer in nature, but we do not have evidence of the kind of manipulations done by human agriculture and by people engaged in medical research. We have, for example, evidence that viruses can insert genes into the human genome, but we see no natural occurrence of human genes, like insulin, inserted into bacteria. This "backwards" transfer would cause one to suspect design. Particularly if it violated Darwin's rule of thumb, that evolution would be violated by a characteristic in one species that only benefits another species. Moving right along, there are potential kinds of forensic evidence that would support design. Our successors or descendants will be able to look at the evidence and see where we have tampered with genomes in our engineering efforts. Petrushka
It's all about gaps, isn't it? Darwin asserted common descent and predicted gap fillers before there were any hominid fossils, before there were any whale fossils, before there were any fossils blending the characteristics of birds and reptiles. He even asserted that evolution could occur at different rates at different times and places, anticipating punk eek. (The evidence for smooth evolution is much better at the molecular level than at the level of bone length. Small changes in regulatory sequences can make dramatic differences in the size and shape of animals, as can be seen in dogs.) Now we can do comparative genomics as well as comparative anatomy, and we can create and test gap fillers at the molecular level. You cannot "prove" history. You can only say, as police detectives do, that this hypothesis requires that certain things had to have been possible. Each time you find one of those required things, you support the hypothesis. Jury trials do not provide logical proof that a particular hypothesis is correct. They just look at evidence and decide if a conclusion is sufficiently warranted to deprive a person of life or freedom. Juries routinely do this with far less evidence than we have for evolution. Evidence gets erased over time. You will never find the videotape of evolution. You will never find a fossil for every species that ever lived. We can't even find fossil evidence for passenger pigeons. Petrushka
Petrushka, One has only to search this very page to find your references to Thornton 'doing research' on the subject. To paraphrase gpuccio, that's great that he's doing research. What has he found? I understand that when your conclusion is assumed, any research in the area looks like progress. If Thornton had found evidence that distinct proteins or any other such functionality were connectable by an evolutionary search, that would not be "interesting." It would be one of the most significant discoveries in biology, ever. The hardest part would be announcing it without anyone noticing that the ship had already sailed decades ago without anyone bothering to wait for such evidence. How do you "prove" what everyone says they already know? ScottAndrews2
That's why direct evidence of connectability is so interesting. Such as Thornton's. Petrushka
Champignon,
If the functional space is connected and all life is related by descent, then we should see a single nested hierarchy.
Chas was clear that the evidence for evolution (descent) is not evidence of any particular mechanism at work. You appear to be saying the opposite. A nested hierarchy can be used as evidence of descent. (I'm not addressing at this point how well this particular hierarchy does so.) It is not evidence that the various items in the hierarchy are connectable by any particular mechanism such as variation and selection or gene duplication with variation. The capacity of those mechanisms to connect items in the hierarchy (or whether they are so connectable - same question, different words) requires its own evidence. You cannot smuggle it in with nested hierarchies. Determining that B descended from A does not indicate how or why the variations between A and B came about. It does not follow that any given mechanism must have been the cause. You're trying to sneak in a conclusion where it isn't warranted or even supported while ignoring evidence to the contrary. ScottAndrews2
Bydand: "I also fail to see why, once there is a system of replication with variation, some random event process such as gene duplication followed by mutation can’t result in a new function – thereby incrementing the repertoire of functions in that genome. I believe such things have been documented." How was it determined that a gene arose via stochastic processes, was duplicated and modified via stochastic processes? You need that first. Joe
Yes, biological evolution by design. Joe
So GA's are easily implemented; can be used to vary, search, select, and improve; and they are not explicitly required to produce useful results. A bit like biological evolution, then? Bydand
kairosfocus attempts to defend gpuccio:
Again, it has long since been pointed out that the differential reproductive success leading to culling out of less successful variants SUBTRACTS information in the Darwinian algorithm, it does not ADD it... What adds information, if any, is the chance variation.
Which is quite sufficient. Suppose a particular allele has become fixed in a population. There is a random gene duplication followed by random variation of one of the copies. The variation confers a selective advantage and becomes fixed by selection. You now have a net increase in the information contained in the genome (which is why Upright Biped and nullasalus were afraid of answering Nick Matzke's question about gene duplication in another thread). Variation plus selection together have increased the information content of the genome (a toy version of this bookkeeping appears after this comment). Since gpuccio freely acknowledges that this happens, I'm not sure why you are arguing otherwise in his defense.
Of course there is a long debate on how we can assume a continent of function across the domain of the tree of life, but in fact there is little or no actual empirical support for that. The debate boils down to this: the advocates of darwinian macro evo want there to be such, assume life forms MUST have arisen by that sort of thing, and demand the default.
Evolutionary biologists don't claim "continents" of function. They just claim that the functional space is connected. The evidence for that is massive:
1. If the functional space is connected and all life is related by descent, then we should see a single nested hierarchy. 2. Analysis of the evidence confirms this to an astounding degree of accuracy: the monophyly of life has been confirmed to better than one part in 10 to the 2680th power; the nested hierarchy of the 30 major taxa has been confirmed to better than 1 in a trillion trillion trillion trillion. 3. Design does not predict a nested hierarchy.
To argue that for disconnected "functional islands", you would have to argue that
1. The islands are far enough apart that evolution can't jump the gaps, but close enough together that a nested hierarchy is still produced. 2. The designer avoids the trillions of ways of designing life that would not produce a nested hierarchy, and insists on designing in a way that produces a nested hierarchy and is therefore indistinguishable from evolution.
If you want to make those ridiculous claims, be my guest.
He is then entirely in order to draw the inductive inference that such dFSCI is an empirically reliable sign of design as cause.
dFSCI would support such a conclusion only if you could show that dFSCI cannot be produced by evolution. Even gpuccio acknowledges that dFSCI does not take evolution into account:
As repeatedly said, I use dFSCI only to model the probabilities of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately.
champignon
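The "toy version" mentioned above is just bookkeeping, not a resolution of the dispute: it counts raw storage capacity at 2 bits per base, which is not the same thing as gpuccio's functional information, and the gene length and mutation count are invented. It only makes the duplication-plus-divergence arithmetic concrete.

import math
import random

# Toy bookkeeping for duplication plus divergence. "Information" here is raw
# storage capacity (2 bits per base), not functional information; the gene
# length and the number of mutations are arbitrary illustrative values.
random.seed(0)
BASES = "ACGT"

def capacity_bits(seq):
    return len(seq) * math.log2(len(BASES))

gene = "".join(random.choice(BASES) for _ in range(300))

# Duplication: the genome now carries two copies of the gene...
dup = list(gene)
# ...and the second copy diverges by a handful of point mutations.
for _ in range(5):
    i = random.randrange(len(dup))
    dup[i] = random.choice(BASES.replace(dup[i], ""))

genome_before = gene
genome_after = gene + "".join(dup)

print(f"capacity before duplication: {capacity_bits(genome_before):.0f} bits")
print(f"capacity after  duplication: {capacity_bits(genome_after):.0f} bits")
print("the two copies now differ at",
      sum(a != b for a, b in zip(gene, dup)), "positions")

Whether that counts as "new information" in the sense that matters for dFSCI is precisely what gpuccio and champignon disagree about; the sketch only shows that the sequence content of the genome grows and then diverges.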
Hi eigenstate, I wrote: "Petrushka, have you ever tried to convert your genetic algorithm into another type of algorithm/heuristic?" You responded,
Some types of GAs produce algorithms and finite automata directly; that is what the “animals” are, in some GA implementations. Tierra, for example, works at the instruction level for a virtual machine. That means that what gets created are “programs”, discrete configurations of instructions that consume memory resources and CPU cycles. In those kinds of implementations, such GAs are a kind of “mother of all algorithm generator”.
(I'll assume that you didn't intend to answer my question, but just used it as a springboard for providing your own thoughts on the efficacy of GAs.) Let's be clear that GAs usually make use of heuristic variations that are rewarded by a fitness assessment, in accordance with desired outcomes. This is intelligent variation merged with intelligent selection. Genetic algorithms map inputs to outputs by way of intelligent selection and variation acting upon probabilistic necessity. They are a special case of iterative improvement, and are desirable for their simplicity. But citing a genetic algorithm as a demonstration of what "evolution" can accomplish is begging the question.
But even so, we are optimizing in those cases a design we’d never have come up with on our (human) own. The brute force got us very close to something we can make good use of, and we take that innovation and “put the frosting on”.
It's unclear what sort of "innovation" you're referring to, and whether you're crediting the GA with innovation or optimization, and whether the "frosting" is optimization or innovation. It wouldn't surprise me if you credit a GA with innovation, and human intelligence with the frosting, especially considering I take the exact opposite view. Consider a GA intended to optimize a shovel, perhaps the head or even the handle. Now it's quite possible that a shovel could be optimized by a GA, as well as by other methods. But the GA will only produce shovels, it will not innovate an excavator -- EVER -- or any other type of unexpected, configurationally unrelated device. It will output variations which are explicitly dependent on a priori established parameters. The very reason a GA works is because it can be programmed to take shortcuts corresponding to specific functional requirements, not because it magically traverses vast expanses of unexpected configuration spaces, to produce novel, never-before expected results. The output is a direct, necessity-driven result dependent upon variation parameters and initial state.
That produces a different kind of asset. It’s GA-generated, for the most part, but human tweaked to form a kind of functional hybrid.
"GA-generated" implies a novelty otherwise completely unknowable from the beginning. Computation saves us time, it doesn't do its own innovating. Algorithms and heuristics, along with the computers which run them, automating the finding of solutions in a defined space, GA or not, are examples of artifice. Computers are glorified calculators. They do exactly what they're told. Their tasks can all be carried out, step by step, directly by a human being, although it's computationally impractical to do so. We employ algorithms, "genetic" or otherwise, to automate with speed, by making use of computers, a marvel of human design, engineered for the very purpose of automating intelligent tasks at a clip. I wrote, "Do you think it’s possible to construct a heuristic that can generate comparable output with higher efficiency?" You replied,
Depends on what you mean by efficiency.
I'll take that as a "yes." I'm happy to stick with time complexity, f(x) = O(g(x)), as the fundamental measure of an algorithm's efficiency (perhaps with some consideration given to memory footprint).
There is a profound insight into the aphorism “nature is the most clever designer”. Natural processes are unthinkably slow, and terrifyingly expensive in terms of resources consumed, but because they are not human, and not bound to human limitations of patience, persistence and creativity, they are demonstrably more efficient than human designs because they are immensely scalable.
This is begging the question. The question is whether natural processes, independent of the clockwork mechanics of living systems, can produce the systems under consideration. Ascribing the integrated sophistication and operation of a DNA-based self-replicator to material processes -- to "evolution" -- is begging the question, if by "evolution," we mean, "what living systems can do." There is no theory which can even hope to span the gap at this point, from physicochemical interactions in matter, to living systems. I'm reserving the right to remain unimpressed with a putative force, defined as "evolution," which presupposes the system it attempts to explain.
If you intend “shortest route in time and resources to a workable solution”, for many targets, humans are more efficient, and by many orders of magnitude. Humans have a “forward looking” simulation capability that can accomplish not just a couple, but many integrated steps that are staggeringly difficult to arrive at in an incremental, stochastically-driven search.
Humans are more efficient at certain things, simply because humans can solve difficult problems, such as path finding, by way of conscious intelligence (itself a seemingly intractable phenomenon). Where these abilities are amenable to quantification or modeling, they can be converted to heuristics or algorithms, and optimized by taking advantage of sophisticated calculators (computers). GAs produce novel solutions, much the same as hammers and nails produce structures. A GA is a hammer; it can do nothing on its own, but relies on the skill of the builder.
So humans are much more efficient in one class of solution finding. And they are absolutely pathetic compared to impersonal, mindless, incremental processes that don’t care about anything at all, and thus will embarrass humans when it comes to brute force methods for solutions.
Again, this is question begging. We have no empirical indication that mindless methods can do much of anything with regard to innovating the staggering sophistication present in living systems, even in deep time. Also, computation doesn't embarrass humans -- it's an edifice of human intelligence, even if the computation includes exploring pre-defined spaces by generating pseudo-random variations of the initial product, for the purposes of optimization. Remove intelligent variation and selection from a GA, and what's left is a true brute force search, truly random and utterly inept at traversing any but trivial spaces.
For that class of solutions, humans are useless, and brute force, scalable methods (like evolution) are vastly more efficient in creating effective and durable designs.
I'm not sure which class of solutions you refer to, for which humans understand nothing of the solution space or the problem being modeled.
This is one reason why ID strikes so many scientists as a conceit. Once you understand the tradeoffs, what impersonal, brute force search processes are really good at and what human schemes are good at, observed biology is decidedly a “brute force” product. As glorious as humans are, it’s a folly to think that kind of intelligence can compete with the mind-numbing scale of deep-time, vast resources, and stochastic drivers that just… never… stop. If there is a feedback loop in place (which there is), humans are great at local, small, and highly creative short cuts, but are wannabes at macro-design, designs that adapt, endure, thrive over eons.
Conceit is calling a computer a natural process, then proclaiming that natural processes produce computers, without an empirically verifiable proposed mechanism. If one is going to assess the provenance of living systems by material causes, one should invoke processes extrinsic to the configuration of the system, instead of appealing to the capabilities unique and specific to the system's sophisticated construction. Deep time does not rescue intractable searches through vast combinatorial spaces. A 256 bit key lies in a space of around 1.2*10^77. In 4.6 billion years of 10^20 attempts per second, only about one out of every 8*10^39 sequences will have been tested. 256 bits is about equivalent to a 54 character alphabetic string such as this one: "abcdefghijklmnopqrstuvwxyz_abcdefghijklmnopqrstuvwxyz_" The above string will simply never be found by a blind (random) search. Nor will it likely be found by searches which make use of linguistically common features of language in their variation and selection. The above can however readily be found by a heuristic that intelligently presupposes a lexicographical sorting of sequences where 'a' < 'b' < ... < 'z' < '_'. Such is the case with any search through a space of that size. A genetic algorithm, or alternate heuristic, must be able to cull the 2^256 space to a manageable size, via initial, intelligent, parametric optimization. It's the intelligence required to establish this parametric control which should impress. I presume that not many processes modeled by a GA depend inherently on the GA itself, but rather on the heuristic/algorithmic processes of targeted variation and intelligent selection, which effectively cull the search space. I think it's generally the case that a GA is replaceable by other, more efficient and direct methods of heuristic iterative improvement. This is not to say that genetic algorithms can't be used to vary, search, and select -- and thereby improve -- only that they are not explicitly required to produce results that have concrete, real-world applications. GAs are desirable because they are simple to implement. Where computation is expensive, such as in realtime, other approaches will likely be favored. Even where GAs may be uniquely helpful, they are likely non-critical. From The Algorithm Design Manual 2nd Edition, by Steven S. Skiena, Section 7.8: Other Heuristic Search Methods, page 267,
Take Home Lesson: I have never encountered any problem where genetic algorithms seemed to me the right way to attack it. Further, I have never seen any computational results reported using genetic algorithms that have favorably impressed me. Stick to simulated annealing for your heuristic search voodoo needs. (original emphasis) © Springer-Verlag London Limited 2008
Best, m.i. material.infantacy
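As a quick check of the combinatorial arithmetic in material.infantacy's comment above (the 256-bit keyspace versus 4.6 billion years of 10^20 blind guesses per second), a few lines suffice; the only added assumption is the length of a year.

# Check of the brute-force arithmetic above: a 256-bit space versus
# 4.6 billion years of 1e20 blind attempts per second.
SPACE = 2 ** 256
SECONDS_PER_YEAR = 365.25 * 24 * 3600
attempts = 4.6e9 * SECONDS_PER_YEAR * 1e20

print(f"size of the 256-bit space: {SPACE:.2e}")            # about 1.16e77
print(f"total blind attempts     : {attempts:.2e}")         # about 1.45e37
print(f"fraction of space sampled: {attempts / SPACE:.2e}")
print(f"i.e. roughly 1 in {SPACE / attempts:.0e} sequences ever gets tested")

That confirms the ratio quoted in the comment; it says nothing by itself about whether biological search is anything like a blind draw from such a space, which is the real point in dispute.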
That humans can design, and produce objects exhibiting very high levels of dFSCI, is a fact (not a theory).
It's neither a fact nor a theory that humans can design coding sequences for proteins or for regulatory networks without using some variety of evolution. Nor has it been demonstrated that humans can design a completely novel sequence without using evolution and GAs. Not only do you have no candidate for the designer of life (except in your imagination), you have no precedent for the design of a complex biological molecule that is not derived from an existing one, with variation. Once you conceded that "intelligent" selection can assist in designing molecules and sequences, you conceded that functional space can be traversed by incremental change. The "top down" portion of your scenario simply translates to copying what is already known to work. There is no theory that assists in designing new sequences from scratch. Petrushka
In that case, I will attempt to limit my appearances here to those threads not hosted by you; asking only that you read the paper I commended to your attention - and one of Axe's, to wit Proc Natl Acad Sci U S A. 1996 May 28; 93(11): 5590–5594. Active barnase variants with completely random hydrophobic cores. I should be interested in your comments on both. Bydand
But your own repeated question – how do we know that the spaces aren’t connected – indicates that you don’t know that they are. That’s the whole question.
Most of evolutionary biology is directly or indirectly concerned with this question. It's only been in the last ten years that we have the technology to address it directly and experimentally. I suspect we will see more experiments like Thornton's that will directly probe the connectability of cousin sequences. Petrushka
@Eugene S#41.1.2.1.5,
Excellent. Only one more step to take. What you now have to show is nature using rules not laws in anything other than life, which is the case in point in this discussion. Rules are arbitrary and are independent of physical reality.
They are? That's quite a curious distinction. See, for example, this about.com entry on the laws of physics: http://physics.about.com/od/physics101thebasics/p/PhysicsLaws.htm
Sir Isaac Newton developed the Three Laws of Motion, which describe basic rules about how the motion of physical objects change. Newton was able to define the fundamental relationship between the acceleration of an object and the total forces acting upon it.
(my emphasis) Do you suppose those "rules of motion" are "arbitrary and are independent of physical reality"? No matter, in any case, it's not important which word we use, we just need to be pointing at the same concept so we can communicate. The key feature of a code is that it is CONSTRAINED and DETERMINISTIC such that for any given input symbol X, an associated output Y is produced. This concept doesn't depend on, or care at all about, whether the rules/laws/constraints are brute facts of physical law (like the "rules about the motion of physical objects"), or rules a programmer just pulled out of thin air on a whim. The provisioning of the rule is not relevant to its status AS a rule (er, production, recipe, mapping, suggest your own preferred term that fulfills the conceptual requirements).
No matter how we define the moves of a knight in the game of chess, it will have the same gravitational pull.
Yes, but what you are describing here IS a rule. It's just a natural rule, one you can't change, and you didn't create. It's consistent, predictable, symmetric. It's as "rulish" as a rule can get!
Nature does not care about rules.
Whoa. Setting aside the problematic anthropomorphic language inherent in "care", there (nature doesn't "care" about anything, it's impersonal, in my view), the only thing Nature DOES care about is its rules. That's what science does -- reverse engineers the rules of nature. Nature "cares" about them so much, and enforces them so thoroughly, that we cannot breach them, not even a little bit, not ever, so far as we can tell (religious superstitions notwithstanding, of course).
The only sensible scenario where rules are present is intelligence using the rules. In other words, whenever we see rules at play, we may be sure that the scene has been set up by intelligent players.
I have no idea what you mean by "sensible", there. That's an arbitrary distinction, some sort of ad-hoc criteria, special pleading. A rule is a rule, if it operates as a rule. Whether you can change it, created it, or like it is perfectly immaterial to its status as a rule. eigenstate
41.1.2.1.1 "The code is just a rule". Excellent. Only one more step to take. What you now have to show is nature using rules not laws in anything other than life, which is the case in point in this discussion. Rules are arbitrary and are independent of physical reality. No matter how we define the moves of a knight in the game of chess, it will have the same gravitational pull. Nature does not care about rules. The only sensible scenario where rules are present is intelligence using the rules. In other words, whenever we see rules at play, we may be sure that the scene has been set up by intelligent players. Eugene S
So as soon as you have a copying system you have an information transfer system, ergo you have a code.
I was referring to the RNA/DNA translation code. Of course regulation of development is not easily classified as deterministic, because the outcome is modified by the environment during development. In my simple way of thinking, that makes design by foresight rather difficult, and design by cut and try more likely. I would certainly like to see a proposal from the ID community as to how a designer would avoid cut and try with regulatory genes. Petrushka
Well, by that definition, replication came first. But I don't think it's a very useful definition. Elizabeth Liddle
eigenstate: I am not fastidious about the meaning of words usually: the only important thing is that we clarify what we mean by them. Still, I have the impression that you are stretching a little bit the usual meaning of the word "code" here. Wikipedia defines it as follows: "A code is a rule for converting a piece of information (for example, a letter, word, phrase, or gesture) into another form or representation (one sign into another sign), not necessarily of the same type." So, in a strict sense, just copying the information as it is would not be a code. But if you want to define as a code any form of mapping, even of a thing on itself, I have nothing against it. Again, the important thing is to agree on the meaning. Regarding OOL, I believe that the only code that is relevant is the genetic code, the mapping of a sequence of nucleotides (be it RNA or DNA) to protein sequences. That has to be a code, because AAs are not nucleotides, and even if in the beginning the code could have been different (although we have no evidence of that), a code had anyway to be present, if information stored as nucleic acid was ever converted to protein information. So, the problem would be, was it replication without a code for proteins, or the opposite? I believe that replication without a code for proteins could make sense only in: a) A protein first OOL (very unlikely: no information to be propagated). b) The RNA world scenario (which IMO is as unlikely, but certainly more trendy). c) Some other form of ill defined primordial replication, like in metabolism first scenarios, or other weird hypotheses (and ill defined is definitely a euphemism). In all other cases, it would have to be: code first, then replication. Which, I believe, is even more problematic. That's why, being stupid and very simple, I believe it was: code and replication at the same time. gpuccio
@Joe, You must be pulling Elizabeth's leg with that request, but in case you are not.... A "straight copy" is the most basic mapping there is:
Source    Target
================
'a'  =>  'a'
'b'  =>  'b'
'c'  =>  'c'
'd'  =>  'd'
...
[space]  =>  [space]
'?'  =>  '?'
This is the most straightforward code there is, simply copying the source data to the target. If we look at a ROT13 code, we get a map like this:
Source    Target
================
'a'  =>  'n'
'b'  =>  'o'
'c'  =>  'p'
'd'  =>  'q'
...
'x'  =>  'k'
'y'  =>  'l'
'z'  =>  'm'
...
[space]  =>  [space]
'?'  =>  '?'
If we want to use a code that maps chars to binary strings:
Source    Target
================
'a'  =>  '01100001'
'b'  =>  '01100010'
'c'  =>  '01100011'
'd'  =>  '01100100'
...
'x'  =>  '01111000'
'y'  =>  '01111001'
'z'  =>  '01111010'
...
[space]  =>  '00100000'
'?'  =>  '00111111'
Maybe we want to use Morse Code:
Source    Target
================
'a'  =>  '*-'
'b'  =>  '-***'
'c'  =>  '-*-*'
'd'  =>  '-**'
...
'x'  =>  '-**-'
'y'  =>  '-*--'
'z'  =>  '--**'
...
[space]  =>  ' '
'?'  =>  '**--**'
These are very basic codes, different ways to map input symbols to output symbols. Given a source symbol, there is a deterministic output symbol. That is, it's a code. The cases above are bidirectional codes; given an output symbol, we can deterministically produce the input symbol. Not all codes are like that -- C++ compilers translate text based instructions that humans can read into machine code; the original text of the source code cannot be produced going the other way, from machine code to source (which is a great thing for many developers who wish to protect their intellectual property). A code is just a rule for conversion or translation. Trees encode historical weather data into their tree trunks, producing a code that we can (and do) use to obtain information about the age of the tree, and the climate dynamics it has experienced in its life. Some codes are human designed and exist just for human purposes, others are just the effects of brute physics, translations and isomorphisms produced by the interactions of matter and energy according to physical laws. eigenstate
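For what it's worth, the mapping tables above are trivially runnable. This is only a restatement of eigenstate's tables in code (with the Morse table truncated to a few symbols, and the output joined by spaces purely for display); nothing in it is meant to settle the rules-versus-laws question being argued.

# Runnable version of the mapping tables above. Each "code" is just a rule
# taking a source symbol to a target symbol; MORSE is truncated for brevity.
LETTERS = "abcdefghijklmnopqrstuvwxyz"

IDENTITY = {c: c for c in LETTERS + " ?"}
ROT13 = {c: chr((ord(c) - ord("a") + 13) % 26 + ord("a")) for c in LETTERS}
BINARY = {c: format(ord(c), "08b") for c in LETTERS + " ?"}
MORSE = {"a": "*-", "b": "-***", "c": "-*-*", "d": "-**", "?": "**--**"}

def encode(message, code):
    """Apply a code: deterministically map each input symbol to its output."""
    return " ".join(code[ch] for ch in message if ch in code)

for name, code in [("identity", IDENTITY), ("rot13", ROT13),
                   ("binary", BINARY), ("morse", MORSE)]:
    print(f"{name:>8}: {encode('abcd', code)}")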
So as soon as you have a copying system you have an information transfer system, ergo you have a code.
Could you please provide a reference for that? If I am copying a paper verbatim, what code am I using? If I copy that same paper but want to encrypt it, then I apply some code. Joe
Re "replication preceded the code". Replication, by definition, involves information transfer. If there is a copy of something, it contains straightforward information about the thing it is a copy of. So as soon as you have a copying system you have an information transfer system, ergo you have a code. And as soon as you have a slightly unfaithful copying system in which the copies vary in their capacity to re-copy themselves you have both "RV" and "NS", and therefore adaptive evolution. Elizabeth Liddle
Ch: Again, it has long since been pointed out that the differential reproductive success leading to culling out of less successful variants SUBTRACTS information in the Darwinian algorithm, it does not ADD it. Recall: CV + DRS (aka NS, aka survival of the fittest) --> DWM, aka evolution. What adds information, if anything, is the chance variation. The problem there is that such can only work in a context where we have small functional increments, leading generally uphill in an island of function.

Of course there is a long debate on how we can assume a continent of function across the domain of the tree of life, but in fact there is little or no actual empirical support for that. The debate boils down to this: the advocates of Darwinian macro evolution want there to be such a continent, assume life forms MUST have arisen by that sort of thing, and demand it as the default. Not so; in science, the default is that major claims need to be empirically warranted per observation.

In fact, as has been pointed out in general, once function depends on specific arrangement and co-ordinated combination of component parts, this tends strongly to isolate functional configs in the space of possible configs. By far and away, most possible configs are gibberish and non-functional. Tornadoes in junk yards do not by happenstance assemble D'Arsonval galvanometer based instruments, much less 747s. At molecular scale, thermal agitation etc. will not credibly assemble ATP synthase, nor will accidents in a genome credibly create the code for such -- indeed it takes ATP to make things go! Similarly, at gross body plan level, the way we get from a bellows lung to an avian one-way-flow air sac lung is not going to be by incremental changes, absent observational evidence that shows such in detail. Or, for that matter, any number of similar cases at body plan level. Proteins of course come in isolated folds, and Gould summarised how fossils come in by sudden appearance, show stasis, and then vanish. Relatively small random genome changes are often grossly damaging or outright lethal, to the point where we have a civilisation-wide phobia about radioactivity and cancer etc. (Most people don't seem to realise that radioactivity is all around us; e.g. even the friendly local banana is a significant source. In the days when I regularly played with GM tubes, we expected a 15 count per minute background, at least here in the Caribbean. Of course, the point is that significant excess is dangerous, but we need to have a much more level-headed discussion about radioactivity.) In short, the empirical evidence strongly supports the existence of islands of function in biological forms also.

So, the problem is that the dominant school of thought cannot explain how new body plans beyond the FSCI threshold arise, and has tried to impose itself as a default. GP is entirely correct to point out that dFSCI is an empirical reality in life, in DNA and so also proteins etc. He is entirely in order to point out the implied config spaces and search challenges, and he is entirely in order to point out that on widespread empirical observation, the known and only known routine source of dFSCI is design. Posts in this thread are cases in point, for example. He is then entirely in order to draw the inductive inference that such dFSCI is an empirically reliable sign of design as cause. Case of coded information, prescriptive information, and underlying algorithms, as well as the communication system implied by that.
He is then entirely within his scientific, epistemic rights to point to cases in living forms and infer this is best explained on design. That this cuts across the worldview preferences of the dominant evo mat school of thought is simply a statement about their preferences, not about what the warrant points to. There is, and has always been, excellent reason to infer that life forms point to design. GEM of TKI kairosfocus
Hi eigenstate,
I think it was you who pointed out the other “key” for me (in addition to Dr. Liddle’s insight about the “F=1 | F=0” thing) on this. gpuccio is offering dFSCI as a cryptogram challenge.
It was actually Petrushka who made that analogy.
dFSCI, then is only relevant in a “pre-evolutionary”, OOL context, where incremental evolutionary processes have not yet been reified. That’s certainly gpuccio’s prerogative to muse about the random shuffling he supposes is necessary in abiogenesis, prior to evolutionary/reproductive biology, but that’s the box it must live in, by his own design.
Actually, dFSCI can't be used to rule out random shuffling even in an OOL context, unless it is shown that there is only one 'target' -- a single type of primordial replicator that could kickstart evolution. Otherwise you have the problem of 'retrospective astonishment' all over again, where you compute the probability of hitting the target that you actually hit, rather than the probability of hitting any target that could have kickstarted the process. gpuccio has dug a deep hole for himself. champignon
@champignon,
I feel exactly like eigenstate:
That’s frankly outrageous — dFSCI hardly even rises to the level of ‘prank’ if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out…
You have an obligation to make it clear in future discussions that dFSCI is utterly irrelevant to the “designed or evolved” question. In fact, since dFSCI is useless, why bring it up at all? The only function it seems to serve is as pseudo-scientific window dressing.
I think it was you who pointed out the other "key" for me (in addition to Dr. Liddle's insight about the "F=1 | F=0" thing) on this. gpuccio is offering dFSCI as a cryptogram challenge. Once you understand this, that gpuccio isn't even addressing evolutionary processes at all, that his metric neither addresses nor even attempts to consider evolutionary processes, but only looks at what he calls "RV", dFSCI can be apprehended for what it is and where it fits in the discussion (if anywhere).

"RV" was a stumbling point for me, because that expands to "Random Variation" in my mind, where variation implies *iteration*, as in evolutionary processes of inheritance with variation across reproductions. For gpuccio (and I understand English may not be his first/primary language, and I certainly couldn't converse in his native language at this level if the tables were turned!), "RV" is really "Random Combination" or "Random Shuffling". Or, better, as you put it, a cryptogram -- a random guess at some ciphertext encrypted with a random key. This only obtains, at most, to questions about abiogenesis. It cannot possibly be relevant to phenomena addressed by evolution, because evolution NOWHERE invokes such a shuffling process. It is an incremental process, and neither predicts, nor sees a practical way for, such "luck" to obtain.

dFSCI, then is only relevant in a "pre-evolutionary", OOL context, where incremental evolutionary processes have not yet been reified. That's certainly gpuccio's prerogative to muse about the random shuffling he supposes is necessary in abiogenesis, prior to evolutionary/reproductive biology, but that's the box it must live in, by his own design. And given what we know, even vaguely, about the probabilities and phase spaces for abiogenesis -- which is virtually nothing -- such calculations aren't even worth the bandwidth it takes to send them over teh interwebs. gpuccio would be world famous (er, maybe he is, and simply hasn't told us!) if he had any grounding for his probability calculations for abiogenesis. Barring such stupendous breakthroughs, dFSCI is totally vacuous, not even a shot in the dark as a measure of the probabilities he claims. Forget being specific; he hasn't even got the numerators and denominators ROUGHLY approximated, not even to within some orders of magnitude, plus or minus a lot, to offer a probability calculation with a straight face. eigenstate
@material.infantacy# Petrushka, have you ever tried to convert your genetic algorithm into another type of algorithm/heuristic?

Some types of GAs produce algorithms and finite automata directly; that is what the "animals" are, in some GA implementations. Tierra, for example, works at the instruction level for a virtual machine. That means that what gets created are "programs", discrete configurations of instructions that consume memory resources and CPU cycles. In those kinds of implementations, such GAs are a kind of "mother of all algorithm generators".

In my past work, I (we, my team and I) have taken novel designs produced by GAs doing their brute force search and getting close to something we see as (commercially) valuable, and "tweaked" them to push them to a more optimal position, given our goals. Since the GA is brute force, and relies on stochastic inputs to explore the search space, this is a time-saving measure for us, as we'd otherwise have to wait for it to sniff around on its own to get there (and even small adjustments toward where we'd like to see it go can take a VERY long time -- anyone who's worked with GAs will be perfectly clear on how "blind" they are in that respect). But even so, in those cases we are optimizing a design we'd never have come up with on our (human) own. The brute force got us very close to something we can make good use of, and we take that innovation and "put the frosting on". That produces a different kind of asset. It's GA-generated, for the most part, but human-tweaked to form a kind of functional hybrid.
Do you think it’s possible to construct a heuristic that can generate comparable output with higher efficiency?
Depends on what you mean by efficiency. There is a profound insight in the aphorism "nature is the most clever designer". Natural processes are unthinkably slow, and terrifyingly expensive in terms of resources consumed, but because they are not human, and not bound to human limitations of patience, persistence and creativity, they are demonstrably more efficient than human designs because they are immensely scalable.

If you intend "shortest route in time and resources to a workable solution", for many targets humans are more efficient, and by many orders of magnitude. Humans have a "forward looking" simulation capability that can accomplish not just a couple, but many integrated steps that are staggeringly difficult to arrive at in an incremental, stochastically-driven search. So humans are much more efficient in one class of solution finding. And they are absolutely pathetic compared to impersonal, mindless, incremental processes that don't care about anything at all, and thus will embarrass humans when it comes to brute force methods for solutions. For that class of solutions, humans are useless, and brute force, scalable methods (like evolution) are vastly more efficient in creating effective and durable designs.

This is one reason why ID strikes so many scientists as a conceit. Once you understand the tradeoffs, what impersonal, brute force search processes are really good at and what human schemes are good at, observed biology is decidedly a "brute force" product. As glorious as humans are, it's folly to think that kind of intelligence can compete with the mind-numbing scale of deep time, vast resources, and stochastic drivers that just... never... stop. If there is a feedback loop in place (which there is), humans are great at local, small, and highly creative short cuts, but are wannabes at macro-design, designs that adapt, endure, and thrive over eons. eigenstate
@ScottAndrews2#40.2.1.1.8,
That’s pretty much the point. By defining in advance what the increments are, the programmer is defining the space searched. If I program a Roomba vacuum to sweep adjacent flat surfaces, I determine that it will sweep the floor on this level. And it never surprises me. If I program a GA to plan a route, I know exactly what I’m going to get – a route. There are no surprises in store. The GA plans moves from one city to the next because those are the increments it has to work with.
That's precisely what GAs do NOT generate. One of the common learning examples in GAs is the "traveling salesman problem", and GAs are valuable for this kind of solution; TSP is notoriously difficult for humans to work out efficiently. So if you deploy a Roomba with a GA with back propagation that selects for efficiency, you will not be able to predict what it does; yes, it's moving across the floor, but that's no different than saying "evolution will obey the laws of physics". You cannot anticipate the "design" for the local maxima for traversal routes, and this difficulty becomes more acute (the human gets worse and the GA solution gets better in terms of comparative solutions) as the complexity of the room's geometry increases. If you are not surprised by what a GA-powered Roomba does, that's a strong clue you do not understand what is happening with GAs.
It’s understandable how a GA improves upon a design and solves problems in a defined space. But how does one use a process to design something when its vast creative power depends on its lack of a target?
A GA will ALWAYS have some kind of "target", although "target" is a problematic term, prone to cause conceptual stumbling. There is always a fitness function, which does not entail either a single discrete target of any kind, or that goals or optima are known beforehand or static. Which just means that when a GA produces a new iteration of candidates, there is a feedback loop. This is why it's called a "genetic algorithm", as that concept is lifted right out of observed biology. "The environment" is the target for living organisms, measured by survival and propagation in that environment. The environment is dynamic, constantly changing. But a GA, like evolution, does not need a design to improve upon. It can (and often does) begin with de novo trials, generated from a randomly selected starting point.

In biology, the environment, or more precisely the laws of physics and the energy/matter operating within it, is the "defined space". That is the box that the real biological "genetic algorithm" of evolution operates in. It is a "given" and it is highly constrained, just as the floor, wheels and navigating machinery are "givens" in a GA-powered Roomba, which operates in a highly constrained framework (the geometry of the room and its nav and sensor capabilities) in which it explores a search space.

People who struggle with this concept often object to "having a target", and that's an area where casual terminology really gets in the way. GAs, like evolution, do not depend on a preset or static target. They just depend on feedback that is used to determine which candidates in its trials are biased for preservation (and further mutation) and which are not (if any; sometimes the feedback is not sufficient to discriminate, and nothing changes until either random changes in the child population trigger a disposition one way or the other, or the environment changes, triggering a selection bias). eigenstate
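A minimal Python sketch of the feedback-loop structure described above may help; the OneMax fitness function, population size, and mutation rate are illustrative assumptions, not a model of any particular biological or commercial GA. The fitness function stands in for "the environment", and it is the only place a selection bias enters.

import random

def fitness(genome):
    # Stand-in for "the environment": here, simply count the 1-bits (OneMax).
    # Any scoring function could be swapped in; the GA machinery is unchanged.
    return sum(genome)

def mutate(genome, rate=0.02):
    # Copy with occasional random bit flips.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=50, length=64, generations=200):
    # Start from random genomes: the trials are de novo, not improvements of a given design.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # feedback: bias reproduction toward fitter genomes
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

print(fitness(evolve()), "of 64 bits set")

Swapping in a different fitness function changes what gets preserved, but not the machinery of trial, feedback, and biased re-sampling.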
gpuccio, By your own admission, dFSCI is useless for ruling out the evolution of a biological feature and inferring design. Earlier in the thread you stressed that dFSCI applies only to purely random processes:
As repeatedly said, I use dFSCI only to model the probabilities of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately.
But evolution is not a purely random process, as you yourself noted:
b) dFSCI, or CSI, shows me that it could not have come out as the result of pure RV. c) So, some people have proposed an explanation based on a mixed algorithm: RV + NS.
And since no one in the world claims that the eye, the ribosome, the flagellum, the blood clotting cascade, or the brain came about by "pure RV", dFSCI tells us nothing about whether these evolved or were designed. It answers a question that no one is stupid enough to ask. Yet elsewhere you claim that dFSCI is actually an indicator of design:
Indeed, I have shown two kinds of function for dFSCI: being an empirical marker of design, and helping to evaluate the structure function relationship of specific proteins.
That statement is wildly inconsistent with the other two. I feel exactly like eigenstate:
That’s frankly outrageous — dFSCI hardly even rises to the level of ‘prank’ if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out...
You have an obligation to make it clear in future discussions that dFSCI is utterly irrelevant to the "designed or evolved" question. In fact, since dFSCI is useless, why bring it up at all? The only function it seems to serve is as pseudo-scientific window dressing. champignon
B: Every post under that name (which is no coincidence) is already highly questionable mockery, as you full well know or should know. (FYI, just using that name puts you for cause at strike two on threads I host. Any significant misbehaviour on your part will lead to a direct request never to post on any thread I host. Indeed, already, I would prefer that if you wish to post on threads I host, you get another handle. So, you have been served your only and final notice.) As to your O/T, again, we see there something within an island of function. The issue -- in the face of the abundant evidence that multipart function dependent on alignment and arrangement [topology] is just that: tightly constrained in the field of all possibilities -- is to arrive at shorelines of function for novel body plans, not to move about a bit within such. You can start with how an enzyme or protein ends up in folds, and how we end up with an architecture of life based on such islands of function, energetically massively unfavourable molecules, assembled step by step in machines that follow algorithms that show them to be robotic work cells. KF kairosfocus
Petrushka: I have answered those points many times. The fact that you don't agree does not mean they have not been answered. To answer a point is possible, but to convince you is probably more difficult than to traverse an infinite search space! :) First of all, thank you for at least saying this: "Worst case scenario, it would be the change apparent from nearest cousin species. I notice gpuccio uses this better metric. I object to his assumptions, but at least he has avoided the common error of using entire sequences." It is a small admission, but coming from you it is certainly appreciated.

About your second point, I have expressed my position many times. In brief:

a) A design inference can and must be made, when appropriate, even if we know nothing of the designer, and even if we have no final theory of what design is and of how it works.
b) That is no excuse, however, for not trying our best to find reasonable answers to those points.
c) That the designer may be a god, even omniscient, is a possibility that cannot be ruled out a priori. It is not, however, the only possibility. What Behe, or I, or you believe is not pertinent.
d) That intelligent design by "non god" entities, like humans, is possible, is so obvious that I don't understand why you find so many problems with it. Again, my dear Wikipedia page about probability distributions is a good example. Hamlet is another. Windows 7 (!) is another. All complex mechanical machines are good examples. That humans can design, and produce objects exhibiting very high levels of dFSCI, is a fact (not a theory). That they can do so because they have conscious, meaningful, purposeful representations is a very good theory.
e) That "non god" conscious entities cannot traverse huge search spaces is therefore simply wrong. We may discuss how they do that. We may believe that they cannot traverse "any" possible search space. Those are all controversial problems, about which it is certainly legitimate to debate constructively.
f) Finally, it is obvious that one powerful tool (but certainly not the only tool) that human designers use is the targeted use of RV coupled to Intelligent Selection. That this type of intelligent search by random variation (what you call a GA) can accomplish much is perhaps one of the few things about which we agree. I certainly agree that it can often (but probably not always) find the function that it has already defined. I definitely disagree that it can find anything about which it has no definite, powerful enough, added information. gpuccio
Petrushka: "I realize that this is not chemistry and not biological evolution, but it is a mathematical demonstration of how variation and selection can incrementally increase utility without converging on a predetermined target."

It is a mathematical demonstration of how variation and intelligent selection can incrementally increase utility converging on a predetermined, intelligently defined, concept of utility. IOWs, that RV + Intelligent Selection can attain what IS has chosen to obtain. That is what all GAs are. Examples of design.

"It is not just avoiding consonants, it has a statistical model of the characteristics of English and several other languages, and can steer the population toward sequences that are more wordlike, without having a target word."

As everybody can see, it is Intelligent Selection. It does not have a target word. It has a target type of word. And it finds exactly the target that it has defined. It is not different, in essence, from the infamous Weasel (although certainly smarter, but that is not difficult).

"I don't follow you. What prevents biological evolution from inventing new things?"

It can invent new things. But not new complex functional things (things that have a new complex function). And the greatest, essential, insurmountable limit of neo-darwinian evolution is that it is based on NS, not IS. IOWs, there is no contribution of intelligent understanding (meaning) and of intelligent purpose. gpuccio
Petrushka, have you ever tried to convert your genetic algorithm into another type of algorithm/heuristic? Do you think it's possible to construct a heuristic that can generate comparable output with higher efficiency? material.infantacy
What, me enter a contest competing against all those whiz kids and mathletes? No, not anytime soon. For any good TSP solution attempt, I think it is decidedly less critical for the attempt to make use of a GA than of other approaches, such as approximation algorithms (greedy algorithms, minimum spanning trees, pairwise exchanges, etc.). Would you agree? I think we could generate good solutions, or near-optimal solutions, better (faster, more efficiently) with, for example, LKH (the Lin-Kernighan heuristic) and no GA than with a GA and no such heuristic. IOW, if a GA is not making use of an existing set of heuristics, you're back to O(n!) search times, AFAICT. On the other hand, if we implement LKH, or closest pair, or nearest neighbor, or minimum spanning trees, without a GA, we'll fare far better than the GA alone. material.infantacy
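For comparison, here is a short Python sketch of the simplest heuristic mentioned above, nearest neighbor (the random city coordinates are invented for illustration; this is not an LKH implementation). It builds a reasonable closed tour in roughly O(n^2) time with no GA at all.

import math, random  # math.dist requires Python 3.8+

def nearest_neighbor_tour(cities):
    """Greedy TSP heuristic: always hop to the closest unvisited city."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def tour_length(tour, cities):
    # Total length of the closed tour (returning to the starting city).
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]
tour = nearest_neighbor_tour(cities)
print(round(tour_length(tour, cities), 3))

In practice a tour like this is often used as the starting point that a 2-opt, LKH-style, or GA search then improves.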
GAs do what they do, and do it well. But you seem to have this wild idea that one day they will take on a life of their own and begin innovating beyond their programming.
I don't follow you. What prevents biological evolution from inventing new things? Say, for example, the inner ear bones? Petrushka
GAs cannot search any space. They can search any space that can be connected by incremental change.
That's pretty much the point. By defining in advance what the increments are, the programmer is defining the space searched. If I program a Roomba vacuum to sweep adjacent flat surfaces, I determine that it will sweep the floor on this level. And it never surprises me. If I program a GA to plan a route, I know exactly what I'm going to get - a route. There are no surprises in store. The GA plans moves from one city to the next because those are the increments it has to work with. GAs do what they do, and do it well. But you seem to have this wild idea that one day they will take on a life of their own and begin innovating beyond their programming. As far as I can tell you think this because you think it's what happened in biology. And yet you hope to use it to provide the evidence of what can happen in biology. You are trapped in your circle. By your own admission, nothing can release you except someone proving that it's impossible, which no one can. It's understandable how a GA improves upon a design and solves problems in a defined space. But how does one use a process to design something when its vast creative power depends on its lack of a target? If you want a specific function then you must set it as the target. But a GA or any other form of evolution must work on smaller increments of change. Otherwise you are hoping to "poof" function into existence. The only way to use evolution to reach a target is to do some of the work for it and start it closer to its target. And that is exactly what GAs do. Invent the salesman, the product, the road, and the transportation and the GA can swoop in at the end and make it efficient. But it's pure denial to see a salesman traveling efficiently between cities selling products and attribute any significant part of that functional activity to the GA. ScottAndrews2
@44.1.1.1.2 Kairosfocus, I entirely fail to see any mocking in my post. I also fail to see why, once there is a system of replication with variation, some random event process such as gene duplication followed by mutation can't result in a new function - thereby incrementing the repertoire of functions in that genome. I believe such things have been documented. Disagreement is not mockery, my friend. On a slightly different subject, you may be interested to read Proc Natl Acad Sci U S A. 2000 May 9; 97(10): 5095–5100, "DNA polymerase active site is highly mutable: Evolutionary consequences". Bydand
GAs cannot search any space. They can search any space that can be connected by incremental change. They are neither trivial nor tautological. I wrote this because someone a few years ago claimed that a GA could not generate ten letter words. My goal was to write one that could not only generate ten letter words, but do so without a specific target. It is not just avoiding consonants, it has a statistical model of the characteristics of English and several other languages, and can steer the population toward sequences that are more wordlike, without having a target word.

Welsh: QYZSHERHSE QYSSHERHSE GYSSHERMSE GYSSAERMSE GYSSAERWSE GYSSEERWSS GYSSEERUSS GYSSETRUSS GYSSEURUSS GYSSEURASS
French: QYZSHERHSE OEISHERHSE OEICHERHSE AEICHERHEE ALICHERPEE ALICHEREEE ALICHEREUE ALICHEREUR ALICHERAUR ALICHERASR
German: QYZSHERHSE YZSHERHSET YZSHERISET YZSHERIGET GISHERIGET CISHERIGET EISHERIGET RISHERIGET WISHERIGET SISHERIGET
Spanish: QYZSHERHSE QYZSTERHSE YZSTERASED ZSTERASEDM ZSTERASENT ZUTERALENT ZLTERALENT VLTERALENT VNTERALENT KNTERALENT

Given 50 or a hundred generations it can make ten letter words. But it can also invent words. It can produce sequences that make sense but are not in the dictionary. There are actually people who get paid by corporations for doing this. The goal is new tradenames. As for arranging words with proper grammar, that can be done (although not by me). As for useful thoughts: they are rare. Almost extinct. :) Petrushka
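A toy version of that kind of generator might look like the following Python sketch; the sample text, the bigram scoring, and all parameters are invented for illustration and are nothing like the actual model being described. The only "target" is a statistical preference for letter pairs that are common in the sample language, not any particular word.

import random
from collections import Counter

# Learn a crude letter-bigram model from a tiny sample of English text.
sample = "the quick brown fox jumps over the lazy dog and runs far away"
bigrams = Counter(sample[i:i + 2] for i in range(len(sample) - 1))

def wordlike_score(s):
    # Higher when the string's letter pairs are common in the sample language.
    return sum(bigrams.get(s[i:i + 2], 0) for i in range(len(s) - 1))

def mutate(s):
    # Replace one randomly chosen letter.
    i = random.randrange(len(s))
    return s[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + s[i + 1:]

random.seed(1)
pop = ["qyzsherhse"] * 20               # a common seed (lowercased), as in the runs above
for generation in range(50):
    pop.sort(key=wordlike_score, reverse=True)
    survivors = pop[:10]                # selection: keep the more "wordlike" half
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

print(max(pop, key=wordlike_score))     # more pronounceable, but not a preset word

Whether steering by such a statistical model counts as "natural" or "intelligent" selection is precisely what the surrounding exchange disputes; the sketch only shows the mechanism.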
Also, letters of the alphabet are by their very nature pronounceable. Even the most random string of letters could be pronounced if you really tried. Finding combinations that don't include three consecutive consonants is rather trivial compared to finding words with meaning, which in turn is trivial compared to arranging words with proper grammar, which is trivial compared to arranging grammatically correct sentences with actual meaning and accuracy, which is trivial compared to forming thoughts to express in words, which is still easier than forming new, useful thoughts. ScottAndrews2
is a mathematical demonstration of how variation and selection can incrementally increase utility without converging on a predetermined target.
Can you clarify what the selection is? Why would ZYZSHERASE get selected? I understand you are saying that the other words are selectable on the basis that they are almost pronounceable. But lowering the standard for selection to what your GA is able to meet doesn't show anything. Every GA ever written demonstrates that a GA is pretty much by definition capable of searching whatever space it was written to search. But what is the point in writing a GA that does nothing special at all and then setting the goalposts for success right within its demonstrated range? ScottAndrews2
gpuccio has made detailed arguments to demonstrate that no convincing explanation has ever been given of how feature X could have evolved.
Detailed, but not convincing. It still boils down to the fact that you have unilaterally decided that evolution is impossible. Among your unconvincing arguments is your assertion that because a sequence has no living cousins it must have had no parents. Your argument boils down to asserting that protein domains are virgin births. You simply don't understand how variation and selection can produce a child with no living cousins, but the possibility is easily demonstrated. I ran my word generator three times starting with the same seed. In only nine generations I have children that are mostly unrelated to the seed and to each other. They are beginning to look like pronounceable English words, but they are not converging on a target like Weasel.

QYZSHERHSE ZYZSHERASE ZYASHERASE ZYASPERASE YASPERASED YASPERATED YASMERATED YAOMERATED VOOMERATED
QYZSHERHSE QYLSHERISE YLSHERISEN PLSHERISER PLSTERISEL PYSTERISEL PYSTERISEW PASTERISEW PASTERISEM
QYZSHERHSE QYZSHERMAE QYESHERMAE QYESHERIAE QYESHERIZE QLESHERIZE QUESHERIZE MUESHERIZE MUESHERINE

I realize that this is not chemistry and not biological evolution, but it is a mathematical demonstration of how variation and selection can incrementally increase utility without converging on a predetermined target. Plus, it demonstrates that sequences can quickly diverge in many directions, and quickly depart from the ancestral sequence, and from each other. The problem with your argument is that you have decided a priori that this cannot happen in biology. You do not have the history of the sequences. You have no actual evidence that there is not an incremental path to modern sequences. It's just what you claim. Petrushka
That's where I have to stop myself, because I know I'm not really qualified to discuss ID in depth. Only to shoot off about evolution. But regarding this:
The folding game has been mentioned, and I admit it is possible for humans to beat early generation GAs at some things. But the GAs will be improved. It’s primarily a matter of tweaking the sources of variation.
The rising tide lifts all boats. Or however that goes. Not long ago the technology didn't exist to enable human designers to play with proteins like they were video games. You use the example of checkers. Chess is also a good example. Comparing them to the task of putting together functional proteins, it's noteworthy that inexperienced humans are beating the GAs even though they haven't had five or ten years or a lifetime to develop strategies. They don't have great opponents to compete against. Don't forget, it takes a really good chess player to beat the computer. But in this case the computer is getting beaten by beginners. Why bet that GAs will improve but that beginners won't get better at something? ScottAndrews2
I may be wrong, but I don't believe I intended to use the word censored. At least not in the official sense of the word. What I see every day is proclamations from the rank and file that ID does not discuss the identity or attributes of the designer. I get this when I ask for some conceptual framework for design, some demonstration that design is possible by a non-omniscient being. I have noticed that several regular posters (plus Behe) have said they believe the designer is God. If true, that certainly answers my questions. But that isn't the common position of posters here.

I see several major conceptual problems with ID. The first is that ID advocates calculate probabilities based on the length of entire coding sequences rather than on the length of mutational changes occurring from one generation to the next. Worst case scenario, it would be the change apparent from nearest cousin species. I notice gpuccio uses this better metric. I object to his assumptions, but at least he has avoided the common error of using entire sequences.

The second major conceptual problem is assuming that "intelligence" can somehow navigate a search space better than a genetic algorithm. This may be true for naive algorithms, but Koonin and Shapiro have highlighted the fact that evolution uses very sophisticated algorithms, with modes of variation that can leap across the valleys of function. These kinds of variation have been modeled in sophisticated industrial GAs. They work, and they are getting better. They can, for example, beat most expert human checker players after starting with no knowledge of the game other than the rules. And these are in their infancy. They will only get more sophisticated. The folding game has been mentioned, and I admit it is possible for humans to beat early generation GAs at some things. But the GAs will be improved. It's primarily a matter of tweaking the sources of variation.

But unless you have feedback regarding differential reproductive success, you have no ability to design or tweak living things. The difficulty in even knowing that life is possible is inherent in the skepticism about origin of life research. The fact that ID advocates are skeptical about the success of such research indicates that knowing how to assemble simple replicators (or even knowing it is possible) would require something beyond intelligence. It would require a level of intelligence that we attribute to deities. So it is my opinion that ID stands as a coherent idea if it admits (as Behe does) that the designer is God. If it does not admit this, and does not demonstrate, at least in principle, that design by finite beings is possible, then it is vacuous. I realize I have raised other questions about my position, but the post is already too long. Petrushka
Petrushka, Have you not just changed the subject, completely retreating from your clearly worded and then repeated assertion that the question you asked is somehow censored?
Actually it is essential to know the capabilities of the agent that you are claiming to have created and maintained life.
By this statement you indicate again that you lack even a basic comprehension of the concept you are attempting to argue against. It is transparent that regardless of any attempts to explain it to you, you will cast ID as what it is not, because your argument against is founded on your misunderstanding of it. Do you realize that it is possible to formulate arguments against an accurate understanding of ID? It's possible. Others have done it. It just takes a little more work.
You already know and admit this because you demand it of evolution.
Yes, I demand that evolution explain exactly what it claims to explain. Should I not? Why not? What does ID have to do with the answer to that question? Pointing the finger at ID doesn't make the question go away.
Since evolution is increasingly able to fill in details of how large changes occur and how new structures are invented
It follows that if it can fill in details then it can fill in a detail. The converse is true. If it cannot fill in a detail, it cannot be said to fill in "details." In light of your above statement, is it unreasonable to ask how evolution 'fills in a detail of how a large change occurs or how a new structure is invented?' You did use the word "detail," and probably wish you hadn't. ScottAndrews2
Petrushka, We know the capabilities of unknown designers by studying the design they left behind. And we know the capabilities of "evolution" by what we observe and test. Unfortunately there isn't anything that says accumulations of random mutations can construct anything of note. And that is why it is vacuous. Thanks fer playin'... Joe
Actually it is essential to know the capabilities of the agent that you are claiming to have created and maintained life. You already know and admit this because you demand it of evolution. Since evolution is increasingly able to fill in details of how large changes occur and how new structures are invented, it is silly to maintain a science fiction fantasy that aliens or whatever come to earth every million years or so to drop in a new protein domain. I note the change of tone in recent weeks regarding Koonin and Shapiro. Suddenly Dembski has noticed that they are not supporting intervention and not supporting foresight. They are describing how variation and selection can build complex new things. Until ID can describe a process that implements foresight and does not require cut and try, it is vacuous. Petrushka
Nearly every thread has someone asserting that ID says nothing about the attributes or behavior of the designer, and must not be asked to do so.
The first part of that sentence is mostly correct, with the exception of the "I" in ID. The second part - "must not be asked?" Or what? William Dembski will float through my bedroom window at night and haunt me with his arms stretched out like in his old Wikipedia picture? I think you have personally asked the question about a million times. Have your posts been deleted? You can ask as many times you want. But your own words quoted above indicate that you already know the answer. It's actually a very simple answer. Why you would want to ask a question over and over when the answer is very simple and you provide it yourself in the same sentence as the question is beyond me. Both the question and your wording of it suggest that you don't understand what you are asking about and are willfully determined not to, so why even ask? But let's put it to the test anyway and see if I get censored: Someone, please tell me what ID tells us about the attributes and behavior of the designer? (Assuming, as one should not, that ID refers specifically to the design of one thing and/or one specific designer.) ScottAndrews2
I think I’ll hold to the view that an increment is an increment, whatever its nature, type, or source.
The English word covers both meanings, so that's not inaccurate. In this case the word has a specific meaning in a certain context and a drastically different meaning in another. Even then I wouldn't split hairs, but when we're talking specifically about how biological evolution operates and someone compares the increments of that evolution to the addition of new functions in software the hair-splitting is called for. If someone asks "How does evolution make bird wings" someone else might correctly point out that evolution is about genetic changes, not specifically how entire new functions get created and added. That's why it's astounding that, just to make a point, someone arguing the other side would directly compare not the result of evolution but its actual mechanical process to the adding of entirely new functions. ScottAndrews2
Petrushka,
But the very word design is a metaphor or an analogy.
I read your posts with interest because I think they exhibit sound rationale and good thinking. However, honestly, I think this phrase shows a big weakness in your argumentation. IMO, you are locking yourself out of understanding the unique role of choice contingent causality in nature. A whole lot of reality cannot be adequately understood without it. Eugene S
Onlookers: I simply note that increments face a key threshold barrier, complexity in the context of specificity. If the "increments" in question are functionally specific per a reasonable objective warrant, and are beyond 500 - 1,000 bits, blind chance and mechanical necessity cannot credibly -- per empirical observation -- account for such. Design can. Most significant software or editorial changes to works of consequence pass that threshold. Similarly, OOL requires at least 100,000 bits of prescriptive info de novo in a code, and novel body plans 10 - 100 millions. If that is the "step size" of the "increments," that is tantamount to saying: materialist magic. The mocking nom de net is showing a sock puppet character this morning. KF kairosfocus
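As a purely arithmetical illustration of the threshold test being appealed to in the comment above (the sequence length, alphabet size, and the treatment of specificity as a simple 1/0 flag are assumptions made for the example, not measurements of any biological system), a Python sketch might look like this:

import math

def functional_bits(sequence_length, alphabet_size):
    # Information capacity of a sequence, in bits: log2(alphabet_size ** length).
    return sequence_length * math.log2(alphabet_size)

def past_threshold(bits, specific, threshold=500):
    # How far a functionally specific sequence sits beyond the threshold;
    # "specific" is treated as a 1/0 flag, so non-specific sequences never pass.
    s = 1 if specific else 0
    return bits * s - threshold

# Example: a 200-character string over a 128-symbol alphabet, judged functionally specific.
bits = functional_bits(200, 128)       # 200 * 7 = 1400 bits
print(past_threshold(bits, True))      # 900.0 -> beyond the 500-bit threshold

The numbers do nothing more than restate the threshold claim; whether the threshold warrants a design inference is the point under debate in this thread.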
P: Re: "I see no 747 aircraft in the biological world. I see no CPUs, no software." We see birds and bats that make the 747 pale in comparison, we see ribosomes that are NC machine factories executing digitally coded, prescriptive, algorithmic info strings, we see the mRNA and DNA tapes that store digitally coded data. Your response is amazing, utterly and inadvertently revealing! GEM of TKI kairosfocus
SA well said. P, nope, that old "analogies break down" red herring does not hack it in this context. SA is pointing to the key dis-analogy between what Darwin's mechanisms [as updated] would have to do to have probabilistically meaningful steps, and what we know algorithms and the code that implements them actually do. Please respond on-point. KF kairosfocus
Onlookers, watching the exchange between GP and Ch is enough to tell me that it will be all but impossible for Ch to acknowledge some fairly obvious things:

1 --> There are three well known causal patterns in the empirical world: chance, necessity and/or agency. Way back in 1970, in fact, Monod wrote a book that in the English version bore the title "Chance and Necessity" -- as in, design need not apply. This last echoes a discussion that goes back to Plato in The Laws Bk X, on the three causal factors.
2 --> Each of these has characteristic signs, and the three may be present in a situation, so per aspect we can tease out relevant factors. Necessity is marked by lawlike regularity tracing to blind mechanical forces.
3 --> A dropped heavy object near earth falls at 9.8 N/kg initial acceleration, reflecting gravity, a force of mechanical necessity. This is reliable and of low contingency. That's how we identified a natural law at work, and an underlying force of gravitation.
4 --> If the object happens to be a fair die, the outcome of falling has another aspect, high contingency: it tends to tumble and come to read a value from 1 - 6, in accord with the uncorrelated forces and trends at work, and hitting on the eight corners and twelve edges, so leading in effect to a chance outcome.
5 --> Anything that has that sort of high contingency, statistically distributed occurrence in accord with a random model, is similarly chance driven. In experiments, there is usually an irreducible scatter due to chance effects, which has to be filtered off to identify mechanical necessity acting. Already, two prongs of the ID explanatory filter are routinely in action in scientific contexts!
6 --> Now, if we look at how, say, a blocks/plots-treatments-controls experiment is done, we see that we have ANOVA at work, and we are looking to identify the effects of the ART-ificial intervention of manipulating a treatment in blocks and degrees. We want to detect the effects of designed inputs, chance scatter and underlying laws of necessity. Again, routine in science.
7 --> More generally, ART often leaves characteristic traces, such as functionally specific complex organisation and associated information. The text of posts in this thread is sufficient to show this -- dots are organised in ways that bear info, which is functionally specific and complex.
8 --> Routinely, we do not infer to chance causing the highly contingent outcome, but design. And, it has been shown over and over, that the resources of the solar system or the observed cosmos are inadequate to explain a specific, complex outcome on chance and necessity without design.
9 --> In short, as has been pointed out over and over and willfully ignored or brushed aside:
a: Origin of Life has to explain the rise of the language, codes, algorithms and machines involved in a von Neumann self replicator joined to a metabolising automaton
b: absent such, no root for the darwinian tree of life
c: in addition, this is the first body plan, and it shows, abundantly, how the resources of chance and necessity on the gamut of solar system or observed cosmos are hopelessly inadequate
d: best explanation for OOL, design, i.e. design is on the table here, and of course again at the level of a cosmos that is fine tuned for life
e: despite howls and objections, design is a credible candidate possible explanation and must be reckoned with in the context of explaining OOL and OO body plans etc, as such exhibit FSCO/I which is well known to be produced by design. And ONLY observed to be done by design, GAs being a case in point.
f: So, to arbitrarily impose the Lewontinian a priori materialism objection is to beg the question.
g: When we must explain more complex body plans, we face much higher levels of FSCO/I involved, i.e. we now face having to account for de novo origin of organ systems and life forms, where it is only the chance variation in CV + NS --> DWM, aka Evolution, that can be a source of information.
h: This is predictably objected to, but the point is that the selection part is in effect that some inferior varieties die off in competition with the superior ones, so INFO IS SUBTRACTED by NS.
i: but believe it or not, some still want to assert -- happens at UD all the time -- that subtraction equals addition. Own-goal. Oops.
j: So, we have to explain millions of bits worth of functionally specific complex info, on chance variation, in a context where until the millions of bits are in place, the systems are non-functional.
k: the bird lung is emblematic: a bellows lung works, and a one way flow lung with sacs works too, but intermediates do not work and are lethal in minutes.
l: So, on observed -- actually observed -- cases, how do we get from the one to the other, or how do they arise independently in the first place, by CV + DRS, i.e. differential reproductive success --> DWM (descent with mod, unlimited)?
m: Failing this, what cases of such observed macroevo giving rise to body plan components like this, do we have?
n: failing such, what do we have that empirically shows CV + complex functional selection [not by inspection for hitting an easy-reach target like three-letter words] of superior performance + replication with incremental change --> FSCO/I, where we are not playing around within an existing island of function?
o: NOTHING
p: In short, we have every reason to see that FSCO/I -- rhetorical objections notwithstanding -- is a good, empirically reliable sign of design
10 --> Here comes the "how dare you appeal to agency" rebuttal. ANSWERED.
11 --> So, now we have a good sign that reliably points to cases of design, and we have circumstances that point to non-human designers and non-dinosaur designers, etc. So, why not simply take that seriously?
12 --> All of this has been pointed out, explained and even demonstrated over and over again, but there is a clear roadblock. It is ideological, not scientific: a priori materialism.
13 --> So, we are back full circle to where Johnson was in 1997 when he pointed out what has gone wrong with origins science thusly:
For scientific materialists the materialism comes first; the science comes thereafter. [[Emphasis original] We might more accurately term them "materialists employing science." And if materialism is true, then some materialistic theory of evolution has to be true simply as a matter of logical deduction, regardless of the evidence. That theory will necessarily be at least roughly like neo-Darwinism, in that it will have to involve some combination of random changes and law-like processes capable of producing complicated organisms that (in Dawkins’ words) "give the appearance of having been designed for a purpose." . . . . The debate about creation and evolution is not deadlocked . . . Biblical literalism is not the issue. The issue is whether materialism and rationality are the same thing. Darwinism is based on an a priori commitment to materialism, not on a philosophically neutral assessment of the evidence. Separate the philosophy from the science, and the proud tower collapses. [[Emphasis added.] [[The Unraveling of Scientific Materialism, First Things, 77 (Nov. 1997), pp. 22 – 25.]
14 --> "the materialism comes first . . ." 15 --> So until the proud tower collapses in ignominy, we will keep on seeing the sort of arguments in a materialist circle that will not listen to evidence, that we see year after year here at UD. 16 --> But, in the meanwhile, let us insist: until you show the capability of darwinian mechanisms to achieve body plan origin level results, the whole is an ideological enterprise once we move beyond things as minor as finch beak sizes or the difference between North American Elk and European red deer. Given the interbreeding in New Zealand, are they still classified as separate species? GEM of TKI kairosfocus
champignon: Maybe my assumption is right. Maybe my readers don't understand the difference between: "1. gpuccio has decided that feature X could not have evolved." and: "1. gpuccio has made detailed arguments to demonstrate that no convincing explanation has ever been given of how feature X could have evolved. He has many times pointed to those arguments and to those posts. And some specific reader of mine has never commented on that. Moreover, if that specific reader about whom I am, according to some, making assumptions (maybe justified), were kind enough to read what I have written, he would probably (but sometimes I am too optimistic) understand that an explicit calculation of the probabilities of a random event in relation to the probabilistic resources of the system that is supposed to have generated it is the basis to scientifically analyze any model implying the random generation of that event; and that the explicit proposal of adding some explicit necessity mechanism, like NS, to the model can be quantitatively integrated in the model, and still allow quantitative evaluations of the global probabilities of the final event, as gpuccio has shown in those long, and evidently not read by some, posts. So, you go on with your statements, I will go on with my assumptions. We live in a free world (more or less). gpuccio
Ch: I have already taken your argument apart step by step, pointing out the strawmen. KF kairosfocus
34.1.3.1.11 Petrushka You wanted examples. Today the Western culture is becoming increasingly homosexual. We are all drifting towards hell. Instead of tolerance to this phenomenon (for want of a better word), schools and universities should educate people that this is psychiatric disorder rather than the norm. Take a medical dictionary that was printed say in the 1950-s and compare (I may be wrong as regards the dates because I don't know maybe the situation in the US was already bad then, so I guess you want a foreign reference). I hope you will see there quite adequate descriptions of the case as perversion, a medical case. I ask you, is it possible to openly discuss this issue in class in any US university today without fear of the consequences (at least in the form of a public disclaimer in boldface on the door to your university room)? Eugene S
Petrushka: As you should know, I have never said that. Just the opposite. It is perfectly correct, however, that it is not necessary to know anything about those things to make a design inference. gpuccio
You make it sound like some sort of weird hooded order that meets in catacombs. Are you sure you’ve read the FAQ?
It's not hidden. It's out in the open. Nearly every thread has someone asserting that ID says nothing about the attributes or behavior of the designer, and must not be asked to do so. Petrushka
Thank you. I think I'll hold to the view that an increment is an increment, whatever its nature, type, or source. If a new gene is added to a genome, the complement of genes has undergone an increment. If a new function arises, the repertoire of functions has been incremented. And so on. Of course, incremental change is decidedly not the ONLY thing going on, and it doesn't define evolution. But, it seems to me, it can be an important component of evolution. Bydand
Ok. It was asserted that evolution, a process of selected incremental changes, can evolve protein sequences, and in that context it was mentioned that changes to software are also incremental. In this context it cannot be mistaken that incremental changes to one are being compared to incremental changes to the other.

Evolution is a change in frequencies of alleles in the gene pool of a population. One could say that in practice it is genetic variation and selection, but if I say that someone will correct me with this definition. But it doesn't matter. Although evolution is commonly cited to explain why elephants have trunks or why spiders make webs, evolution is (supposedly) not about explaining such functions. It is about the propagation of specific genes, which, supposedly, maybe add up over time to those functions. Genes or alleles are the primary increments of evolution. (Other factors, such as environment, may play a role. People grow taller with better nutrition, but elephants don't grow trunks because of better nutrition.) If giraffes descended from tapir-like creatures then the differences between them are the result of a number of incremental genetic changes. How or why they add up that way is an open question, or at least that's what people say when you point out that selection and variation are insufficient. But that's beside the point. Genetic changes are the increment.

Any noticeable "incremental" change in software such as a new feature or even a bug fix consists at the very least of the addition of complete complex instructions, and usually more elaborate functions. It may be that the occasional bug fix requires fixing a one-keystroke typo, but one does not develop software by changing single characters. Forget randomness. Even if you know exactly what you're doing you can't write or enhance software by taking existing software and replacing, deleting, or adding a single character.

In a nutshell: The incremental changes of biological evolution are genes, not functions. The incremental changes of software are complete instructions and new functions. Ironically, if I say that biological evolution is about the appearance of new functions, I am certain to be corrected. I have said it, and I've been corrected. So if someone arguing for the power of biological evolution to discover new function equates or even compares the incremental changes of such evolution to the appearance of entirely new functions in software, either:

- they think that new functions are the increments of change in biological evolution, which means that they don't understand it well enough to argue for it;
- they think that software developers write new software by poking at individual bytes - we don't even write plain text that way, so I find that hard to believe;
- or they are begging the question in the most egregious manner, using the assertion that biological evolution produces new functions like software development as evidence of itself. ScottAndrews2
gpuccio, You are assuming a remarkable stupidity on the part of your readers. Your argument boils down to this:
1. gpuccio has decided that feature X could not have evolved.
2. The probability that the predefined function of X could have been found by a blind search is very low [as everyone, including 'Darwinists', has always agreed].
3. Therefore, X could not have evolved.
In case the weakness of that argument escapes you, let me elaborate a bit. How to use dFSCI to determine whether feature X could have evolved, according to gpuccio:
1. Compute the 'quantitative functional complexity' of X, otherwise known as the negative log base 2 of the probability of finding the target zone of X using a blind search. Nobody thinks that blind search is responsible for X, but do the computation anyway. If X is so incredibly simple that it actually could have been found by a blind search, the QFC will be low, so drop the claim that it was designed.
2. Look at X and decide that it could not have evolved.
3. Redesignate the QFC as 'dFSCI' and conclude that X did not evolve.
In other words, if you think that X didn't evolve, and if X is not so simple that even a blind search could have found it, then conclude that X didn't evolve. The actual QFC number means nothing, unless you were stupid enough to claim that an extremely simple feature must have been designed. Then, and only then, the number would tell you to drop your claim of design, as if you weren't smart enough to figure that out without the calculation. So a low QFC number can cause you to drop your claim of design, but a high QFC number can't tell you that something was designed unless you have already concluded that it could not have evolved. The number itself means absolutely nothing in that case. champignon
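For anyone who wants to see what that 'quantitative functional complexity' computation amounts to in practice, here is a minimal sketch (Python; the target-zone and search-space sizes below are invented placeholders, not anyone's published estimates). The QFC in bits is just the negative log base 2 of the blind-search probability described above:

```python
import math

def qfc_bits(target_zone, search_space):
    """Quantitative functional complexity: -log2(P of a blind hit)."""
    p_blind_hit = target_zone / search_space
    return -math.log2(p_blind_hit)

# Illustrative (made-up) numbers: a 150-residue protein space of 20^150
# sequences with a hypothetical functional target zone of 10^30 sequences.
search_space = 20.0 ** 150
target_zone = 1e30
print(round(qfc_bits(target_zone, search_space), 1))  # about 548.6 bits
```

The arithmetic itself is not in dispute; what it does or does not license is what the rest of the exchange argues about.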
I have the same problem with design that designers have with evolution. How did it start? Who was the first designer who foresaw the possibility of life?
Fair enough. How does that advance evolutionary theory?
I can understand why this question is forbidden in the ID movement. But it strikes me as a kind of censorship.
You make it sound like some sort of weird hooded order that meets in catacombs. Are you sure you've read the FAQ? That's like saying that discussion of the effects of weightlessness on humans is forbidden in astronomy. It's not forbidden. It's not even completely irrelevant. But if you persistently assert that it's an astronomical question then people will rightfully ask if you know what astronomy is. ScottAndrews2
then please enlighten me - I seem to be missing the thrust of your argument. Bydand
Actually, drug companies do use directed evolution to find new molecules. Computers might be cheaper, but they are going to use GAs, even if some humans have a temporary advantage. Humans used to beat computers at chess and checkers. I'm confused by your interpretation of "utility." I'm not thinking of simple targets like protein design. I'm thinking of differential reproductive success, most of which is determined by variations in regulatory networks. But it's also possible that the most beautifully structured protein may not be the most useful. It depends on context, and for living things, the context is reproductive success. I have the same problem with design that designers have with evolution. How did it start? Who was the first designer who foresaw the possibility of life? I can understand why this question is forbidden in the ID movement. But it strikes me as a kind of censorship. Petrushka
They are blind, but they grope the nearby space efficiently.
No one questions the gropability of nearby space. It's been groped before. But your own repeated question - how do we know that the spaces aren't connected - indicates that you don't know that they are. That's the whole question. 'Outlining the concept of an algorithm' just doesn't mean anything. People do that all the time where I work. Then they jump to another project and leave someone else with the concept they outlined. ScottAndrews2
champignon:

"dFSCI, as you compute it, considers only blind search. It does not consider evolution. There’s a very easy way to see this: come up with a formula for the probability of hitting a target by blind search. Express it in bits by taking the negative log base 2. What do you get? Exactly the same formula you presented for computing dFSCI. By considering only blind search, you are assuming that the probability of evolution is zero. But that’s the very question we’re trying to answer!"

I can only restate what I have said, and you seem not to understand: "Point 1 is wrong, because to affirm dFSCI we must have considered all known necessity explanations, and found them lacking." I have considered all known necessity explanations and found them lacking. If you had the patience to read my posts in the other thread, we could maybe discuss, and not only waste our time. I understand it is a deep concept for you, but what I am saying is that the computation of dFSCI must be accompanied by an analysis of known necessity explanations. If those explanations do not exist, or are found to be wrong, then the random origin remains the only alternative explanation, and it can be falsified by dFSCI. But perhaps I am asking too much from your understanding: such complex concepts...

"But then the “quantitative functional complexity” part doesn’t do anything. All the work is done by the purported “empirical falsification of proposed necessity mechanisms”. The dFSCI number is irrelevant."

Have you lost your mind? The falsification of necessity mechanisms rules out necessity mechanisms. dFSCI falsifies the random origin explanation. Again, what is wrong with you? How many times must I say trivial things? I don't pretend that you agree, but why not understand what I am saying?

"But the only thing you are quantifying is the probability of hitting a predefined target using blind search. Evolution is not a blind search, and it does not seek a predefined target."

What I am quantifying is the probability that what happened happened in a random way. Again, it is not difficult. The neo-darwinian algorithm relies on RV to generate new information. If you want to call it a blind search, be my guest. The problem of the predefined target is simply nonsense. I have answered it two or three times in the last few days, referring you to the previous answers, and you still repeat it like a mantra. If you want to waste time, it's your choice. I am here to discuss with people who are able to discuss, and want to do it.

"Nobody in the world thinks that the ribosome or the eye are the products of “simple RV”, without selection. You are answering a question that nobody is asking. dFSCI changes nothing."

I am referring to those results for which there is no necessity explanation, like the ribosome, the eye, and basic protein domains. So we know they could not originate by simple RV, and that there is no credible necessity explanation for them. Therefore we infer design. Can you see how useful dFSCI is?

And here is post 40.2 without the typos. I hope you are happier now: "champignon: dFSCI is a reliable indicator of design. What you seem to forget is that affirming that an object exhibits dFSCI, and therefore allows a design inference, implies, as clearly stated in my definition, that no known algorithm exists that can explain the observed function, not even coupled to reasonable random events.
That’s why evaluating dFSCI and making the design inference is more complex, and more complete, than simply calculating the target space/search space ratio. It also includes a detailed analysis of any explicitly proposed necessity explanation of what we observe. Therefore, if correctly done, the evaluation of dFSCI allows the design inference, and answers your objections, because affirming dFSCI amounts to saying: we know of no credible way the observed function could have evolved. As already said, I have analyzed in detail the credibility of the neo-darwinian algorithm, including its necessity component, and found it completely lacking. Therefore, my belief that protein domains exhibit true dFSCI, and allow a design inference, is well supported." gpuccio
I realize that my tone is switching to cranky, so I'm going to have to drop this soon.
If modern living things have efficient methods for locating function (as Shapiro asserts) it is because search processes have been refined over billions of years.
So you say, begging the question. Wait - what? You didn't know that 'modern living things [us?] have efficient methods for locating function' until Shapiro said so? Apparently there is no bottom to this.
The most efficient way to design biological molecules will always be found in chemistry itself.
Okay, so why do they have a team of gamers working on protein inhibitors for the Spanish flu? Why don't they just use chemistry instead? Excuse me, chemistry, may we please have some protein inhibitors for the Spanish flu? Thank you.
The folding game doesn’t even address the most important design problem, that of utility. And it doesn’t address the most common and powerful kind of evolution, that of regulation.
The utilities they are targeting are relatively simple. But you are setting aside the massive point that they are accomplishing what GAs could not. This is your repeated assertion that evolution is superior to intelligence (except that evolution is intelligence - whatever, it's everything) put to the test, and intelligence is winning. This is a real-life demonstration of the opposite of what you keep saying. Why do I have no doubt that you'll keep saying it anyway? ScottAndrews2
Just to make myself clear, what causes you to doubt that evolution has found efficient search algorithms? Shapiro has outlined his conception of such algorithms. They involve chemistry, not magic. They do not have foresight. They simply employ observable kinds of mutations and genomic change that maximize the potential for finding useful stuff. They are blind, but they grope the nearby space efficiently. Which is why the ID community seems to have awakened to the fact that Shapiro may not be the ally they thought he was. Petrushka
When we present the problem of designing a protein as a cipher it follows that our brains do not process the problem particularly well. Few or none of us are wired to process numbers that way. But just because a problem poses the same complexity as a cipher does not mean that it must be processed as such.
Sure there is a spin. You assign magical attributes to a never observed entity, and biologists assign attributes to evolution. When the behavior of living things is studied in sufficient detail, the attributes are found. As with Lenski and Thornton. If modern living things have efficient methods for locating function (as Shapiro asserts) it is because search processes have been refined over billions of years. As for the folding game, it still takes enormous amounts of time to do something chemistry does in the blink of an eye. The most efficient way to design biological molecules will always be found in chemistry itself. The folding game doesn't even address the most important design problem, that of utility. And it doesn't address the most common and powerful kind of evolution, that of regulation. Petrushka
I’m aware that analogies have limits.
No one is saying that you are not. You are saying that comparing the increments of evolution to new features in software is within those limits. It's your understanding of the things you are comparing, as revealed by the fact that you compare them, that I'm questioning, not whether you understand analogies.
So why not give up the design analogies, and critique evolution based on whether it posits any chemistry that cannot happen?
The comparisons between certain features within biology and those in human-designed systems are not analogies. Let's say I agree with you. Here goes: Evolution does not posit any chemistry that is proven impossible. And I'm 75% sure I actually mean it, even if I'm saying it for the sake of argument. If everyone agreed with that statement, how would that scientifically advance evolution? Would it not join a very, very long list of things - even contradictory things - that have not been proven impossible? ScottAndrews2
I'm aware that analogies have limits. Are you aware of that? Why is the debate over ID littered with references to human-made designs or to human verbal abilities if analogies are forbidden? If ID advocates are willing to forgo all analogies and stick entirely to what chemistry can and cannot do, I'll go that route. But the very word design is a metaphor or an analogy. I see no 747 aircraft in the biological world. I see no CPUs, no software. I see chemistry. So why not give up the design analogies, and critique evolution based on whether it posits any chemistry that cannot happen? Petrushka
Surely “incremental change” means change by addition or accretion. It carries no baggage that I can see of the size or type of each addition.
Then I'm afraid you don't understand biological evolution either. I'm not trying to paint myself as an expert. I'm not even a biologist. But I do know that even the vaguest definitions of evolution, entirely separated from mechanics, describe exactly what the increments of change are. ScottAndrews2
Actually, huge search spaces can be navigated, and usually more efficiently, by other algorithms and heuristics. I would venture to predict that any ordered search space will be shown to be navigable by algorithmic methods at higher efficiency than a genetic algorithm.
Well, there are contests for solving the traveling salesman problem with 10,000 stops. Feel free to enter, or feel free to locate an instance where the problem has been solved by other algorithms. This isn't rhetorical; I'm actually interested in what you might dig up. While you are at it you might tackle the problem of designing the traces for computer motherboards, or regulating power grids in real time. There's a dozen or so other industrial processes currently suffering under the yoke of Darwinism. What's interesting is your claim of foresight in the case of biological design. It really makes you wonder why most species are extinct. Petrushka
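As a point of reference for that exchange, here is a toy sketch (Python; a random 200-city instance, not one of the contest problems mentioned above). It only illustrates what "navigating" a factorially large search space non-randomly means; it does not settle which family of heuristics, genetic or otherwise, does it best:

```python
import math
import random

def tour_length(cities, order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour_tour(cities):
    """Greedy heuristic: always hop to the closest unvisited city."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(cities[tour[-1]], cities[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(200)]

greedy = tour_length(cities, nearest_neighbour_tour(cities))
blind = min(tour_length(cities, random.sample(range(200), 200)) for _ in range(1000))
print(f"greedy heuristic: {greedy:.1f}   best of 1000 random tours: {blind:.1f}")
```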
Regarding intelligent searching of large spaces - it made me think of how many times I've seen a child solve a Rubik's cube, which has 43x10^18 possible combinations. Not the same thing as folding a protein, but it nonetheless demonstrates that intelligence does not depend on random searches to solve even vast problems. I did some more digging and found a number of articles such as this. The problem was presented as a game. I'm not heralding this as the end-all-be-all, but it sure is interesting.
The way proteins fold depends on thermodynamic rules that are very time-consuming to calculate out by brute force, because there are many ways to fold them but only one configuration that’s correct. But by coding these basic rules into a game, and presenting the proteins as Rubik’s-Cube-like objects to fiddle with, crowdsourced players can find correct solutions faster merely by using their intuitions. The key to this success isn’t just scale--although with players generating nearly 200,000 enzyme designs for the researchers to test, that certainly helped. The real key is the gaming interface itself, which encourages players to try out designs that would be impractical in nature or too expensive in the lab. By manipulating the intuitive, cartoon-like shapes on their screens without a need to mind or even understand the "reality" of what they represent, players "can explore things that look crazy," another researcher told Nature News. And like innovation in any other space, the crazy stuff is often what breaks through to make progress on previously intractable problems.
I should have guessed - there's already been a thread on this site. The comments only stuck to the game itself for a short time. The responses amounted to question-begging along the lines of, 'Yes, intelligence can do that, but evolution is intelligent, too.' This is that very assertion put to the test, and it did not fare well. When we present the problem of designing a protein as a cipher it follows that our brains do not process the problem particularly well. Few or none of us are wired to process numbers that way. But just because a problem poses the same complexity as a cipher does not mean that it must be processed as such. Recasting a problem as a different one of similar complexity, one that requires computational skills we do not possess rather than those we do, distorts the comparison between what intelligence and evolution can accomplish. Researchers can accomplish by harnessing intelligence what they could not by simulating evolution. It's demonstrated. And, unlike all the other GA stories we've heard, it even pertains directly to biology. Is there a spin for this? ScottAndrews2
Double secret proposals, I have to assume. I'm familiar with Behe's proposal. Others I have seen also involve non-material agents having magical powers. I have no problem if that is the claim, except that it is a vacuous claim. It simply assumes the existence of something that has never been observed in action, which has no entailments, no boundaries, no limitations. I think science will stick to the drudgery of trying to find evolutionary explanations. Petrushka
Then you are mocking your own ignorance, champignon. Ya see, extraordinary claims require the details. And seeing that no one has ever observed CSI arising via necessity and chance, it is up to the person making the claim that it can to demonstrate it, i.e. provide those details on how it could happen. ------------------------------------------------------------ Note to Petrushka- ID advocates are interested in the "how" and proposals have been made. Joe
Surely "incremental change" means change by addition or accretion. It carries no baggage that I can see of the size or type of each addition. I can't see that Petrushka gave it that baggage. Bydand
Petrushka, I'm going to be a bit blunt.
I notice that commercial software also seems to change incrementally.
This statement alone disqualifies you from having even the slightest idea what you are talking about. If this is your understanding of incremental change, or if your understanding of incremental change can even include both it and incremental change within biology, then the simple, fundamental concept of incremental change eludes you.

Forget about randomness for now. The idea of changing content in its smallest units is mostly irrelevant when developing software. Software does not begin with a byte. The increments of software development are fully-formed instructions and new functions. The idea of changing content in increments is absolutely central to any proposed evolutionary mechanism. If you see any commonality between varying a gene and adding a function to software (any worth mentioning in this context) then it would seem impossible for you to understand the proposed mechanics of evolution enough to argue for them or against them.

Your arguments are pervaded with this fundamental miscomprehension of what incremental change is. This lack of understanding is what enables you to see incremental change and evolution at work in everything, everywhere. Your repeated assertions that 'only evolution is known to do this' or 'evolution has the power to do that' are one and all rendered meaningless.

Until now I thought you were stretching the meaning of the word "evolution," and I wondered why you bothered since everyone can tell the difference. Now I realize that you actually do not see the difference. The word "evolution" has no specific meaning to you other than "change." Evolution is a tiny subset of the enormously broad concept of change. It is not a synonym. Your understanding of the word is not the same as what the mechanisms of evolution describe and propose. It follows that it is not possible for you to understand those mechanisms. You highlight your lack of understanding by comparing those mechanisms to anything and everything that changes, including computer software.

Correct me if I'm wrong. Tell me how it is possible to understand the proposed mechanics of evolution or any of the evidence for or against them without knowing what 'incremental change' is. ScottAndrews2
I was mocking Dembski's statement. Somehow 'Darwinists' are expected to supply detailed scenarios, while *Poof!* is sufficient for ID. champignon
I get the point, but I'm not asking for detail. I'm asking for a conceptual framework that is not evolution, but which would allow finding coding sequences. I'm told they are isolated, and if that's true they are as hard to find as cipher keys of equivalent length. ID purports to be based on analogy to human intelligence. The business model of the Internet assumes that human intelligence cannot break cipher keys beyond a certain length. Military and diplomatic communication also makes this assumption. Now it is possible that there are selectable substrings that haven't been discovered. Or some other back door. But ID advocates show no interest at all in discovering how a designer might work. I've designed several original products, and I know my process is derivative, iterative and incremental. I notice that commercial software also seems to change incrementally. Petrushka
Petrushka:
It’s not a double standard to ask ID to provide at least an hypothesis as to how a designer would navigate the functional landscape.
William A. Dembski:
ID is not a mechanistic theory, and it’s not ID’s task to match your pathetic level of detail in telling mechanistic stories.
champignon
gpuccio,
...to affirm dFSCI we must have considered all known necessity explanations, and found them lacking.
dFSCI, as you compute it, considers only blind search. It does not consider evolution. There's a very easy way to see this: come up with a formula for the probability of hitting a target by blind search. Express it in bits by taking the negative log base 2. What do you get? Exactly the same formula you presented for computing dFSCI. By considering only blind search, you are assuming that the probability of evolution is zero. But that's the very question we're trying to answer!
...the computation of the quantitative functional complexity means that, but it must be supported by an empirical falsification of proposed necessity mechanisms.
But then the "quantitative functional complexity" part doesn't do anything. All the work is done by the purported "empirical falsification of proposed necessity mechanisms". The dFSCI number is irrelevant.
Before the introduction of CSI and dFSCI, nobody in the darwinist field had really cared to quantify the probabilistic resources needed to get to a specific functional result.
But the only thing you are quantifying is the probability of hitting a predefined target using blind search. Evolution is not a blind search, and it does not seek a predefined target.
After the introduction of dFSCI, the question is: we are sure that this result cannot be explained by simple RV.
Nobody in the world thinks that the ribosome or the eye are the products of "simple RV", without selection. You are answering a question that nobody is asking. dFSCI changes nothing. champignon
It's not a double standard. Evolution must demonstrate that it has the toolkit to produce the kinds of change observed in cousin lineages. That's why tens of thousands of biologists have labored for a century and a half to document the processes. The simple fact is that your "something" is observable in case after case and can be studied in controlled experiments. It's not a double standard to ask ID to provide at least an hypothesis as to how a designer would navigate the functional landscape. Petrushka
gpuccio,
dFSCI is a realiable indicator of dFSCI.
Well, that's one thing we can agree on. Does it do anything else? champignon
Of course the trick is ruling out naturalistic explanations
Observe this double standard. When seeking evidence that these "natural explanations" even exist to be ruled out, one is chastised for unreasonable expectations. What do we want, a history of genetic changes over millions of years? A videotape? Ok, fine. But if such detail is unreasonable when demonstrating the positive, then how is it possible to demonstrate the converse, that such things did not happen? The naturalistic explanation, in its present form, boils down to, 'Something happened. That something likely involved some variations, and likely something was selected for some reason. Over and over and over. And likely some other stuff happened, too, details TBD.' How does one "rule out" such fluff? How does this nonsense not fall outside of science? It's a double-edged sword. You exercise special pleading to excuse the theory from providing any specifics, whether the mechanisms or the operation of those mechanisms. And then, having proposed essentially nothing, you insist that it must be ruled out. Such flimsy reasoning offers no substantial reason why it can't be completely reversed. Why not assume that what appears to be designed is, despite not knowing who, how, or why, and only resort to an undefined hodge-podge of vague things that may or may not have happened when the better option is ruled out? I'm not even proposing that. But you give no reason why one is better than or different from the other. Please don't argue natural vs. supernatural. I have referenced nothing supernatural. Please don't argue observed vs. unobserved. Nothing is observed. Everything is extrapolated or inferred. ScottAndrews2
Of course the trick is ruling out naturalistic explanations. You can't use probability until after you have ruled out evolution. The Voynich manuscript is a fine example of how difficult it can be to determine the history of a sequence. It could, for example, be the result of recording coin tosses or some equivalent. The various incarnations of CSI tell nothing about the history of a sequence. If they did, they would be able to detect a partial sequence, one that would be functional with a few changes or additions or subtractions. Petrushka
Petrushka: "Shorter RNA chains were able to replicate faster, so the RNA became shorter and shorter as selection favored speed. After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. Such a short RNA had been able to replicate very quickly in these unnatural circumstances." Is that your idea of evolution and function? gpuccio
That appears to be a difference between us. I get your points and modify my responses accordingly. I haven't seen you demonstrate any understanding of your critics. When Steve Jobs said simple is hard, was he implying loss of function? What is it that makes reproductive success such a difficult concept? Why do you persist in defining function in ways that are orthogonal to differential reproductive success? Petrushka
Petrushka, Common sense to me, in its pristine sense, is this: whenever you see an appearance of design, suppose that it might not be just an appearance. What is nonsensical about that? Then comes the question of whether we can objectively test it and how. Prigogine was not the only Nobel prize winner. There are others. And I believe there may be very good scientists who don't get any prizes at all but whose work is prominent enough to direct and shape future scientific enquiry. Prigogine's work has been seriously criticised. What is IMO missing in all self-organisation research is empiricism. It just does not happen like that. Control does not emerge from chaos. As soon as processes in a system are coordinated in any way to achieve a goal (be it homeostasis or adaptation, metabolism, replication, reaction to stimuli or anything else) it already points to choice contingency, simply because nature does not care. This, as far as I know, has not been addressed in earnest by self-organisation type theories. If I am wrong please correct me. I am also curious as to why you single out GAs. IMO they are just as good or bad as any other algorithms on average. To claim that you can get a shortcut in a vast config space for free is IMO equivalent in some sense to claiming that P=NP. While this is a big question as yet unanswered, I think that it is not wise to call nonsense anything that questions the spontaneous emergence of cybernetic control. Eugene S
Joe: I think you are perfectly right. Thank you for the clarification gpuccio
Petrushka: What are you trying to demonstrate? From Wikipedia: "Spiegelman introduced RNA from a simple bacteriophage Qβ into a solution which contained the RNA replication enzyme RNA replicase (Q-Beta Replicase) from the Qβ virus, some free nucleotides and some salts. In this environment, the RNA started to replicate.[1] After a while, Spiegelman took some RNA and moved it to another tube with fresh solution. This process was repeated.[2] Shorter RNA chains were able to replicate faster, so the RNA became shorter and shorter as selection favored speed. After 74 generations, the original strand with 4,500 nucleotide bases ended up as a dwarf genome with only 218 bases. Such a short RNA had been able to replicate very quickly in these unnatural circumstances. In 1997, Eigen and Oehlenschlager showed that the Spiegelman monster eventually becomes even shorter, containing only 48 or 54 nucleotides, which are simply the binding sites for the reproducing enzyme RNA replicase.[3] M. Sumper and R. Luce of Eigen's laboratory demonstrated that a mixture containing no RNA at all but only RNA bases and Q-Beta Replicase can, under the right conditions, spontaneously generate self-replicating RNA which evolves into a form similar to Spiegelman's Monster.[4]" I would say it is a very good example of involution, of loss of information. That's what the laws of chemistry can do (of course, with the help of the functional information in Q-Beta Replicase, an enzyme of "only" 589 AAs). Sometimes I really don't understand your points... gpuccio
champignon: For the nth time, you are wrong.

Point 1 is wrong, because to affirm dFSCI we must have considered all known necessity explanations, and found them lacking.

Point 2 is wrong: the computation of the quantitative functional complexity means that, but it must be supported by an empirical falsification of proposed necessity mechanisms. This has always been explicit, both in Dembski's explanatory filter and in my definition of dFSCI.

I have already answered point 3 (in my posts 23.1.2.1.1 and 36). I am not aware of any comment from you about those answers.

Point 4 is wrong. Before the introduction of CSI and dFSCI, nobody in the darwinist field had really cared to quantify the probabilistic resources needed to get to a specific functional result. Even now, and even with all the pressure created by ID in that sense, most darwinists try to bypass the problem, or simply to believe that it does not exist. Therefore, the neo-darwinian algorithm has been for a long time an explanation based vastly on RV, without any attempt to quantify whether RV could do what it was supposed to do. dFSCI is a quantitative tool to do that. Your attitude, and your repeated, unsupported attempts at denigrating it, are the best evidence of the antiscientific, irrational attitude of darwinists about a problem that evidently disturbs them very much.

Point 5 is wrong. After the introduction of dFSCI, the question is: we are sure that this result cannot be explained by simple RV. Can we offer any credible detailed explanation of how it occurred?

In the light of the above, point 6 is obviously wrong. gpuccio
champignon: dFSCI is a realiable indicator of dFSCI. What you seem ro forget is that affirming that an object wexhibits dFSCI, and therefore allows a design inference, implies, as clearly stated in my definition, that no know algorithm exists that can explain the observed function, not even coupled to reasonable random events. That's why evaluating dFSCI and making the design inference is more complex, and complete, than simply calculating the target space search space ration. It includes also a detailed analysis of any explicitluy proposed necessity explanation of what we observe. Therefore, if correctly done, the evalòuation of dFSCI allows the design inference, and answers your objections, because affirming dFSCI equals to say: we known no credible way the observed function could have evolved. As already said, I have analyzed in detail the credibility of the neo darwinian algorithm, including its necessity component, and found it completely lacking. Therefore, my belief that protein domains exhibit true dFSCI, and allow a design inference, is well supported. gpuccio
KF,
Pardon, but you are simply setting up and knocking over more strawmen.
If so, then you should be able to 1) identify specific statements of mine that are wrong, and 2) explain precisely why they are wrong. Let's try that with my comment at 40.1.1. Which of the numbered sentences do you think are false? Justify your answer.
1. As I have already explained, “X has high dFSCI” does not mean “X could not have evolved”.
2. All that “X has high dFSCI” means is that “the predefined function of X could not be found in a reasonable time by a completely random blind search.”
3. Evolution doesn’t look for single predefined functions, and it doesn’t proceed by blind search. Thus dFSCI tells us nothing about whether X could have evolved.
4. Before the introduction of dFSCI, the question was “Could X have evolved, or is it designed?”
5. After the introduction of dFSCI, the question is “Could X have evolved, or is it designed?”
6. dFSCI has contributed nothing to the discussion. It is an irrelevant metric.
champignon
I don't necessarily believe that every protein sequence can be bridged to another with each step functional. That's part of the communication problem. Bear in mind that new proteins are rare. Nearly all evolution is in regulatory sequences. Petrushka
I don't think Axe has demonstrated anything significant about sequence space. He didn't test evolutionary scenarios. Petrushka
Ch: Pardon, but you are simply setting up and knocking over more strawmen. Please remind yourself of the basic, generic sci method we are all familiar with from school. The question is not whether chance variation + differential reproductive success --> descent with modification [adaptation or specialisation or loss of prior function not advantageous in a given stressed environment], or whether chance variation has a target for variation. The question is that there is a degree of complexity and specificity of configuration that achieves an observed function, that is from a relatively narrow zone of a much wider space of possible configs. Consequently, a random walk in the space that is not correlated to its structure and zones of functional configs will be maximally unlikely to hit any such zone, precisely because it is a blind random walk on the resources of our solar system or the observed cosmos. Of course if we are in such a zone, T, we may profitably discuss incremental adaptations, but that does nothing to answer how we can hit the required shorelines. And, contrary to what you wish were so, dFSCI, and similar quantifications of such search challenges, are valid, are empirically substantiated as reliable signs of cases where the sort of blind -- non foresighted -- search described will fail with maximum likelihood. In addition, we have billions of cases in point that such dFSCI, where we directly and independently know the cause, is produced by intelligence. We have every epistemic right to trust it as a reliable index of design, absent a specific counter instance that is credible. It is worth the while to remind ourselves from Newton in Opticks, Query 31, which elaborates on Newton's rules of reasoning in Principia, especially Rule 1 that Joe is so fond of naming:
As in Mathematicks, so in Natural Philosophy, the Investigation of difficult Things by the Method of Analysis, ought ever to precede the Method of Composition. This Analysis consists in making Experiments and Observations, and in drawing general Conclusions from them by Induction, and admitting of no Objections against the Conclusions, but such as are taken from Experiments, or other certain Truths. For Hypotheses are not to be regarded in experimental Philosophy. And although the arguing from Experiments and Observations by Induction be no Demonstration of general Conclusions; yet it is the best way of arguing which the Nature of Things admits of, and may be looked upon as so much the stronger, by how much the Induction is more general. And if no Exception occur from Phaenomena, the Conclusion may be pronounced generally. But if at any time afterwards any Exception shall occur from Experiments, it may then begin to be pronounced with such Exceptions as occur. By this way of Analysis we may proceed from Compounds to Ingredients, and from Motions to the Forces producing them; and in general, from Effects to their Causes, and from particular Causes to more general ones, till the Argument end in the most general. This is the Method of Analysis: And the Synthesis consists in assuming the Causes discover'd, and establish'd as Principles, and by them explaining the Phaenomena proceeding from them, and proving the Explanations.
This of course seems to be the root source for the sort of sci method summary you would have met in school. The emphasis on induction and acknowledgement of limitations and confidence are instructive. GEM of TKI kairosfocus
GP: Cf here on from a recent textbook by a major US publisher, on the [darwinian macro-]evo is a fact game. Notice, onward, how Wikipedia blandly tries to redefine what "fact" means. KF kairosfocus
Petrushka,
"The model for a disconnected sequence space is a cryptogram. There is no way to navigate incrementally to a solution. So GAs and evolutionary algorithms cannot break modern encryption. Only brute force."
This is exactly my problem. The islands of function that are frequently referenced on this blog likely contain well connected segments of sequence space, within the real estate of each island, corresponding directly to functional significance. So the question isn't necessarily if any of sequence space is well connected, but whether a substantial part of it is -- at least enough to provide navigation between basic protein domains. I guess in my mind, these separate domains are separate islands; so even with chunks of functionally connected sequences, which there are sure to be, the entire sequence space is vast, and there's a fairly small sliver of likelihood that any disconnected search will bridge the gap. I'll try to explain why I think so. This is the cryptogram problem, in my reasoning. If we accept for the sake of argument Axe's 10^-74 value for the ratio of folding sequences (this in a space of 20^150) then we have essentially the same cryptographic search issue, one hit for every 10^74 values. A key drawn from a space of that size is about the same as a 246-bit key (log2(10^74) ≈ 246).
We consider a 128-bit key to be safe against brute force; shorter keys have been broken that way, but at some key size the resources of the universe cannot break it.
I think that the 10^-74 folding ratio presents just such a problem.
When you argue that DNA sequence space is disconnected and cannot be navigated incrementally, you are saying it is equivalent to a cryptogram.
I believe that unless sequence space is well connected enough that for every protein domain a bridge can be built to another, we are left with a brute-force improbability/implausibility for any blind search. Of course you are aware that I don't think intelligence is limited purely to trial and error by blind search.
You then assert that something called “intelligence” can break it. I’m sorry, but I don’t buy it. I am not aware of anything intelligence brings to the table that enables breaking a modern cypher.
I definitely get the issues you have with intelligence, at least to some degree-- that you suppose intelligence can only function by evolutionary processes of trial and error. While I agree that intelligence can make use of trial and error, as is evident by our use of computers and computation to solve certain types of problems, I don't think it is limited to such. We have the ability to imagine abstractions of physical systems and then to actuate them concretely via the manipulation of matter. This isn't an ability or a property of intelligence to be trivialized, IMO.
Older cyphers are connected spaces and can trivially be broken by GAs.
Yes I think so, GAs or other heuristics.
So the problem for both evolution and ID is to characterize sequence space. Douglas Axe has noticed this and has made what I consider to be an interesting attempt. I don’t buy his conclusions, but I accept his characterization of the conceptual problem.
I agree that the nature of sequence space can potentially be a problem for evolution and for ID, but not for the same reasons. I tend to hold the view, as I'm sure you're aware, that intelligence is capable of bridging disconnected spaces conceptually, although I think you have some advantage here, considering we don't yet possess the ability to understand folding and function to such an extent that we are able to engineer proteins for a purpose. So to that extent I can't demonstrate that human intelligence is up to the task; I can only give a rationale for why I think it's not impossible to conceive of. Here is something interesting, if not relevant: Crowdsource gamers best computers on protein folding. More here: PDF And here: http://fold.it Apparently there's something about intelligence that "gets it" even when the problem is only partially defined. material.infantacy
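For what it's worth, the arithmetic behind that key-size comparison is easy to check (a minimal sketch in Python; the 10^-74 figure is simply Axe's estimate as quoted above, not something the sketch establishes):

```python
import math

folding_ratio = 1e-74                 # Axe's estimated hit rate, as quoted above
equivalent_key_bits = -math.log2(folding_ratio)
print(round(equivalent_key_bits, 1))  # 245.8 -> roughly a 246-bit key

# For scale: exhausting a 246-bit keyspace takes ~2^246 trials, about
# 2^118 (~3 x 10^35) times as many as exhausting a 128-bit keyspace.
```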
I suppose it is like the old chicken and egg problem, adding some questionable attempt to unravel it to an implication. =D material.infantacy
The old chicken and egg problem. Without trying to evade the problem, I suspect the official position is that eggs (in general, not chicken eggs) preceded chickens. At the moment it seems likely that replication preceded the code, but it's a nice problem. Petrushka
"Evolution works as a property of chemistry. Spiegleman’s monster, having only a few dozen base pairs, evolved. When you bet against evolution you are betting against chemistry."
I think evolution is a property of chemistry, in the manner that computers are a property of electricity, and airplanes are a property of fluid dynamics. There is a huge issue here, to my mind. The idea of chemical necessity/evolution requires two separate theories of protein evolution that need to be detailed.

The second of these is the algorithmically guided search through sequence space performed by living organisms to facilitate their reproduction and variation. This is a mechanism of stunning sophistication. This is the given. As the story goes, this carefully crafted mechanism gives rise to novel protein emergence by way of reproductive trials upon random variations in living organisms. We're told that this mechanism facilitates the generation of novel functional complexity, and that it's the host to both random and non-random variations which give rise to said novel complexity. This DNA-based replicator is the capable engineer of unique proteins which facilitate the hill-climb skyward, to the top of mount improbable.

And yet, this process presupposes the proteins, and their integration into a system, required to make this mechanism function. DNA-based replication requires a host of functional proteins, each necessary for reproductive success. I don't think anyone would try and argue that we can explain the emergence of, say, the set of polymerases, by processes which require the presence of those same polymerases in order to function in the first place. The common interlocutor here at UD would say that the origin of life problem is distinct from Darwinian evolution, and that the two shouldn't be confused.

Therefore, we're left with an entirely separate chemical process -- the first one, which requires no DNA, and requires no proteins, but can accomplish the same feat: the sequencing of proteins which can perform specific functions, by way of a blind process. This chemical mechanism must also give rise to the more sophisticated mechanism, DNA-based replication, by building the same order of highly specific proteins which allow the DNA-based replicator to perform, and itself go forward to produce more unique proteins.

Since each distinct system relies on reproductive success, the manufactured proteins must provide functions conferring selective advantage in both types of disparately functioning systems, or they cannot be selected, within the context of either system. So where the DNA-based replicator is presupposed, in order to design and manufacture novel structures, there must exist a separate and distinct system which accomplishes the same feat, while giving rise to the successor system. Both systems are said to be evolutionary -- yet we're reminded that one has nothing to do with the other -- that Darwinian evolution is a separate problem from OOL.

Assuming proteins can be designed as well as manufactured inside of a DNA-based system, those same types of proteins must also be the product of a separate chemical system, antedating the first, which functions by different rules yet produces a similar product. Both systems must be able to find sequences which fold, and have a function which confers a selective advantage at that specific time in the organism's history. Both processes must be linked by common proteins, which confer selective advantage in both systems, if one is to give rise to the other.
Both systems must possess the ability to navigate mind-explodingly vast sequence spaces to find functional configurations that not only fold, but are relevantly functional, simultaneously, to both uniquely operating processes. So I'm wondering what this all means. Is there a third, overarching force in the universe, called evolution, which readily produces similar products in differing ways -- one system giving rise to the other -- or are there two distinct phenomena which need to be explained by differing mechanisms altogether; or is it both? In either case, protein "evolution" requires at least two disparately operating mechanisms -- the simpler, antecedent one coordinating the construction of the vastly more sophisticated, DNA-based one, by constructing integrated systems required by the second, that can also confer advantage in the first. One system is a DNA-based replicator, requiring a tall minimum of preexisting function (DNA, RNA, polymerases, spliceosomes, synthases, etc., and their corresponding DNA codes), and the other, prior system produces the same or similar products, plus the entire succeeding system, by a completely different mechanism, having a pittance of the functional complexity. Of course none of this is a problem for evolution. ;) material.infantacy
gpuccio:
Well, my emotional reaction about that are quite different. Those disclaimers were for me one of the meanest things I have ever witnessed.
It seems to me that the meanness or otherwise of the Lehigh disclaimer could be better judged by placing it here:
Department Position on Evolution and "Intelligent Design"

The faculty in the Department of Biological Sciences is committed to the highest standards of scientific integrity and academic function. This commitment carries with it unwavering support for academic freedom and the free exchange of ideas. It also demands the utmost respect for the scientific method, integrity in the conduct of research, and recognition that the validity of any scientific model comes only as a result of rational hypothesis testing, sound experimentation, and findings that can be replicated by others. The department faculty, then, are unequivocal in their support of evolutionary theory, which has its roots in the seminal work of Charles Darwin and has been supported by findings accumulated over 140 years. The sole dissenter from this position, Prof. Michael Behe, is a well-known proponent of "intelligent design." While we respect Prof. Behe's right to express his views, they are his alone and are in no way endorsed by the department. It is our collective position that intelligent design has no basis in science, has not been tested experimentally, and should not be regarded as scientific.
PaulT
Not everything is compatible with evolution: a functional sequence space that cannot be connected incrementally is not compatible with evolution. You guys are on the right track. I just happen to think you are wrong in the characterization of the space. Petrushka
Yet again, begging the question by comparing the hypothetical to the observed, and in spectacular fashion. And creating a false choice - accept or reject both chemistry and evolution. Why use examples that make the exact opposite of your point? It's not the examples that anyone objects to. They never have even the vaguest relation to the origin of anything. It's the bizarre extrapolation, imagining that the "evolution" of 4500 bases to 218 bases, losing the function of coding for proteins in the process, can tell us where the 4500 bases and the function they had came from. Or where anything came from, or why, or how. It never ceases to amaze me how absolutely anything and everything is a confirmation of evolution, even evolving something into a fraction of itself and losing its function. No wonder there's a mountain of evidence. I don't think it's possible to swim against this current. ScottAndrews2
I see nothing common sensical about favoring a science fiction fantasy over observable phenomena. I see nothing sensible about postulating a designer who magically plucks several hundred bit cipher keys out of thin air. If you want a Nobel prize, demonstrate how the hypothetical designer overcomes the big numbers. Produce a theory of design that does not require any subset of evolution. Alternatively, demonstrate that Thornton is wrong. Petrushka
Evolution works by transitioning through successive functional intermediates, not by exhaustively sampling the search space. The important question is how well-connected the functional space is, not the ratio of target zone to search space.
I'd certainly like to see someone respond to the cipher key analogy. If functional sequences are truly isolated they are mathematically equivalent to cipher keys of equivalent length. I know of no theory that allows intelligence of any finite power to break cipher keys of lengths equivalent to coding sequences. How does the designer do it? In the Lenski experiment, evolution did it with brute force, trying every combination. But of course the functions were connectable. My question would be, what evidence is there that function is not connected? Petrushka
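The disagreement over connectedness can at least be stated precisely. Here is a purely illustrative toy (Python; an invented 40-bit landscape, emphatically not a model of real protein space): the same one-bit-at-a-time walk finds the target quickly when there is a path of ever-fitter intermediates, and effectively never when all intermediates are equally non-functional. Which of the two pictures resembles biological sequence space is the empirical question both sides keep pointing at:

```python
import random

L = 40
TARGET = [1] * L

def hill_climb(fitness, steps=20000):
    """Flip one random bit per step; keep the flip if fitness does not drop."""
    s = [0] * L
    for step in range(steps):
        t = s[:]
        t[random.randrange(L)] ^= 1
        if fitness(t) >= fitness(s):
            s = t
        if s == TARGET:
            return step          # steps taken to reach the target
    return None                  # never reached within the step budget

smooth = lambda s: sum(a == b for a, b in zip(s, TARGET))  # connected: fitness rises toward target
spiked = lambda s: 1 if s == TARGET else 0                 # isolated: all intermediates score zero

random.seed(0)
print("smooth landscape:", hill_climb(smooth))   # typically a few hundred steps
print("spiked landscape:", hill_climb(spiked))   # None
```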
KF, As I have already explained, "X has high dFSCI" does not mean "X could not have evolved". All that "X has high dFSCI" means is that "the predefined function of X could not be found in a reasonable time by a completely random blind search." Evolution doesn't look for single predefined functions, and it doesn't proceed by blind search. Thus dFSCI tells us nothing about whether X could have evolved. Before the introduction of dFSCI, the question was "Could X have evolved, or is it designed?" After the introduction of dFSCI, the question is "Could X have evolved, or is it designed?" dFSCI has contributed nothing to the discussion. It is an irrelevant metric. champignon
That's what Prigogine would have said. He called Darwin the greatest chemist in the world in his Nobel lecture, if I remember rightly. However, IMO that is an overstatement. As others have pointed out since Prigogine's time, when one bets against evolution, one bets for statistics and common sense. Prigogine's theory fails to explain the emergence of control. It may seem that Prigogine's or any other self-organisation theory does away with the mystery of life. But that is only the first impression. Eugene S
It is very much similar to accusing coloured people in the sixties of being paranoic for believing they were not treated fairly.
what exactly would you teach that is currently forbidden? Please be specific. Give us a three- or four-sentence statement of things that are currently not allowed, but which need to be said. Petrushka
Petrushka, I'll bet even more money that if you re-read my post, you'll find I never bet against learning anything. But what do you mean by "progress?" You suspect, as many do, that all these things come from darwinian evolution. If they did, then learning more about how they did is progress. If they did not, then it is impossible to "progress" toward learning how they did. Whenever you use the word "progress" in that sense you reveal that to you, the future of scientific discovery is a foregone conclusion. Somehow you magically already know where it's going to lead. That's the only way that you could call a step in any direction "progress." ScottAndrews2
Ch You are leaving out some crucial steps, and so setting up a strawman that you then knock over. This, in spite of great pains taken to be clear. GP has nothing to retract, and is quite correct:
1 --> Function can be objectively identified, in relevant cases.
2 --> So can complexity.
3 --> So can configurational specificity of function.
4 --> So can digital codes.
5 --> So, then, can dFSCI, which is in fact commonly observed, e.g. posts in this blog.
6 --> It is observed that some functions can be reached by chance, e.g. the random text generation cases up to 24 ASCII characters.
7 --> However, as just indicated, these tend to be simple in the relevant sense, well within the sort of limits that have been identified for our solar system or the observed cosmos. Practically and simply, 500 - 1,000 bits.
8 --> Digitally coded, functionally specific, complex information is actually commonly observed, e.g. posts in this blog, the Internet, libraries etc etc. In many of these cases we separately know the cause of origin.
9 --> In all these known cases, the cause of dFSCI is intelligence. There are no credible counter-examples. (The GA case is not a counter example for reasons pointed out above and elsewhere, over and over again.)
10 --> On fairly standard analysis, we can see why cases of such dFSCI will come from narrow zones T, in much wider spaces of the possible combinations of elements. So much so that, on needle in haystack grounds, to get to zones T by chance-based random samples of the domain W will be maximally improbable. (Hill-climbing algorithms and processes that operate within islands of function as just outlined will be irrelevant, e.g. GAs.)
11 --> In short, dFSCI is a strong INDUCTIVE sign of design, and can be taken as a reliable sign of design, subject to the usual provisionality of inductive inferences in science and generally.
12 --> To overturn this, all that is needed is to provide a solid counter-example. Just as, to overturn the laws of thermodynamics, all that is needed is a solid counter example, and just as right now it looks possible -- not yet final -- that Einstein's c as universal speed limit postulate just might have met its counter example.
13 --> So, your objection above is a strawman error. GEM of TKI kairosfocus
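For concreteness, the OP's log-reduced metric, Chi_500 = I*S - 500, can be applied to the figures in the list above (a minimal sketch in Python; S is treated here as a simple 1/0 dummy variable for functional specificity, and 7 bits per ASCII character is assumed):

```python
import math

BITS_PER_ASCII_CHAR = math.log2(128)       # 7 bits per character

def chi_500(info_bits, specified):
    """Chi_500 = I*S - 500; values above 0 exceed the 500-bit threshold."""
    S = 1 if specified else 0
    return info_bits * S - 500

short_text = 24 * BITS_PER_ASCII_CHAR      # the ~24-character random-text cases: 168 bits
long_text = 143 * BITS_PER_ASCII_CHAR      # 143 characters: 1001 bits

print(chi_500(short_text, specified=True))   # -332.0 (below threshold)
print(chi_500(long_text, specified=True))    #  501.0 (above threshold)
```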
Regarding science education in schools - I don't know what everyone else's experience was. There was some discussion of the scientific method, but nothing nearly on the level of what you described. Not even in the ballpark. 90% of science was memorizing stuff related to science.

As for my specific, strongly-worded charges against science educators in this country:

Appeal to authority - it's stated over and over that most scientists are certain that this is where biological diversity came from. It's not wrong to state that if it's true, but that's what they lead with. It tells students up front that they don't need to critically analyze any of the weak evidence to follow, because really smart people already did that for them.

Exaggeration and misinformation - Jonathan Wells has this covered quite well in his examinations and findings from textbook contents. Here are some specifics. What if his take is completely slanted? They're still teaching Miller-Urey, Archaeopteryx, Haeckel's embryos, and the peppered moths. One can't help but wonder why they can't find something better to fill those pages with.

What about an explicit directive to withhold evidence and avoid critical examination? It sounds too twisted to be true. But what does the NCSE say on their very own web site? Some text from proposed Oklahoma legislation, as quoted directly in the NCSE's own press release:
The bill also provides that teachers "may use supplemental textbooks and instructional materials to help students understand, analyze, critique, and review scientific theories in an objective manner." This bill does not propose that schools teach creationism or intelligent design, rather, it is the intent to foster an environment of critical thinking in schools including a scientific critique of the theory of evolution.
What's astounding is not that the NCSE calls this "anti-evolution." It's that they don't even see the need to say why they do. They simply imply that the wrongness of analyzing and critiquing a scientific theory is self-evident. If critique of a scientific theory is evidently anti-science, then in what context would any contradictory evidence be raised? In what context may a science teacher state that no one knows how a single protein domain might have evolved? What you have reasonably acknowledged as accurate they would brand unmentionable. Could that information affect a student's perception of the theory? It might. It should. But students are to be intentionally and carefully kept ignorant of such knowledge unless they seek it outside of the classroom. That is an explicit directive to withhold evidence and avoid critical examination.

Here is their published list of supporters, beginning with the AAAS, publishers of Science. That's right, the publishers of Science fund efforts to oppose permitting critique of consensus science in the classroom. I realize the apparent contradiction as I accuse educators of withholding information while citing legislation that proposes sharing it. But look at where the opposition comes from. I'm not aware that even educators themselves are specifically opposed as a group to meaningful teaching of science. But the opposition to it is real, and is supported by associations of mainstream scientists. The NCSE does not have the power to dictate the standards of science education, but it is funded to speak on behalf of mainstream science and does so with the consent of the community. That they don't win every battle does not minimize how screwed-up it is that they are fighting it.

The concept of consensus loses validity when the community seeks to ensure that students are not taught the value of questioning that consensus, and even enters legal battles to ensure that they don't. By your own standards, which I certainly agree with, such people should be sent to British schools for remedial education rather than influencing the science education of others.

As a disclaimer of my own, I'm fully on board with not teaching ID in public schools. I don't think it's ready. I do think that the idea of design would fare better in the minds of many students if they were bombarded with less propaganda to the effect that darwinian evolution is beyond questioning and encouraged to do more than skim over what looks like confirming evidence. ScottAndrews2
gpuccio,
You seem to imply that my reasoning is: dFSCI exists, therefore biological information is designed. It’s not that way.
You wrote this earlier in the thread:
dFSCI is an empirical concept. The reasoning goes this way, in order: a) We look at things that we know are designed (directly, because we have evidence of the design process), and we look for some formal property that can help us infer design when we do not have that direct evidence. b) We define dFSCI as such a property. c) We verify that on all sets of objects of which we know whether they were designed or not (human artifacts, or certainly non-designed objects), the evaluation of dFSCI gives no false positives, although it gives a lot of false negatives. d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information).
And as recently as October, you were saying things like the following:
IOWs, a purely random system cannot generate dFSCI. A purely random system + NS cannot do that. If you know other necessity mechanisms that can be coupled to a purely random system and behave better, please declare what they are. Design can generate dFSCI (very easy to demonstrate). Therefore, the design inference remains the best explanation for what is observed (dFSCI in biological beings) and cannot be explained in any other way. [Emphases mine]
Which is it? For the record: Do you claim that dFSCI is a reliable indicator of design, or do you retract that claim? champignon
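As an aside on the a) through d) procedure quoted above: stripped of the dispute about whether the metric itself is sound, it describes a familiar validation workflow, calibrate a test on cases of known provenance, require zero false positives, and only then apply it to disputed cases. Here is a minimal sketch of that workflow in Python; the dfsci_bits values and the 500-bit threshold (the Chi_500 convention discussed on this blog) are placeholders, not a real computation.

THRESHOLD_BITS = 500   # per the Chi_500 convention; purely illustrative here

def infers_design(dfsci_bits):
    # A design inference is drawn only above the threshold.
    return dfsci_bits > THRESHOLD_BITS

def no_false_positives(known_cases):
    # known_cases: (dfsci_bits, actually_designed) pairs for objects whose
    # origin is independently known (steps a to c).  The claimed property is
    # zero false positives; false negatives are tolerated.
    return not any(infers_design(bits) and not designed
                   for bits, designed in known_cases)

# Step d) would apply infers_design() to objects of disputed origin only if
# no_false_positives() passes on the known set.

Whether any real dFSCI computation actually has the zero-false-positive property is, of course, exactly what the rest of this exchange disputes.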
Evolution works as a property of chemistry. Spiegelman's monster, having only a few dozen base pairs, evolved. When you bet against evolution you are betting against chemistry. Petrushka
If your money is bet that we will never reconstruct all the genomes that ever existed, you are safe. Or find a fossil representing every species that ever existed. If you are betting against progress on these fronts, you will lose. Petrushka
No, I don't think you think I lie to children! I just want you to know how passionately I think you shouldn't :) As for your portrayal of how biology is taught in America: I can't comment (I'm a Brit). But I can't believe it is as bad as you imply. Can you give me an example (for when I get back) of the "appeal to authority, exaggeration, misinformation, and above all, an explicit directive to withhold evidence and avoid critical examination"? Because that is the exact reverse of any proper brief for a science curriculum! Which should be:
Appeal to empirical evidence and logical argument
Consideration of limitations
Accurate measurement
Consideration of all the data, and of the problem of cherry-picking
Critical examination of all conclusions and the generation of alternative explanations for the data.
I could almost have typed that off any science education program :) Elizabeth Liddle
Google is working on cars that drive themselves. The potential for safety and more efficient traffic flow is huge. Plus we wouldn't need parking lots because cars would drop us off and pick us up. We could use them as a service rather than have our own. But I don't see what you're getting at. This is all stuff we design. None of it is possible without setting a target. Even if evolution could innovate all this stuff, what would you get without setting a target? The best blender ever that can chop anything, rolls over obstacles and climbs vertical surfaces to reach the vegetables, and writes poetry? ScottAndrews2
You can also get some really cool stuff if you take some English text and use Google Translator to run it through several languages and then back to English.
That, or should have been the issue. If 'tis nobler in the mind of the bear Radio and slings and outrageous fortune Or to take arms against a sea challenge And by opposing end them? Kufa, to sleep
V: You Beat: Resistance is futile. Let you have not been corrupted as Obi-wan. V: Is there any escape? I do not destroy them: Luke, we recognize its importance. Are beginning to reveal its power. Participation in, and I complete their training. Our collective strength to our Galaxy, we can end this destructive conflict. L: I can not join you. V: If you only knew the dark side of you. WAN group is that what happened to my Father. L: He said to me. He killed you. V: No, I am your father. L: No, no, it does not correspond to reality: it is not possible. V: Search your feelings, you know, that's true. L: No!
ScottAndrews2
Ever read James Joyce? I'm not going to argue that programs can do everything humans can do. I will predict that computers will tend to take over more and more tasks that were once the domain of brains. At first they won't do them as well, but eventually they will do them better. Already there are airplanes that can't be flown by humans. I suspect in 50 years humans won't be allowed to drive on highways. It's a losing bet that evolutionary algorithms will not improve and begin managing more and more of our civilization. Of course they may have managed the financial bubble, so they can make the same kind of mistakes that humans make. Petrushka
But I think you are quite wrong in asserting that simple algorithms cannot generate original utterances.
Such as this:
I ate my leotard, that old leotard that was feverishly replenished by hoards of screaming commissioners. Is that thought understandable to you? Can you rise to its occasions? I wonder. Yet a leotard, a commissioner, a single hoard, all are understandable in their own fashion. In that concept lies the appalling truth.
Is that what you had in mind? ScottAndrews2
if incremental change is possible, that evolution will work.
I had no idea that this was a sticking point. I'm less than nobody. But for what it's worth, if separate protein domains or genomes are traversable by individual selectable variations then of course evolution would work. I don't think that anyone is saying that it's impossible just for the sake of saying it's impossible. But on the surface it appears implausible. More in-depth, rigorous examinations only reveal in more detail what an evolutionary process has to accomplish, and that hasn't worked in its favor at all. That leads to a reasonable, rational assumption that darwinian evolution is not the answer and hits the ball back into that court where it will stay forever or until some truly astonishing evidence is revealed. My money is on forever. ScottAndrews2
Elizabeth, I hope it goes without saying - I don't think you lie to children or anyone else. The evidence I'm aware of supporting and contradicting the extent of darwinian evolution is one thing. This makes up an entirely separate line of evidence. I find it incredible to believe that something supported by an abundance of evidence and not substantially contradicted can only be taught to children and young adults by appeal to authority, exaggeration, misinformation, and above all, an explicit directive to withhold evidence and avoid critical examination. It's damning. It's the kiss of death.

It's commendable that you reject it, but most condone it. If they did not, Eugenie Scott wouldn't be able to attain an educational post higher than gym teacher. Those who condone it are complicit, and even if they told me the earth was round I'd want a credible second opinion. For the scientific community to condone such educational goals destroys their credibility with regard to this subject matter. You might question how they have condoned it. Okay, to regard it with ambivalence destroys their credibility. To silently disapprove at a minimum damages their credibility. I express myself in such strong words because I perceive a spirit of complacence.

Truth does not hide from the light and try to break all the light bulbs. On top of the insufficiency of confirming evidence and the weight of disconfirming evidence, too many proponents of darwinian evolution behave conspicuously as though they have something to hide and openly wield ignorance as their weapon of choice. That's one heck of a dark cloud hanging over it. ScottAndrews2
...an example of text that certainly has more than 1000 bits of functional complexity, according to my demonstration, and therefore allows a safe design inference. How do you believe that text was written? By evolution?”
I certainly know that brains embody evolutionary algorithms. It's quite clear when studying animal behavior and learning, which is the subject of my formal training. It's not as clear in the case of language. There was, about 40 or 50 years ago, a rather famous debate between B.F. Skinner and Noam Chomsky on this topic. Chomsky took the anti-evolution stance and convinced the linguistics community. He expounded a version of irreducible complexity that sounds exactly like Behe's. My own position is that they were both right and both wrong. This happens in the infancy of any science. Strong positions are taken by people before a phenomenon is understood. It seems necessary for the formation of testable hypotheses. But I think you are quite wrong in asserting that simple algorithms cannot generate original utterances. I don't think computers can pass an extended Turing Test, but they can certainly churn out syntactically correct sentences and paragraphs. They are quite capable, for example, of exploring a novel environment and generating a description that would pass any test of grammatical correctness. Not Shakespeare, but then neither am I. Petrushka
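The narrow claim that a simple algorithm can churn out novel, grammatical sentences is easy to illustrate. The sketch below uses a toy context-free grammar (the vocabulary is borrowed from the "leotard" passage quoted earlier in the thread); it says nothing about whether such output is meaningful or comparable to human language.

import random

# A tiny context-free grammar; expanding it at random yields sentences that
# are syntactically well formed even though no sentence is stored anywhere.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["the", "N"], ["a", "N"], ["the", "ADJ", "N"]],
    "VP":  [["V", "NP"]],
    "N":   [["leotard"], ["commissioner"], ["hoard"]],
    "ADJ": [["feverish"], ["screaming"]],
    "V":   [["replenished"], ["ate"], ["understood"]],
}

def expand(symbol):
    if symbol not in GRAMMAR:
        return [symbol]                       # terminal word
    production = random.choice(GRAMMAR[symbol])
    return [word for part in production for word in expand(part)]

print(" ".join(expand("S")))  # e.g. "a commissioner ate the screaming leotard"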
So you do realize this. Does it not follow that a GA searching one space is no indication of whether one could search another?
Of course I realize this. I am completely committed to the concept that evolution is impossible if incremental change is impossible. I think it would be interesting if ID advocates would admit that if incremental change is possible, that evolution will work. There are, of course, many kinds of known genomic change. Koonin and Shapiro have listed many. I've recently seen a figure of 47 kinds mentioned. You also have to realize that change and selection are not limited to refining function. There are many kinds of sideways change, and many kinds of utility that are not obvious from a narrow utilitarian view. It is not obvious, for example, why dragging around pounds of tail feathers is useful. It seems to attract mates, but the reason the mates select for tail feathers is obscure. It must also be noted that a simple loss of one function may be useful. Hence the famous loss-of-function mutations that enhance survival of bacteria exposed to antibiotics. These are some of the reasons that design is a troublesome concept. There is no single dimension of change that is obvious except in retrospect, and even then it may not be obvious. But back on topic, I think there is a reason why Thornton et al are looking for incremental pathways. It's an obvious entailment of evolution that such pathways exist. It's also true that pathways may be obscure. Petrushka
"Evolutionary algorithms are the only process known to be able to navigate huge search spaces. The ID movement attributes magical capabilities to “intelligence” — the ability to see function in sequences that are completely disconnected. That is simply magic, and I challenge you to demonstrate a non-magical way this can be done."
Actually, huge search spaces can be navigated, and usually more efficiently, by other algorithms and heuristics. I would venture to predict that any ordered search space will be shown to be navigable by algorithmic methods at higher efficiency than by a genetic algorithm. Besides, a successful GA will generally take advantage of existing algorithms and heuristic methods in order to provide trial-and-error optimizations that are otherwise impractical to test experimentally. This is all the wonderful application of intelligent solutions to reasonably defined problems in order to arrive at an optimal solution. We do it all the time, but blind forces do not. Algorithms, genetic or otherwise, are both a product and a tool of artifice.

The "magical" capabilities of intelligence are nothing more than what we observe by direct experience. Indeed, they seem quite magical, considering that neither a chainsaw nor an iPhone would ever come about without intelligence. It is the observation of highly contingent and specific, functionally purposeful configurations that registers positive for design when we see them. What we observe in the chainsaw and in the iPhone are attributes for which non-intelligent causes are instinctually ruled out, because they are inadequate. It is those attributes which defy material explanations, absent the question-begging invocation of material processes as a cause for intelligence.

Of course, we should welcome with open arms attempts to identify non-intelligent mechanisms for producing chainsaws, iPhones, satellites, and airplanes. Barring that, however, we note that beings in possession of certain attributes, namely the innate ability to model abstract concepts and their relationships with one another, and then to express those concepts concretely by whittling away excess matter and adding some here and there, are the only known source of such objects. This ability begs an explanation, and so far material processes have been utterly impotent to provide one. Reason demands that we consider intelligence a unique force in shaping matter for purpose, with foresight -- so that when we observe functional purpose in concert with very low probability, we can infer design. If material processes are ever vindicated as a causally sufficient mechanism for producing sophisticated engineering, we'll have no need of a design inference. material.infantacy
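The "ordered search space" point can be illustrated with a deliberately artificial toy (it says nothing about biological fitness landscapes): when the space has exploitable structure, a method built for that structure, here binary search, needs about log2(N) probes, while unstructured guessing needs on the order of N. A real genetic algorithm sits somewhere between those extremes depending on the landscape, which is part of what is being argued over in this thread.

import random

N = 2 ** 20                       # an ordered space of about a million points
target = random.randrange(N)

# Structured search (binary search): roughly 20 probes.
lo, hi, probes = 0, N - 1, 0
while lo <= hi:
    mid = (lo + hi) // 2
    probes += 1
    if mid == target:
        break
    elif mid < target:
        lo = mid + 1
    else:
        hi = mid - 1

# Unstructured guessing: on the order of N probes on average.
guesses = 1
while random.randrange(N) != target:
    guesses += 1

print(probes, guesses)            # typically ~20 versus several hundred thousand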
That's not a plug btw, although I'd be delighted if you bought it. It just happens that I wrote it out of my utter commitment not to lie to my child. Elizabeth Liddle
Obviously I do not think that children should be taught bogus facts. It is because I refuse to lie to children that I wrote this: http://www.amazon.com/Pip-Edge-Heaven-Elizabeth-Liddle/dp/0802852572 Elizabeth Liddle
It is right that children are taught the consensus view, and that it is the consensus view, but that all such views are provisional.
I agree in a sense. The underlying problem is that the consensus view is unwarranted, not that it is being taught. But it seems clear cut that many educators esteem maintaining that consensus view even if it means going out of their way to withhold relevant knowledge. There are two separate issues. One is that such behavior is despicable and unethical. It's one thing to withhold knowledge, another to do so under the pretense of education. Anyone who believes that students should be presented only the evidence that will enable them to reach one specific conclusion, and that thinking about contrary evidence is harmful, has no place in a classroom, let alone writing policy. (This is what makes me cross.) It's unfortunate that when my son goes to school, I must forewarn him that although teachers usually aim to educate, at times they will deliberately mislead. Without a doubt, telling a child that 'most scientists' believe something and withholding available contradictory evidence is deliberately misleading. The silver lining is that it prepares him for the real world, where he must learn to carefully weigh propaganda regardless of where it comes from. But that is no excuse.

The second issue is, should not anyone and everyone renew their skepticism of a consensus view that is protected with lies and half-truths told to children? It seems so obvious that it shouldn't need saying, but one does not depend upon deception to teach what is true or provisionally accepted. If someone tells us a story and backs it up with bogus facts, the normal reaction is to disbelieve the story and question their motivation, not to comb through it for accurate details and make excuses for the rest. Why should this be any different? ScottAndrews2
Elizabeth:
It is IDists who have concluded that because Darwinian theory fails, ID is the default conclusion.
That is false and a blatant misrepresentation of the facts. So here it is AGAIN: Newton's First Rule AND the explanatory filter mandate that before a design inference is considered, the simpler explanations must be eliminated, i.e. necessity and chance. So once we do that we can consider a design inference. Ya see, there is also the design criterion that has to be met.
Darwinians do NOT make the symmetrical claim that because ID fails, Darwinian theory is the default conclusion.
Hellooo- Newton's First Rule, the explanatory filter- helloooo- Darwinians don't make that claim because they are never in any position to make that claim. We go through you, you don't go through us-> Newton's First Rule and the EF. Get a grip already. Joe
Thanks, gpuccio :) I would agree with them re the theory of gravity actually :) The theory of gravity isn't really a theory anyway, just a law, i.e. a very good mathematical predictive model. It doesn't explain anything, it just is. Darwinian evolution really is an explanatory theory, as are all the bits and pieces that form part of the current (and continuously evolving) version. But obviously, I don't agree that it is a "fact". See you in a few days :) Elizabeth Liddle
Elizabeth: I am not so concerned about what students are taught. They should certainly be taught good philosophy of science and methodology, but I don't believe that happens. Regarding the issue of ID, I would say they should be taught the consensus view (neo-darwinism), and that minority, radically different views, including ID, exist. gpuccio
Elizabeth: By the way, I agree with you that there are some facts in the evolutionary theory. And I absolutely agree with you that a theory never becomes a fact. I have fiercely defended that point many times here. I have no reference available now, but as soon as I find some of the many examples of darwinists claiming that their theory is a fact, or that it is more certain than the theory of gravity, or similar epistemological rubbish, I will point it out to you. gpuccio
From elementary school onward students are taught that evolution is believed by most scientists to be a result of specific mechanisms. By high school graduation they have heard and read it countless times, usually unchallenged.
Well, it is absolutely true that most scientists believe that evolution comes about by Darwinian mechanisms. What is important is that students should learn that all scientific conclusions are provisional. If they are not taught that, then they are being taught badly. But that's not a question of scientific content, it's a question of scientific methodology. It is right that children are taught the consensus view, and that it is the consensus view, but that all such views are provisional. In my view they should NOT be taught ID (although they should certainly be taught that no theory is a fact, and that all theories can be challenged) because it is not, in my view, and in view of the vast majority of scientists, a legitimate inference from the data. But obviously we differ on that :) Off for a few days now :) See you all later.... Cheers Lizzie Elizabeth Liddle
Elizabeth: To quote you, clearly I understand that is your view. I beg to differ. I cannot say more. gpuccio
And while you are right that we do not have a clear evolutionary account of the origin of proteins, it is not at all the only phenomenon for which we do not have, and do not claim, to have a clear evolutionary account. In fact we do not have an unambiguous evolutionary account for a single biological feature.
From elementary school onward students are taught that evolution is believed by most scientists to be a result of specific mechanisms. By high school graduation they have heard and read it countless times, usually unchallenged. It's commendable that you separate evolution from those mechanisms and realize that the evidence for their application to large-scale biological diversity is limited to irrelevant computer simulations. (I know, that's not exactly what you said.) But why is protecting students from knowing the state of the evidence for such things one of the biggest hot buttons in education? I can't think of a single other modern-day case in which so-called educators will go to great lengths to ensure that students are not educated.

In 1984, words were systematically removed from the language to limit what people were able to form thoughts about. Understanding of the outside world was controlled through propaganda. What is this but a bald-faced attempt to control young persons' comprehension of reality by carefully and deliberately deciding which truths they may or may not know? Continuing the similarities, the battle is fought through the repetition of false propaganda until it sticks. War is peace. Freedom is slavery. ID is religion. Any questioning of evolution is an encroachment of religion. We're at war with Eastasia. This is institutionalized in the United States. I don't normally toss around terms like "thought control," but that is precisely what is being attempted, and with some success. If people aren't disturbed then they aren't paying attention or they are likely victims. ScottAndrews2
I'm not saying that Behe was doing that, it's just that the university is entitled to issue a preventative. It's not unusual - it happens from time to time when a university member has a controversial view that the university wants to dissociate itself from without censoring the member, and while continuing to allow the member to post what s/he wants on his/her page on the university website.
The problem is that Behe’s views are not only controversial, but highly despicable from the academic point of view, because they are critical of one of the biggest dogmas of our culture.
Well, clearly I understand that is your view, but I'd say that the reason Behe's views are regarded as controversial by most scientists is because they consider that his views are not supported by good evidence or argument. And I agree. In contrast, when someone does raise a legitimate problem with some evolutionary account of some phenomenon, then while they might have to fight a bit to get published (it's harder to get a controversial finding published than a non-controversial one, but it will make more splash when you do), published it will eventually be. Margulis is an example, and her symbiosis theory is now widely accepted - not as fact, but as a well-supported theory - for eukaryote origins. I'm not of the view that ID is not science (not in principle anyway - that's why I was interested in Genomicus's post) but I am of the view that what science I have read in support of ID is extremely poor. That's why it doesn't get into peer-reviewed journals very often, not because of censorship. Elizabeth Liddle
If darwinists publicly admitted that their theory has not a single clue about how basic protein domains emerged, people could start thinking that after all evolution (theory) is not necessarily a fact, and that after all some of what those IDists are saying could be reasonable.
I would like evidence of this alleged confusion of fact with theory. Clearly evolution, when defined as "change in allele frequency over time" is a fact. Adaptation has also been directly observed. That does not make "the theory of evolution" a fact. No theory ever becomes "a fact". So whoever you are citing, mis-spoke. And while you are right that we do not have a clear evolutionary account of the origin of proteins, it is not at all the only phenomenon for which we do not have, and do not claim, to have a clear evolutionary account. In fact we do not have an unambiguous evolutionary account for a single biological feature. What we have, instead, is a theory that accounts for the distribution of those features, over extant populations and over time, which has repeatedly made predictions that were subsequently confirmed by evidence, and which invokes a mechanism (the Darwinian algorithm) that has not only been demonstrated to work in computer models but is actually used to generate novel solutions to engineering problems and aspects of AI. You seem to think that scientists are some monolithic powerful institutional body that has declared the "theory of evolution" to be an official "fact" that explains all biological phenomena, when there is, firstly, no such body, and secondly, AFAIK, no such pronouncement. It is IDists who have concluded that because Darwinian theory fails, ID is the default conclusion. Darwinians do NOT make the symmetrical claim that because ID fails, Darwinian theory is the default conclusion. Rather, evolutionary scientists have tremendous confidence, borne of experience, that evolutionary theory (and other scientific theories) can be found to account for current challenges to our understanding. Therefore we do not see any warrant to say: this is too hard to explain, therefore ID. It is not that we think that the ID hypothesis is impossible, or wrong, or verboten. It's that we find it completely unwarranted. Elizabeth Liddle
Elizabeth: Well, my emotional reactions about that are quite different. Those disclaimers were for me one of the meanest things I have ever witnessed. I must specify that I intended moral censorship of a person, of his scientific dignity, of his work. I certainly understand that Behe is still allowed to speak, and is not in prison. I am absolutely against censorship, especially religious censorship. So, we can agree on that, at least. And I have never said that Dawkins should not have written what he has written. I was just giving an example of a scientist considering his theory as something that must absolutely be believed. Moreover, Behe was not "airing his view on the university website", least of all "using his affiliation to lend authority to his views". As far as I know, he has never done that. The university disclaimer was evidently an expression of the shame that the institution felt for his very existence and his affiliation with the university, as though that simple thing were a moral and academic blemish. Now, that's not only because Behe has controversial views. Many scientists have controversial views on many issues, and their universities have never felt the necessity to defend themselves from that fact. The problem is that Behe's views are not only controversial, but highly despicable from the academic point of view, because they are critical of one of the biggest dogmas of our culture. gpuccio
That is censorship. A university making it clear that it does not, as an institution, endorse a controversial view, and yet allowing that person to remain in post, and continue to air that view on the university website is not censorship.
It's not technically censorship. But it does call to mind the often misunderstood principle that the exception proves the rule. It indicates that when an exception to a rule is explicitly stated, it strengthens the rule by implying that there are no unstated exceptions. If a university were to post a disclaimer on every page indicating that the views expressed by members are not necessarily those of the university, that would be one thing. But by expressing such a disclaimer regarding one member or one idea specifically, it creates an implied statement that every other view on every other page is endorsed by the university. Those that are not endorsed are marked with a disclaimer. I wonder if that's what they had in mind. ScottAndrews2
But again, a disclaimer is censorship
No, it is not. It is precisely not. Preventing him from posting his opinions on the university website would be censorship. Allowing him to do so is not. Putting up a disclaimer is simply a way of informing visitors to the website that the person's views are not endorsed by the university. This is important, because it prevents the person from using his/her affiliation to lend authority to his/her views, which often happens.
is emotional propaganda
It may be propaganda, but that's what websites are - sites where stuff is propagated. Behe's page is propaganda; so is the disclaimer.
But people who write here are just people writing in a blog. Dawkins is a scientist, and the quote is from one of his scientific books.
So it's OK to say that your opponents are stupid on a blog, but not in a book? Or not if you are a scientist? Or not if you are a scientist writing a book? Why not? I thought you were against censorship? OK, I'd better go and cool off for a bit, but this stuff makes me a bit cross. I am very much against censorship, but the most egregious censorship I have read about in recent times has been religious, not scientific. In fact Dembski, one-time owner of this blog was apparently censured by his university employers for having questioned the literal fact of a global biblical flood, and apparently retracted. That is censorship. A university making it clear that it does not, as an institution, endorse a controversial view, and yet allowing that person to remain in post, and continue to air that view on the university website is not censorship. It is the very antithesis of it. Elizabeth Liddle
note to champignon, you had some objections to Near Death Experiences a while back. I think that Dr. Jeffrey Long does a very good job, in the following lecture at 'The New York Academy of Sciences', of addressing your objections:
The Reality of Near-Death Experiences and their Aftereffects - Jeffrey Long http://www.youtube.com/watch?v=DIqKTE6jNmQ
bornagain77
Dr Liddle, No offence taken. I think that mainstream science is painting itself into a corner with philosophical apriorism, in much the same way as was done in the USSR. For a fuller picture: Lysenko was not a scientist; he was a practitioner, a selectionist of Michurin's school. I think that he had his own career agenda. My granddad used to keep a published copy of Lysenko's speech to the Academy of Sciences of the USSR as an example of anti-science. That was quite an exhibit. Understandably, people often think using cliches. Especially notorious are political cliches akin to Stalin = Hitler. I think that while a person in the street can be excused for doing so, a scientist cannot. It is sad when scientists cannot see that. Eugene S
Elizabeth: Fences are not necessarily political persecutions. In so-called democratic countries, there are many other ways to build fences. If we were still at the level of political persecution, I would not be able to write here, and I would have spent my last years in prison :) gpuccio
Elizabeth: Ah, I forgot. The best scientists are obviously well aware that no explanation can be given of basic protein domains according to the neo-darwinian algorithm. The reason is probably, at least in part, that we in ID have been raising that issue for a few years (see Axe's old paper about that). But most biologists are not aware of that. You yourself were not aware of that, until I gave evidence that that was the case. Maybe even now you don't really accept it, and will answer with some new pseudo-argument against that simple point. If darwinists publicly admitted that their theory has not a single clue about how basic protein domains emerged, people could start thinking that after all evolution (theory) is not necessarily a fact, and that after all some of what those IDists are saying could be reasonable. Or not? gpuccio
Elizabeth: Why should a university not issue a disclaimer? Why should people not express their views about other people’s convictions? A university should not issue a disclaimer regarding the scientific views of one of its members. That's really unacceptable, and I am amazed that you don't agree. A university is a place to exchange views, not to censor them. Behe's views have been expressed in peer-reviewed journals and in very serious books, and they are not a crime of which a university should be ashamed, whether one agrees with them or not. A disclaimer on the opening page of a university site is certainly not the correct way to express a scientific opinion about another scientific opinion. The same is true for a colleague of Behe who felt the need to add a personal disclaimer to his personal page. That is not fine at all. There is nothing wrong if I discuss the opinions of my colleague on my site, and disagree with him. That's perfectly fine. But again, a disclaimer is censorship, is emotional propaganda. It means: "Please, don't associate me with this criminal, even if unfortunately we work in the same place!" That is shameful. Where have you read in peer-reviewed journals that “evolution is a fact not a theory?” (Evolution, by some definitions, is of course a fact, but the word is also used to refer to a theory.) I have. And it referred to the theory, not to the facts. But I cannot give you the reference. It was a few years ago. And yes, I’ve also heard/read Dawkins say that. I’ve also seen on this site people make equivalent statements about “evolutionists” – that we must be stupid, blind, or so wedded to our a priori materialism that we refuse to see the evidence staring us in the face. I see polemics on both sides. I don’t like either much. Neither do I. But people who write here are just people writing in a blog. Dawkins is a scientist, and the quote is from one of his scientific books. You had said: "There is no official, or undisputable, scientific explanation for anything. Who is supposed to be doing the “considering” here? Not any scientist I know." Well, Dawkins is a scientist, I believe. Do you know him? gpuccio
Regarding putting-up-fences-for-thought paranoia:
It is not that the methods and institutions of science somehow compel us to accept a material explanation of the phenomenal world, but, on the contrary, that we are forced by our a priori adherence to material causes to create an apparatus of investigation and a set of concepts that produce material explanations, no matter how counter-intuitive, no matter how mystifying to the uninitiated. Moreover, that materialism is absolute, for we cannot allow a Divine Foot in the door.
Professor Lewontin, "Billions and Billions of Demons", emphasis mine, full version here. Eugene S
Eugene, I did not mean to insult Russian scientists. I was simply acknowledging the fact that not all countries are, or have been, places where you are free to think and say what you like. One example was the championing of Lysenko's genetic ideas rather than Mendel's, for apparently ideological reasons, with disastrous results. Another was Russian psychiatry (although Russian psychology has rightly been hugely influential). In fact, the reason I added that sentence in brackets was because I am aware of your own background, and I know that I cannot assume that all posters here live in places where thought, and the expression of thought, is pretty well free. And I'd have thought you would agree with me that the suppression of free thought in your own country was to its detriment, no? Anyway, my apologies if I inadvertently insulted you and your country. I have the highest respect for Russian scientists, particularly given the repressive regime so many of them had to work under. It was not my intent to denigrate them or their achievements. Elizabeth Liddle
All, Please excuse my off-topic and passionate apologetics below. Dr Liddle, Science in Russia has suffered from things foreign to science. In the USSR, it was ideology, but now the high education standards (the legacy of that notoriously "bad" USSR) are deliberately being destroyed by the politicians. You have to believe me when I say that, because not everything is known or can be seen from outside. An indication of this is the high percentage of immigrants from Russia who do world-class science in the West even today. Now, about the claim that Soviet-era science was the poorer for it. All I need is a handful of counter-examples. Very quickly, what springs to mind concerning just space exploration: it was the USSR that
- launched the first sputnik into orbit,
- sent the first animal into space,
- sent the first man into space in a spacecraft,
- sent the first cosmonaut into open space,
- and, finally, the first woman in space was also Russian.
So, Dr Liddle, please don't. Even now, when Russia is not in its best form economically and politically, it is still fighting for its own project, the Russian cosmos, and for an independent way of thought in general, with implications in religion, science and philosophy. It will keep doing so until its last day. Eugene S
Why should a university not issue a disclaimer? Why should people not express their views about other people's convictions? Where have you read in peer-reviewed journals that "evolution is a fact not a theory?" (Evolution, by some definitions, is of course a fact, but the word is also used to refer to a theory.) And yes, I've also heard/read Dawkins say that. I've also seen on this site people make equivalent statements about "evolutionists" - that we must be stupid, blind, or so wedded to our a priori materialism that we refuse to see the evidence staring us in the face. I see polemics on both sides. I don't like either much.
That basic protein domains can be explained as the result of differential reproduction due to heritable traits.
Can you give a specific citation? Elizabeth Liddle
But it isn't a marker that can distinguish design intervention from evolution, because it says absolutely nothing about the landscape. Petrushka
Elizabeth: It's part of the darwinist propaganda to ridicule IDists as mere simpletons believing only that "If it looks like a duck, walks like a duck and talks like a duck, then it is a duck!". While that is not true, and ID has a lot of rigor in its reasoning, my point was that the "duck reasoning" is a perfectly correct first approach to a problem, and not something that should be considered ridiculous. Again, that reversal of the natural point of view is only propaganda, and a twisting of epistemological priorities. Can you give some examples? That basic protein domains can be explained as the result of differential reproduction due to heritable traits. There is no official, or undisputable, scientific explanation for anything. Who is supposed to be doing the “considering” here? Not any scientist I know. Well, have you ever read in peer-reviewed journals that evolution is a fact, and not a theory? I have. Have you ever heard Kenneth Miller, Nick Matzke and others argue that ID is wrong because trying to compute the probabilities of biological information is absurd, because someone must win the lottery, and any shuffle of a deck of cards is absolutely improbable, and yet, see, it happens? I have. Have you ever read the disclaimer put by Behe's colleagues on the site of his university? I have. Have you ever heard Dawkins's quote: "It is absolutely safe to say that if you meet somebody who claims not to believe in evolution, that person is ignorant, stupid or insane (or wicked, but I'd rather not consider that)."? I have. And so on. I really am not in the mood to list all the shameful things done in the name of the neo-darwinian orthodoxy. gpuccio
Exactly, I am free to think as I like. And say what I like, at least in the parts of the world where I have lived. Eugene suggested that someone (not sure who) was putting up "fences for thought". I see no-one putting up "fences for thought" (although of course there were such fences in Soviet Russia, and science was the poorer as a result). Elizabeth Liddle
Joe (and Elizabeth): As is known, I am an aspiring neo-vitalist, so I join Joe in begging to differ about the "vital force" point. gpuccio
Suggesting that an obvious explanation is in itself a crime
Where has anyone suggested this?
daily accepts models and theories that are both inconsistent and empirically unsupported
Can you give some examples?
and considers those models and theories to be official undisputable scientific explanations
There is no official, or undisputable, scientific explanation for anything. Who is supposed to be doing the "considering" here? Not any scientist I know. Elizabeth Liddle
Elizabeth: You are free to think as you like, but believe me, your position in this case is neither acceptable nor fair. It is very much like accusing coloured people in the sixties of being paranoid for believing they were not treated fairly. gpuccio
Joe: I see you have anticipated my basic argument :) gpuccio
Elizabeth: If you read what I wrote more carefully, you will see that in no way was I saying that rigor is not important. But the fact that some things appear obvious should however be considered a significant clue and not, as it happens, the opposite. Hypothesizing that obvious explanations have a right to be seriously considered is healthy epistemology. Suggesting that an obvious explanation is in itself a crime, and requires extraordinary evidence to be taken into consideration, is, IMO, bad propaganda. Moreover, rigor is beautiful, but it is beautiful on both sides. I really don't understand how your side (including you) is so exacting about rigor for any ID argument (which is fine), and on the other hand daily accepts models and theories that are both inconsistent and empirically unsupported, and considers those models and theories to be official undisputable scientific explanations (which is not fine at all). gpuccio
Nobody is "putting up fences for thought". This kind of paranoia is certainly part of the problem though. Elizabeth Liddle
GPuccio, I could not agree more. It is 100% my perception of what is happening. The only comfort is that one cannot stop fair scientific enquiry by putting up fences for thought especially in such a blind and rude way. This is a very good marker of their absolutely pathetic intellectual bankruptcy in the face of scientific findings. Eugene S
Yes and it is strange that you demand rigor yet you cannot produce any rigor for your position. BTW it does appear that living things have a vital force as there isn't any evidence that they are reducible to matter, energy, chance and necessity.
Lightning was missiles fired by the gods.
Only some people thought that
The sun, stars and planets revolved around the earth.
Yup and that was science at the time. Joe
He possibly isn't. But I think he is a) wrong when he says that there are islands south of the ribosome and b) probably wrong when he says the ribosome is an island. In either case, he is not so obviously right as to make a design inference on the grounds that the alternative is obviously wrong. And talking of "obvious", I think your appeal to obviousness is deeply flawed! The following things were "obvious" in the past:
Lightning was missiles fired by the gods.
God liked cleanliness, and hated shrimps.
Gastric ulcers were caused by stress.
Light did not have a speed limit.
Atoms were indivisible.
Distance was absolute.
Cholera was caused by miasmas.
Epilepsy was caused by possession by evil spirits.
Living things have a vital force.
The sun, stars and planets revolved around the earth.
Now we know that:
Lightning is electrical discharge between clouds and earth.
Diseases that can be avoided by hygiene are caused by germs.
Gastric ulcers are also sometimes caused by bacteria.
Light has an upper speed limit.
Atoms not only are divisible but are largely "empty", and their contents have properties that are very different from macroscopic matter, right down to causality properties.
Distance is relative.
Cholera is caused by bacteria spread in water.
Living things do not have a "vital force", and dead things can be revived.
The earth is one of several planets that revolve around the sun, which in turn is just one of many suns that revolve around the centre of our galaxy, which is just one of many in a cluster of galaxies, which is one of many clusters of galaxies.
We have, therefore, no warrant to conclude anything on the basis of "obviousness". On the contrary, some of our greatest advances have been made by scientists who questioned the "obvious". Hence our demand for rigor. Elizabeth Liddle
F/N: In addition, I should note that functionality as specification is very important, as it is objectively observable, often measurable, and locks us to the empirical world in which function is achieved by particular, Wicken wiring diagram arrangements of particular components. All sorts of objections can be made against more abstract meanings of specification, but when such is locked down to this is what we need in place, in what configuration, to get this thing to work, that is a different ball-game. The importance of this is seen in how hard the attempt is made to suggest that function is a purely subjective, arbitrary, question-begging thing. The 747 flies, or it does not. The D'Arsonval meter on its dashboard works, or it does not. The bird lung works, or it does not, and if not the bird is dead in minutes. The ATP synthase works, or it doesn't, and without ATP, you are dead. And so forth. Sure, we may be able to find cases where we can have interesting debates, but the core issue is not there; it is in what works, and what does not, based on how parts are organised into functional wholes. KF kairosfocus
champignon: Besides all these problems, CSI/dFSCI/Chi_500 all leave the most important question unanswered: Given some complicated biological feature X, could X have evolved? Wrong. The correct perspective is:
a) We have a complicated biological feature X, that looks designed to practically everybody (including Dawkins). Well, that's not really necessary, so, before you object, I correct myself and simply say: we have a complicated biological feature X.
b) dFSCI, or CSI, shows me that it could not have come out as the result of pure RV.
c) So, some people have proposed an explanation based on a mixed algorithm: RV + NS.
d) Is that explanation working? Have those people brought any valid logical and empirical argument to support the model? The answer, IMO, is no. And I have tried to explain why.
e) Design remains, therefore, the best explanation (indeed, the only convincing one).
As you can see, dFSCI or CSI are far from useless. They are absolutely necessary for point b). Can you understand that simple reasoning, or will you go back to your unsupported propaganda? gpuccio
"your argument depends on knowing" Nothing of the sort. We argue from empirical observations in probabilistic terms. The only sound refutation of probabilistic arguments is to demonstate the 100% impossibility of something, which I am sure you understand is extremely hard (if not impossible). If one can't do that, one should retain all hypotheses according to Bayes's logic. I am not sure about Dembski, but ID in general does not preclude anything from happening a priori. This is exactly opposite with evolutionists' vague reasoning. ID only shows from empirical observations that "natural" emergence (i.e. chance and necessity without choice contingency) of the thing in question is implausible (not improbable but implausible). What's wrong with that? Eugene S
Except that they aren't :) Elizabeth Liddle
Elizabeth: I think you misunderstand. You left out the final phrase in your quote: "That is you have to first get to the shores of an island of function before you can look at incremental hill-climbing." I don't think KF is speaking only of OOL here, as you seem to imply. For me, each new protein domain is an island of function, as I have shown many times and as should be obvious from the simple consideration that those domains are completely separated both at the sequence level and at the structure level. That is a very simple observation, but you seem to constantly evade it. Maybe it does not sit well with your fancy ideas of gradual increase of function by differential reproduction due to heritable traits :) gpuccio
Eugene: I agree. But our darwinist friends seem not to catch that. It is maybe sad that we have to spend a lot of time and resources trying to show in a rigorous way that the obvious is obvious, but what can you do when confronted with people who say that it is not obvious at all, that thinking that "what looks designed is probably designed" is a crime against humanity, or at least that it is such a bold statement that it requires infinitely huge evidence to support it, probably in the name of some specious interpretation of Occam's razor?

The problem is at the beginning: these people have decided in advance that their materialist reductionist view of reality is the only smart one. It is dogma, certainly, but it is also some strange form of intellectual arrogance, the opposite of a sincere search for truth. It seems of no relevance to these people that for millennia most deep and intelligent people, including scientists and philosophers, have easily believed the obvious, that the apparently designed functional complexity in the biological world had been designed by some superior intelligence. It seems of no relevance to these people that the amazing complexity of that functionality has been constantly increasing in an exponential way in the last few decades, with the advancements of our technology and of our understanding. It seems of no relevance to these people that the only rational non-design explanation of biological information, the neo-darwinian algorithm, is completely full of logical and empirical holes as soon as you analyze it with a minimum of impartial scientific method.

No, nothing of that matters. Because admitting any part of that would imply introducing some doubt into the materialist reductionist model of reality. It would imply that these people, after all, could not be the only smart people in the world. And they don't like that. Better to name the non-believers "IDiots", or something like that. Please excuse my outburst, but sometimes it's fine to say things as they are. gpuccio
champignon: But dFSCI isn’t applicable to the “RV part” either. The “RV part”, like the “NS part”, doesn’t look for a predefined target, nor does it proceed by blind search. dFSCI assumes both, so it is inapplicable. What is your problem, champignon? I have answered the problem of the "predefined target" in my post 23.1.2.1.1. Have you read it? If you don't like my arguments, please comment on them. And what do you mean by saying that the RV part does not proceed by blind search? First of all, I have never used the term "blind search". Anyway, I can't understand what your point is. A system where RV happens, and is the only kind of variation (the "RV part of the algorithm"), generates new states in a random way. If those new states at some point include the functional state we observe as the final result, we can certainly say that RV has produced that functional result, and compute the probabilities of that explanation. It is a random search, or a random walk, according to the model we imagine, but the probabilities are very similar. As usual, your objections are vague and irrational. And of course evolution is a “necessity algorithm coupled to a random part.” So your argument depends on knowing that the feature in question could not have evolved. But that’s the main question we are trying to answer! Is that an argument? Of course that's the main question we are trying to answer. And I have been trying to answer it in many long and detailed posts. You seem to imply that my reasoning is: dFSCI exists, therefore biological information is designed. It's not that way. If you had the patience to look at my many posts on modelling RV and NS in biological systems, you would maybe admit (but perhaps I am an optimist!) that I have made many more detailed arguments in favor of the design inference. Most of them have to do with the credibility, and possible consequences, of the NS part. Others deal with the main objections to the probabilistic evaluation of the RV part by the concept of dFSCI, including your objections. So, either you go to the same level of discussion, commenting in detail on my arguments, or you just say: I am not convinced, and we remain good friends. gpuccio
KF, The issue is "... to find the target zone". True. But I would also add that the primary issue is how to make nature find anything. The question is, what is it that makes nature "interested" in improving utility? It is cybernetic control steering the system's behaviour through a particular sequence of states. All deliberations about how evolution works must be preceded by a serious discussion of why it should work at all in the first place. There is a huge difference between, say, convection currents or sand dune patterns on the one hand, and an algorithm formally utilising control on the other. Cybernetic control itself is an empirically reliable indicator of purposeful design. Eugene S
"too complex". d'oh. Elizabeth Liddle
Actually, you seem to have missed the key issue, never mind how often it has been stated and explained: before you can have incremental improvement in functionality depending on specific configurations, you have to first get to functionality.
Yes, before you can have Darwinian evolution you have to have a self-replicator. Your case seems to be that the minimal Darwinian-capable self-replicator is two complex to have occurred "by chance". We disagree. We see no reason why the first Darwinian-capable self-replicator could not have been simple enough to have occurred in appropriate conditions on the early earth, although we do not yet know how. But nor do we know enough to rule it out, therefore we cannot infer with confidence that it did not. Hence no "design inference". Elizabeth Liddle
gpuccio,
dFSCI is based on the probability of finding a target by blind search. And I apply it only to the modellyng of the RV part of the darwinian algorithm.
But dFSCI isn't applicable to the "RV part" either. The "RV part" like the "NS part", doesn't look for a predefined target, nor does it proceed by blind search. dFSCI assumes both, so it is inapplicable.
if an object exhibits dFSCI... and if no known necessity algorithm can help explain it, not even coupled to the random part, then design is the best empirical explanation for it...
And of course evolution is a "necessity algorithm coupled to a random part." So your argument depends on knowing that the feature in question could not have evolved. But that's the main question we are trying to answer! champignon
It tells us the probability of that function emerging in a random way.
Well, no, it doesn't. Or rather, as "in a random way" is unspecified, what it tells you is meaningless. You need to construct the probability distribution under your null. That's what I keep asking for, and that's what nobody provides! (Not surprisingly, as it isn't possible in this instance.) Elizabeth Liddle
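To make "construct the probability distribution under your null" concrete: in the simplest Monte Carlo version, one specifies a null model, generates many outcomes from it, and reads the probability of a "functional" outcome off the simulated distribution. The example below is a toy (the null is uniform random strings and the "function" is matching a short motif; nothing here is a model of proteins), but it shows what an explicit null looks like, which is what is being asked for.

import random

random.seed(1)

MOTIF = "ATG"          # toy 'function': the string starts with ATG
ALPHABET = "ACGT"
LENGTH = 8

def is_functional(seq):
    return seq.startswith(MOTIF)

# Null model: every position drawn uniformly and independently from ACGT.
trials = 200_000
hits = sum(is_functional("".join(random.choice(ALPHABET) for _ in range(LENGTH)))
           for _ in range(trials))

print(hits / trials)   # close to (1/4)**3 = 0.0156..., the null probability

The disagreement in this thread is, in effect, over whether any such null model can be specified and justified for the biological case.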
Petrushka: I am obviously happy that Thornton is researching the problem. Could you please point to some result that you consider important? gpuccio
Petrushka (31.1.1.1.1): It tells you exactly nothing useful. It doesn’t establish the history of the sequence, because nothing is done to check whether there are nearby functional sequences. It tells us the probability of that function emerging in a random way. That's all that is required of it. That's all I have ever required of it in the context of ID theory (for another possible utility of dFSCI in another context, please see my previous answer to you). Why now this sudden enthusiasm from you to find other, different functions for dFSCI? :) Establishing the history of the sequence is a task for researchers, not for dFSCI. As you correctly say, dFSCI is not useful for that. It has to be done by other scientific methodologies. And researchers have not been able to establish any "history of the sequence" for basic protein domains. So, as I have always said, I will always take into account the history of the sequence, if and when it is found. If no history is found, I still have to explain the sequence. Please, don't go back to arguments such as: some history could be found in the future. As said many times, for me that is not a scientific argument at all. If it is for you, and if you are happy in defying Popper, I am happy for you. gpuccio
Petrushka (31.1.1): It isn’t necessary for the algorithm to understand what it is doing in order to make new things, only that some of the new things contribute to the efficiency of making new things. It is not necessary for the algorithm to recognize function or be consciously aware of anything. It is only necessary that it make assemblies of things that haven’t existed before. No. It can make new functional things, but it is the conscious observer that will understand that they have a function, and will use them in the right context. Simply "generating" a potentially functional object is of no use, unless the object is recognized as functional and used to implement its function. IOWs, in an algorithm a tree can fall across a stream, but nobody would cross over it. Any non-trivial organization of functional objects needs much more than just the existence of the object. The object must be used to implement its function, usually in a complex context. A functional sequence generated by an algorithm without recognition of its possible function will be, for the algorithm, a simple output, similar to any other. If it has not been programmed to recognize that function, it will never know that it has generated a functional object, and it will never put that object to any use. That's why we use algorithms that can find functions that we can recognize and use: a conscious judgement is both at the beginning and at the end of the process, and gives sense to it. Your other comments are correct: dFSCI is not useful for those things. Its function is another, and I have shown it. Indeed, I have shown two kinds of function for dFSCI: being an empirical marker of design, and helping to evaluate the structure-function relationship of specific proteins. The second is possible only because, contrary to what you say, dFSCI is not "a simple transform on sequence length". Sequences of the same length can have very different dFSCI, and it will be very interesting to study how those "length-adjusted" differences relate to different kinds of proteins and of protein structure and function. gpuccio
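One way to see why dFSCI, as described here, is not a simple transform of sequence length: if it is estimated as minus log2 of the fraction of same-length sequences that implement the defined function, then two proteins of identical length can score very differently depending on how tightly the function constrains the sequence. The numbers below are invented purely for illustration.

import math

def dfsci_bits(functional_fraction):
    # Functional information, in bits, for a defined function: the negative
    # log2 of the fraction of sequences (of the given length) that work.
    return -math.log2(functional_fraction)

# Two hypothetical 100-residue proteins of the same length:
tightly_constrained = 2.0 ** -300   # assume 1 working sequence in 2**300
loosely_constrained = 2.0 ** -40    # assume 1 working sequence in 2**40

print(dfsci_bits(tightly_constrained))  # 300.0 bits
print(dfsci_bits(loosely_constrained))  # 40.0 bits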
kairosfocus,
...before you can have incremental improvement in functionality depending on specific configurations, you have to first get to functionality. That is you have to first get to the shores of an island of function before you can look at incremental hill-climbing.
You are assuming that the search space consists of separated islands of function, but you haven't justified that assumption. See this.
the issue is not to hit a particular target outcome, but to find the target zone of outcomes giving a particular function, in a wide field of possible configs, W.
No, evolution doesn't care about any particular target, any particular target zone, or even any particular function. It will exploit any fitness-improving function it stumbles upon.
W is sufficiently large to swamp the available blind — non-purposive — resources of the solar system or the observed cosmos.
Yes, if you tried to search W exhaustively. But evolution doesn't do an exhaustive search, so the criticism doesn't apply.
We do not need to calculate a precise probability to see that on sampling theory, we are looking at a needle in a haystack search challenge here, an insuperable one on the available resources in our solar system and — at a 1,000 bit threshold, the observed cosmos.
Again, it's not a needle-in-a-haystack search since there is not a single pre-specified target. Given all these flaws, can the concept of CSI/dFSCI/Chi_500 be salvaged? I don't think so. To do so, you'd have to characterize the search space in terms of all possible functions, not just one. So the "target zone" would be all functional sequences. That's clearly impractical, if not downright impossible. And even if it were possible, it wouldn't be enough, because the issue is not the relative size of the target zone and the search space -- that would be true only if we were doing a blind search. Evolution works by transitioning through successive functional intermediates, not by exhaustively sampling the search space. The important question is how well-connected the functional space is, not the ratio of target zone to search space. Besides all these problems, CSI/dFSCI/Chi_500 all leave the most important question unanswered: Given some complicated biological feature X, could X have evolved? The debate remains exactly where it was before the concept of CSI was introduced. CSI has contributed nothing. champignon
KF: Thank you. I always appreciate your support and contributions. gpuccio
Petrushka: As I have said many times, dFSCI is computed for each explicitly defined function. Let's take your example, and let's discuss a similar specific case, like S hemoglobin selected by malaria. Here you can see clearly how I can define a function for hemoglobin that has high dFSCI (all the functional complexity needed to have a molecule with the biochemical properties of hemoglobin). At the same time, I can define a function like: conferring to a human being some genetic protection from malaria. The dFSCI of this function is very low (4.3 bits, because a single AA substitution in the hemoglobin molecule can do the trick; indeed, it is lower still, because other mechanisms can probably get a similar result). So, as you can see, there is no problem. You can always compute dFSCI for a digital functional object, and for a defined function. Sometimes it is easier, sometimes it is more difficult. But it can always be done. gpuccio
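A minimal sketch of where a figure like 4.3 bits can come from, assuming (purely as an illustration) that the target is one specific amino acid at one already specified position, i.e. one functional state out of the 20 possible residues:

\[
-\log_2\!\left(\frac{1}{20}\right) = \log_2 20 \approx 4.32 \ \text{bits}
\]

As the comment above notes, the real value would be lower still if several different substitutions, or other mechanisms, confer the same protection.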
lastyearon: Sometimes the behavior of warm moist air is to rotate at high velocity. When this happens, we call that a tornado. Does that mean that the function of the warm moist air is to rotate fast? You can certainly define a function of "rotating fast". Many systems will be able to implement it. Most of them are simply algorithmic, and can be explained from the laws of physics, sometimes with some random seed. Nothing of that implies a complex function. You still don't understand that, according to my definition, you can define any function you like, for any object you like. It has no implications per se. The only requirement is that you can describe it objectively, and measure it (for instance, you can measure the rotation). It's only functions that require high complexity that allow a design inference. Before you scoff at the analogy between a tornado and a living entity, tell me what the difference is between the enzyme in your example and the warm air in mine, without assuming your conclusion that life is the result of intelligent design. I am not scoffing at anything: you are correct, there is no difference between a tornado and a living entity: you can compute dFSCI (or simply FSCI, if the object is not digital) for both systems. My procedure does not require living entities: it can always be applied. Obviously, as I have already said, the tornado does not exhibit FSCI, because its function, however defined, can easily be explained by the laws of physics and some perfectly accessible random variation. gpuccio
Scott: Well, I am happy you see what the problem is, unless we are very very careful in our terminology and definitions :) gpuccio
champignon: Having reread a bunch of KF's and gpuccio's posts and comments, it's sinking in that the "dFSCI" and "Chi_500" metrics are nothing but repackaged versions of Dembski's "CSI" metric, retaining all the flaws of the latter. There is no doubt that the essential ideas are the same as in Dembski. I have never said anything different. But it is also true IMO that there are some small, but important, differences in my definitions and procedures, and in my specific application of them to the biological case, that allow me to make a more complete and detailed argument for biological information. I must say that you have probably not reread enough of what I recently wrote, or you have not understood it, otherwise you would not make the points you do in the rest of the post. 1. CSI is based on the probability of finding a target by blind search. Evolution does not work that way. Answer. dFSCI is based on the probability of finding a target by blind search. And I apply it only to the modelling of the RV part of the darwinian algorithm. I consider and model NS separately. See for instance my last post in the series linked many times in the recent discussion. For your information, I quote here from my post 23.1.2.2.11, which quotes from a previous post, all of them in response to you: I have said clearly, in response to you (my post 23.1.2.1.1): "dFSCI is used in my analysis only to evaluate the possibility (or empirical impossibility) that a certain functional result may have emerged in a random way. The necessity part of the neo darwinian algorithm, NS, is always present in my discussions, but it is evaluated separately. You can look, if you want, at my posts here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/comment-page-2/#comment-413684 (posts 34 and following)" So, how many times must I say one thing in response to you for you to read, or understand, it? In particular, you will find a very detailed answer in the first of the quoted posts, 23.1.2.1.1. 2. CSI assumes there is a specific target with a specific function. Evolution doesn't aim at specific targets. Any variation that improves fitness will be favored by natural selection, regardless of what its function is. Whatever works, works as far as evolution is concerned. Again, I have fully answered that point in the same answer to you quoted above, post 23.1.2.1.1. Please, reread that post and answer those arguments, if you want. We have to give some order to this discussion, if we want to proceed. 3. CSI assumes that design is the default. In other words, if "chance and necessity" can't be demonstrated, in detail, for a particular phenomenon, then Dembski claims that we are entitled to assume design by default. Again, I don't make the point in the same way. Dembski in some way starts with the assumption that chance, necessity and design are the only possible explanations of an event. While that is certainly true, still I don't assume it. I rely completely on the concept of empirical explanation, not of logical explanation. So, my reasoning is very simple: if an object exhibits dFSCI in a specific context and with an appropriate threshold (which rules out its emerging in a random way in that system), and if no known necessity algorithm can help explain it, not even coupled to the random part, then design is the best empirical explanation for it, unless someone can explain it in a different way. 
So, any credible and reasonable explanation that is not chance, necessity, a mix of the two, or design is definitely welcome. Do you want to try? :) 4. Dembski glosses over the fact that if he wants CSI to definitively indicate design, what he calls "the chance hypothesis" has to encompass far more than mere blind search. It has to include all non-design ways in which the phenomenon in question could have arisen. He is not entitled to assume, for example, that evolution could not have produced the phenomenon. That, after all, is the very question that CSI is supposed to help us answer! Again, that is not a problem in my reasoning, if you have read my posts. As repeatedly said, I use dFSCI only to model the probabilities of getting a result in a purely random way, and for nothing else. All the rest is considered in its own context, and separately. In my last post in the referred series, I also show how it is possible to compute a final probability by assuming an important selection event for an intermediate. So, instead of drawing unwarranted conclusions, please stop criticizing Dembski, and go back to criticizing me, because in this moment I am your interlocutor. gpuccio
Ch: RE: CSI is based on the probability of finding a target by blind search. Evolution does not work that way. Actually, you seem to have missed the key issue, never mind how often it has been stated and explained: before you can have incremental improvement in functionality depending on specific configurations, you have to first get to functionality. That is you have to first get to the shores of an island of function before you can look at incremental hill-climbing. the issue is not to hit a particular target outcome, but to find the target zone of outcomes giving a particular function, in a wide field of possible configs, W. W is sufficiently large to swamp the available blind -- non-purposive -- resources of the solar system or the observed cosmos. Doesn't matter if there are many such islands of function, T1, T2, T3 . . . Tn, and whether there are possibilities that they may even move about a bit, the problem is to find deeply isolated islands of function in vast config spaces, where the resources to sample the spaces without foresight and purpose, are going to have you sampling fractions of order 1 in 10^48 or less. We do not need to calculate a precise probability to see that on sampling theory, we are looking at a needle in a haystack search challenge here, an insuperable one on the available resources in our solar system and -- at a 1,000 bit threshold, the observed cosmos. In short, you have set up a strawman based on begging a crucial prior question, and knocked it over imagining that you have dismissed the problem. In addition, you have a much more direct challenge to answer to: as a practical matter, on quite reasonable steps, we can get values of Chi_500. Consistently, where such values exceed 1, the items have been designed, even when the design is not recognised by the one who posed the challenge. In short, we have massive empirical confirmation of the empirical reliability of FSCI as an index of design. When that happens, you then have the problem of providing a counter-example to overturn the base of empirical observations that establishes an inductive generalisation. Thirdly, you have a challenge in your confident declarations of how "evolution" works. That implies a claim to know something as a matter of empirically observed fact. But, while we know about micro cases of adaptations of existing functional body plans, we have no observational knowledge of the origin of body plans by darwinian chance variation and differential reproductive success in observed cases. All is inferences, piled on top of what now seems to be an a priori materialism that blocks consideration of an otherwise credible alternative. To focus the matter specifically, kindly tell us the observational basis for explaining the origin of the avian lung on chance variation and natural selection, that allows us to say that this is accounted for as an empirical fact on darwinian evolutionary mechanisms. Or, failing that, kindly provide same for a similar complex integrated body plan element or organ system. If you cannot do that, you cannot simply extrapolate from small scale variations within a body plan, to the origin of such a body plan. GEM of TKI PS: It is not just that X may be complex, but that X is dependent on a functionally specific config of specific components, leading to a case of functionally specific complex organisation and related information (often not just configurational but actually prescriptive, algorithmic info in a code). 
That is what gets you to a question of observing cases E from a zone of functionally specific complex organisation T, in a field of possibilities W such that the zones T are islands of function in vast seas of non-function; insuperable for blind searches on the gamut of our observed cosmos. To see what this is about, kindly go to comment 1 here in this parallel thread, and ponder the nanobots thought exercise that looks at in effect origin of life and origin of novel body plans on altering of prescriptive configurational info. What has been happening on this subject since Darwin is, "how dare you suggest" that it is a possibly insuperable challenge to the power of chance and necessity to find such zones of function, without providing actual observed cases in point. The hiving off of OOL as a separate challenge is in fact the most revealing single illustration of the problem. With no chance + necessity only explanation backed up by direct observational warrant, there is no root to the tree of life used since Darwin, and in addition, we know the fossil record is one of sudden appearances, stasis, and gaps, across the world of life. kairosfocus
UB: Sorry, but I may be missing the deeper question. What do you think it is? Do you notice, that I am saying that this is a research issue to build up a base of key cases, to be extended incrementally? We can investigate by noise bombing, where the credible borders of zones of specific function T in wider config spaces W lie. We can do so using accepted and known methods or at least reasonable ones, and we can use the equivalent of a jury of peers to help us define objectivity. Already, we know that something like Pinatubo is not in itself based on functional -- especially prescriptive -- information [an algorithm carried out by machines], but on the dynamics of plate tectonics and weak spots in the crust, so that hot magma intrudes and builds an edifice. The geology makes it an explosive type volcano, I do not know if it is in the andesite or dacite range of SiO2, probably andesite. As the current thread on the sequence diagram for the ribosome shows, proteins ARE based on prescriptive, functional info used by machines to assemble a product. It so happens that this marvel of miniature manufacturing is in the living cell. There is but one known source of that sort of system architecture, design. So much so that an unbiased mind would infer to that explanation as the obvious default. To explain it as otherwise caused, one needs some observational warrant to show that forces of chance and necessity acting under reasonable initial circumstances can produce codes, algorithms, effecting machines, storage media with the codes, maintenance machines to keep the codes intact, and can then put the products to work in required contexts separate from the manufacturing process -- note here the kinesin walking trucks and the network of internal microtubule "highways" that allows dispatching of proteins to sites. When we look at the actual arguments being presented, we are not being presented with empirically verified mechanisms, that have been shown capable of giving rise to such on chance and necessity, but too often the imposition of ideologically loaded redefinitions of science based on a prioris. Indeed, question-begging (which is the context of the attempts to play at turnabout and project question-begging unto design thought). Let's go back to basics: design theory is the scientific investigation of signs in nature that, per induction on observation, reliably point to design. FSCO/I happens to be one such, especially in the context where we have coded algorithms that are physically instantiated using appropriate effecting machines. The number of cases where such has been seen -- observed -- emerging spontaneously from matter and energy being shaped by blind chance and necessity is . . . ? ZERO The number that such has been seen emerging from design is: thousands, with copies all around us. So, we have a credible empirical sign that we have every epistemic right to see as reliable, and as pointing to design. As of such, it is those who object who need to show otherwise. Science, after all, is not mathematics, and does not essay to provide proofs beyond doubt or even reasonable doubt [to moral certainty], but on the preponderance of inductive evidence leading to empirically reliable results. KF kairosfocus
LYO: Kindly, look at an animation of ATP synthase in action, or the flagellum, or kinesin, or the protein synthesis process. These animations are based on scientific evidence, and objectively manifest functions that are based on complex, specific organisation of parts tracing to digitally coded, stored information. Or, are you doing the equivalent of putting scare quotes around terms because to acknowledge that we really are dealing with codes, information, and complex functional organisation makes the matter of what best explains such all too plain? P: It is possible for an entity to have two possible functions, and for the complexity to achieve the one to be different from the complexity to achieve the other. A book can serve as an information device, as fuel for a fire, or as a door stop if heavy enough. It can even prop up something. That something may be adapted to a different function, even with modest changes, does not explain the key issue: the origin of complex function. Going further, proteins serve operational functions in organism A; which on being consumed by organism B, makes the proteins serve as food. Third, a protein may indeed serve as an enzyme for a type X context and with slight modifications it will serve for type Y, e.g. it now can digest nylon. But, by highlighting effects of modest changes (and recall, nylons are not that far from naturally occurring molecules) we have not accounted for the origin of a coded, assembled complex entity that folds to key-lock shape, then fits into the processes of the metabolic etc. networks of the living cell. All this has done is to distract from the question: how does one get to the shoreline of an island of function. EL: A thermometer in a landfill is still indicating temperature on a scale, provided it was not broken. Just, there is no-one to read it. LYO: We can define various functions for a tornado or a hurricane, but these, per the explanatory filter, are explained on forces of chance and necessity and are not information-bearing or based; these are low contingency objects -- the circumstantial conditions at initiation and along their paths determine behaviour on mechanical necessity and bulk properties emerging from the statistics of molecules; by contrast with what happens when we toss 500 coins and they come up at random, or if we set the coins to read out the ASCII code for the first 72 characters of this post. It is when we measure parameters associated with the hurricane or tornado that we generate information about its properties as it exists. On that information, we may then model the tornado or hurricane in action. By sharpest difference, the sort of complex, specified function we are looking at is highly contingent, i.e. under given initial conditions, a large number of possible and distinct outcomes are possible, similar to tossing a die or setting it to read a value. I am beginning to get the impression that definition games are being played to be evasive. The problem with such games is that they are self-defeating. NOTHING can be defined completely; that is obvious from how in mathematics we trace back to primitives accepted as intuitive without further specific definition. Otherwise we end with an infinite regress and can go nowhere, or run in circles and go nowhere. 
Before we get that far, we have any number of key concepts that we have to work with, for which we have to accept definition by examples and family resemblance, with perhaps a general description of the sort of thing, as opposed to a precising definition that states necessary and sufficient conditions. For instance, kindly define life in a comprehensive way. No such definition exists, but we live with the limitations of pointing to key examples and giving generic descriptions, with family resemblance. So, to make the sort of objections that seem to lurk under much of what is above, is to be selectively hyperskeptical. Perhaps that is inadvertent: not having had to push the limits back in other fields, and not being taught that this is a general problem and limitation on our analysis, you are looking at a case you are inclined to challenge and see a general problem as specific to that case. Only, it is not, so it is wise not to saw off the branch on which one sits. We cannot come up with a comprehensive definition of life that will cover the cases neatly and sweetly. Does that mean that biology is therefore invalid, and life is an illusion? The question answers itself, and points out that we are at the threshold of yet another self-referential absurdity, if we were to be consistent in our objections. So, yes, in the end we have limits on our definitions and models. What's new about that? Does that mean that the concepts, models, and measurements we see are meaningless? If you argue yes, that points back to the issue that we are living, and life is a concept that defies definition and measurement. Are you prepared to argue that life is meaningless and useless? KF kairosfocus
GP: Very well made and put point. KF kairosfocus
Thanks for that explanation, GP. What usually comes to my mind when I think of algorithmically generated complexity is pseudo-random number generators. A very simple example is this function: X_(n+1) = (aX_n + c) mod m. This linear congruential generator will produce a sequence of up to m numbers before repeating. Seeming complexity from a simple algorithm, yet nothing functional beyond the algorithm itself. material.infantacy
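A minimal runnable sketch of that generator; the constants a, c and m below are a common textbook choice, not anything taken from the comment above:

import itertools

def lcg(seed, a=1103515245, c=12345, m=2**31):
    # Linear congruential generator: x_{n+1} = (a*x_n + c) mod m.
    # The entire output stream is fixed by (a, c, m) and the seed,
    # and the period can never exceed m.
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

sample = list(itertools.islice(lcg(seed=42), 5))
print(sample)  # looks irregular, yet is completely determined by a few lines of code

The output can be extended indefinitely, but nothing about the generator itself grows: the whole apparent complexity of the stream is the short program plus the seed.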
My point is that dFSCI adds no information that is not found in the sequence. It just adds a bit of mumbo-jumbo to hide that fact. It says absolutely nothing about the history of the sequence because it includes no information about the functionality of nearby sequences. If it doesn't say anything about sequence history, it says nothing useful. You can't claim a sequence is isolated if you haven't done the research. The people doing this kind of research include Thornton. Petrushka
It doesn’t tell a designer how close he is to having a functional sequence.
Why would anyone do that? If I'm designing something, I can't imagine a less useful way of determining attainment of success than measuring my own dFSCI. I can't get my head around what you're picturing. Someone is designing a protein for a specific purpose, and they can't tell if it's done or not, so they attempt to measure its dFSCI? I understand: knowing whether a designed protein will perform a particular function is impossible, because no one knows how, and that's the definition of impossible.
You don't bother checking to see how or why it's functional, or whether it's useful or detrimental
Do you have a specific example in mind of someone calculating dFSCI for something with no apparent function? Do you not consider at least some of the functional-looking things that may or may not be functional going on within your own cells to be useful? I might as well run with this. Do you consider yourself functional, or an illusion of function? If confronted with the possibility that IOFs (Illusions Of Function) within your own cells might imminently cease, would you care? If our cellular processes are not truly functional, then are we? Are the functions we design actually functional? Is there any such thing as functional? There's a pattern here. Rather than explain any of the amazing things that demand explanation, the goal is to level them, reduce them to ordinary. They display the illusion of function, design, and purpose, but when we look past the illusion they are no more remarkable than rocks or water. It's late, so I may be rambling. But I perceive a borderline Orwellian objective to brainwash everyone to believe that nothing which appears special or purposeful actually is. It's like going to war with nothing but rocks, so you try to convince the other side to lay down their guns and pick up rocks to even the field. Among our big guns is our ability to distinguish the functions that facilitate our own existence from non-function. You might just have a fighting chance if you could convince people to lay down their guns, to deny their own thinking ability. This is nothing short of mental darkness, and our first line of defense is not ID, but to listen to our own voice of reason telling us that crap doesn't just come together for no apparent reason and put on a song-and-dance show of appearing to have function it really doesn't. It's too complicated to have evolved. I said it and I'm proud of it. ID is a lifeline thrown into a pit for people who don't really want to get out of the pit. If it's foolish, that's why it's foolish. If people wanted a lifeline out of the pit they wouldn't be trying so hard to dig deeper and pull everyone else in. Sorry for venting. And despite the tone my intent is not to offend. It's late in the US and Europe so perhaps no one is up to read this anyway. ScottAndrews2
As promised, my reasons for rejecting CSI (and dFSCI and Chi_500, which are really the same) as a reliable indicator of design: 1. CSI is based on the probability of finding a target by blind search. Evolution does not work that way. 2. CSI assumes there is a specific target with a specific function. Evolution doesn’t aim at specific targets. Any variation that improves fitness will be favored by natural selection, regardless of what its function is. Whatever works, works as far as evolution is concerned. 3. CSI assumes that design is the default. In other words, if "chance and necessity" can't be demonstrated, in detail, for a particular phenomenon, then Dembski claims that we are entitled to assume design by default. 4. Dembski glosses over the fact that if he wants CSI to definitively indicate design, what he calls "the chance hypothesis" has to encompass far more than mere blind search. It has to include all non-design ways in which the phenomenon in question could have arisen. He is not entitled to assume, for example, that evolution could not have produced the phenomenon. That, after all, is the very question that CSI is supposed to help us answer! It's quite pitiful, when you think about it. Ever since Darwin, people have tried to argue that "X is really, really complicated; I'll bet it couldn't have evolved." CSI was supposed to give us a reliable way of identifying design. Instead, Dembski has simply defined CSI to stand in for "really, really complicated" in that 'argument', where "really complicated" means "couldn't have been found by blind search." Well, duh. The real question, both before Dembski and after, is "could it have evolved"? CSI (and dFSCI and Chi_500) have changed nothing. champignon
Oops. Accidental post. I'm still writing; I'll post the full thing soon. champignon
As promised, my reasons for rejecting CSI (and dFSCI and Chi_500, which are really the same) as a reliable indicator of design: 1. CSI is based on the probability of finding a target by blind search. Evolution does not work that way. 2. CSI assumes there is a specific target with a specific function. Evolution doesn't aim at specific targets. Any variation tha champignon
OK, so you start with something functional. You don't bother checking to see how or why it's functional, or whether it's useful or detrimental, or whether there are nearby sequences that might be more useful, and you for certain don't have any clue as to its history, and you establish the sequence length and do a bit of massaging on the sequence length, and you have dFSCI. It tells you exactly nothing useful. It doesn't establish the history of the sequence, because nothing is done to check whether there are nearby functional sequences. It is less than useful for any potential designer, because the dFSCI of a functional sequence with one base pair changed is likely to be zero. It doesn't tell a designer how close he is to having a functional sequence. So basically it is sequence length times two point something. Petrushka
Petrushka, You are correct in that we do not take a tool used for one thing and then try to use it for something totally different. dFSCI is for determining design in an observed object/structure/event. That is it. Joe
Well you had better have some evidence to go along with your "explanation". Ya see we do have evidence of intelligent agencies producing CSI. What we don't have is any evidence for stochastic processes doing so. Joe
The pi analogy is intriguing, but chemistry is constructive. An algorithm that assembles new chemical combinations can make things that never existed before the algorithm. It isn't necessary for the algorithm to understand what it is doing in order to make new things, only that some of the new things contribute to the efficiency of making new things. It is not necessary for the algorithm to recognize function or be consciously aware of anything. It is only necessary that it make assemblies of things that haven't existed before. Whether this is possible is an empirical question, not something that can be decided by mathematics or philosophy. dFSCI is completely lacking in heuristic utility. Since it can only be applied to objects known to have function, it cannot be an aid to design, as can other physical metrics. It doesn't tell you how close you are to function; it doesn't tell you anything about the neighborhood of function, or whether slight changes would increase or decrease function. It's just a simple transform on sequence length. It doesn't say anything about utility. It doesn't say anything about multimodal utility -- whether a functional sequence might have other functions in other contexts. It doesn't tell you if a reduction in function might provide a rise in utility. Petrushka
Having reread a bunch of KF's and gpuccio's posts and comments, it's sinking in that the "dFSCI" and "Chi_500" metrics are nothing but repackaged versions of Dembski's "CSI" metric, retaining all the flaws of the latter. I understand why eigenstate felt bamboozled. There's nothing new here, and certainly nothing that can objectively identify design. I'll explain what's wrong with Dembski's CSI concept (and dFSCI and Chi_500, since they're really the same thing) later today when I have more time. champignon
material.infantacy (and Petrushka): An example that I like very much, and that clarifies a little the problem of dFSCI, is the case of an algorithm computing the decimal figures of pi. Now, here we have a specific mathematical function (giving the ratio of a circumference to its diameter). The function remains the same, but the complexity of the output can apparently be made as large as we like by adding new figures. But is it really so? In reality, the true Kolmogorov complexity of the system is the complexity of the algorithm (and we should probably add the complexity of the minimal system that can run it). At that point, even if the system outputs 10^200 figures, the dFSCI cannot increase. That's one reason I say that algorithms do not really output new dFSCI: indeed, if an algorithm can really explain an output, the complexity of the output becomes the complexity of the algorithm (at least, if the second is lower). But there is another reason why algorithms cannot output new dFSCI. Algorithms cannot conceive of a new function, nor recognize it even if it were randomly created. The conception and recognition of function is a prerogative of conscious beings, because it requires a feeling, and therefore a purpose. It's not by chance that one of our interlocutors, in a recent post, has said that function defies formalization. It is true. So, an algorithm can treat as function only what has been indicated as function in the programming. Or it can find new functions by random search or by computing, but if the function is really new, if no reference to it as a function, not even indirect, has been given to it in the programming, it will never recognize that function as a function. So, if we define new dFSCI as new complexity supporting a new function, algorithms will never output it, not only for complexity reasons, but also because they cannot recognize a new function. To do that, they would need exactly what is impossible: a universal formalization of the concept of function, that would allow them to recognize a function wherever it emerges. But who will explain to an algorithm what it means that "something is useful to achieve something else"? Who will program into it what it is like to have a purpose, not the simulation of a specific purpose, but a generic purpose, a desire, something that gives joy and cognitive satisfaction? gpuccio
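To make the pi example concrete, here is a minimal sketch of such an algorithm (Machin's formula evaluated in fixed-point integer arithmetic; the function names are just illustrative). The program stays about a dozen lines long no matter how many digits it is asked to emit, which is the point about the output's complexity being bounded by the algorithm's:

def pi_digits(n):
    # pi = 16*arctan(1/5) - 4*arctan(1/239), computed with n + 10 guard digits.
    def arctan_inv(x, unity):
        # arctan(1/x) * unity via the alternating Taylor series, in integers.
        total = term = unity // x
        x2 = x * x
        k = 3
        while term:
            term //= x2
            total += term // k if k % 4 == 1 else -(term // k)
            k += 2
        return total
    unity = 10 ** (n + 10)
    pi = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return str(pi)[:n]

print(pi_digits(8))  # "31415926"

Asking for 10,000 digits instead of 8 changes nothing about the size of the program, only about how long it runs.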
Petrushka: I post again here my post 23.1.2.2.19, in case you have not read it: "Petrushka: On another thread, I have given the link to the Wikipedia page about probability distributions, 18000 characters long, as an example of text that certainly has more than 1000 bits of functional complexity, according to my demonstration, and therefore allows a safe design inference. How do you believe that text was written? By evolution?" By the way, this is the link: http://en.wikipedia.org/wiki/Probability_distribution gpuccio
material.infantacy: Thank you, I appreciate very much. When I say I appreciate deeply your contributions, I am not just being kind. Your analysis of some of the most common communication strategies of our most kind interlocutors is impeccable. I am looking forward to your comments on those posts of mine, where maybe I have suggested some new ideas: having feedback, both from friends and "adversaries", is certainly the best way to test them. gpuccio
"The 10^150 barrier is routinely breached by GA software. It depends on the attributes of the space, not on the size."
Sure, as it is breached by more efficient algorithms as well. But my comment was in reference to the traversal of 10^150 sequences, not whether you can find a known sequence in a sorted list, semi-sorted, or otherwise non-randomly arranged search space. Give me a target sequence, and a sequentially sorted list of 10^150 sequences (more realistically a lexicographically indexed algorithm for determining sequences) where the sequence relationship can be modeled with an a < b function, and I'll find it in 500 attempts or less. That's because a binary search on a sorted list reliably performs at O(log2(n)). A genetic algorithm that doesn't employ a binary search strategy (or even one that does) will likely not do as well. However, generate the sequence distribution randomly, and provide only binary feedback via the fitness assessment, and you're stuck making on the order of 10^150 attempts to have a better than even chance of finding the function. "Anticipation denotes intelligence." Genetic algorithms that reduce search spaces, or employ knowledge about a target, denote intelligence. So on the one hand, we have blind processes, which themselves have no known mechanisms for sorting, or for non-random searching -- because they don't know what they're searching for. On the other hand, we have intelligence, which is known to cull vast search spaces by way of artifice, to find desired solutions. If we observe effective algorithms employed within the mechanisms of variation and reproduction, which perform better on average than a random search, then we have indications of intelligence. It is those very mechanisms which require explanation. Marveling at the capabilities of complex, integrated, irreducibly complex functionality, expressed as a self-contained information processing, self-replicating system, and then slapping the "evolution" label on the capabilities of that system, doesn't tell us anything about what blind processes are capable of, and it certainly provides no clue as to the origin of core necessary proteins. material.infantacy
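A minimal sketch of the "500 attempts or less" point, assuming exactly the situation described above: an ordered space with an a < b comparison available, and an arbitrary target value. Python's arbitrary-precision integers let the index range itself be 10^150:

import math

def binary_search_range(lo, hi, target):
    # Bisection over the sorted integer range [lo, hi), counting comparisons.
    attempts = 0
    while lo < hi:
        mid = (lo + hi) // 2
        attempts += 1
        if mid == target:
            return mid, attempts
        if mid < target:
            lo = mid + 1
        else:
            hi = mid
    return None, attempts

space = 10**150                 # size of the (conceptually sorted) space
target = 3**314 % space         # an arbitrary index to locate
found, attempts = binary_search_range(0, space, target)
print(found == target, attempts, math.ceil(math.log2(space)))  # True, attempts around 499 or fewer

Since ceil(log2(10^150)) is about 499, the comparison count stays within the stated bound; take away the ordering, or reduce the feedback to hit-or-miss, and no such shortcut exists.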
23.1.2.2.23
Evolutionary algorithms are the only process known to be able to navigate huge search spaces.
I guess you mean efficiently navigate. Could you elaborate on this please. What about this? Eugene S
P:
The 10^150 barrier is routinely breached by GA software. It depends on the attributes of the space, not on the size.
Just a quick reminder: 1 --> GA's are intelligently designed, and start within islands of function described by the fitness function. 2 --> More or less, they then do some hill-climbing, and in doing so 3 --> they express intelligently built-in info. So 4 --> they are not creating de novo FSCI out of chance and necessity. ____________ In short, this is not a proper answer to the challenge, as has been pointed out over and over, already. Did you hang around for Mung's vivisection of Schneider's ev when the sock-puppet Mathgirl was pushing that one? KF kairosfocus
It depends on the attributes of the space, not on the size.
So you do realize this. Does it not follow that a GA searching one space is no indication of whether one could search another? The very act of writing a GA requires that one must already have an expectation of what sort of solution is expected. It follows, then, that the ability of a GA to achieve one solution has no bearing on whether it could achieve another. The "moves" by which the GA traverses a space are limited to the options given it by its designer. Take the traveling salesman GA as an example. Will it ever determine that a given city on its route is inaccessible? Never. That each city is accessible is a given. The programmer of the GA could deliberately simulate a city that the traveling salesman cannot reach, such as one situated on the moon, but why bother? The conclusion that the GA will not reach it is foregone, just as is the conclusion that the GA will reach all reachable cities. This makes two things clear: First, observing that a GA traverses the space it was designed to traverse is not an indication that another space, or every imaginable space, is traversable by some sort of GA. Second, that its designer must provide it with every possible move or alteration it can make indicates that regardless of how many possibilities are available to it, it can never, ever innovate. The traveling salesman GA will never invent a telephone to call customers rather than visit, or build a bulldozer to pave new roads, or a rocket to visit a city on the moon. Unless, that is, the programmer has already included such options, in which case the GA still is not innovating. A GA may arrange a circuit according to the possibilities given it to find the most surprisingly effective configuration, perhaps exceeding "direct" human design, but it will never do something other than design that circuit. That's also why my Roomba cleans the floor more efficiently than I would but has yet to climb the stairs or dust the ceiling fan. So why would anyone think that the ability of GAs to traverse the space they were specifically created to traverse is an indication that another space is traversable, or that an evolutionary process would find some innovative way to reach beyond the space it is able to traverse? This seems so obvious that I'm astonished that anyone would even bother to bring them up in relation to biology. Their pre-specified capability (which requires both intelligent design and a purposeful expectation), and the observation that they never surprise anyone by doing something they weren't intended to do, make exactly the opposite of the case that you would wish. The only hope for biological evolution is that it would behave differently from a GA. So why use them as an example? ScottAndrews2
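A minimal sketch of the kind of travelling-salesman GA discussed above. The city layout is random and the whole move repertoire is the two operators written into the program; nothing here comes from any particular GA package:

import random

random.seed(1)
CITIES = [(random.random(), random.random()) for _ in range(12)]  # illustrative layout

def tour_length(tour):
    # Total length of the closed tour over the fixed city coordinates.
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def mutate(tour):
    # The only variation operator provided: swap two positions in the tour.
    i, j = random.sample(range(len(tour)), 2)
    child = tour[:]
    child[i], child[j] = child[j], child[i]
    return child

def evolve(generations=500, pop_size=50):
    pop = [random.sample(range(len(CITIES)), len(CITIES)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=tour_length)                  # selection by fitness
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=tour_length)

print(round(tour_length(evolve()), 3))

Every improvement it finds is a permutation of the cities it was given, reached through the swap operator it was given; the search space, the fitness function and the move set are all fixed in advance by the programmer.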
I can make this clear by a direct analogy: The model for a disconnected sequence space is a cryptogram. There is no way to navigate incrementally to a solution. So GAs and evolutionary algorithms cannot break modern encryption. Only brute force. We consider a 128 bit key to be pretty safe; brute force is the only possible attack, and at some key size the resources of the universe cannot break it. When you argue that DNA sequence space is disconnected and cannot be navigated incrementally, you are saying it is equivalent to a cryptogram. You then assert that something called "intelligence" can break it. I'm sorry, but I don't buy it. I am not aware of anything intelligence brings to the table that enables breaking a modern cypher. Older cyphers are connected spaces and can trivially be broken by GAs. So the problem for both evolution and ID is to characterize sequence space. Douglas Axe has noticed this and has made what I consider to be an interesting attempt. I don't buy his conclusions, but I accept his characterization of the conceptual problem. Petrushka
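Some rough arithmetic for the "resources of the universe" point, assuming a purely hypothetical rate of 10^12 keys tested per second:

\[
2^{128} \approx 3.4 \times 10^{38} \ \text{keys}, \qquad
\frac{3.4 \times 10^{38}}{10^{12} \ \text{keys/s}} \approx 3.4 \times 10^{26} \ \text{s} \approx 10^{19} \ \text{years},
\]

so under that assumption exhaustive search is already far out of reach at 128 bits, which is the sense in which key size rather than cleverness is what ultimately defeats brute force.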
So we make an inference to design, because a function in a huge sequence space can only be found if it’s known to exist, or that it likely exists, or that it possibly exists. Intelligence is the only empirical entity which is known to cull vast sequence spaces in order to effectively reduce search times.
Evolutionary algorithms are the only process known to be able to navigate huge search spaces. The ID movement attributes magical capabilities to "intelligence" -- the ability to see function in sequences that are completely disconnected. That is simply magic, and I challenge you to demonstrate a non-magical way this can be done. I'll provide a clue: Behe doesn't think it can be done, nor does Douglas Axe. Behe says the designer is god. Axe says there is no shortcut to coding sequences. It really boils down to this: sequence space can be navigated by some combination of the 47-odd types of genetic change plus the several types of selection, or it cannot. If it cannot, then design by a non-omniscient designer is hoist with the same petard of big numbers. Petrushka
The 10^150 barrier is routinely breached by GA software. It depends on the attributes of the space, not on the size. Petrushka
gpuccio,
"I have just posted the last in my long series about modelling RV + NS."
I've been following your comments on that thread as I'm able, and they're appreciated. I think I'll review from #34 through your latest additions throughout the day.
I find your comments very reasonable and correct. I don't understand why champignon says they make no sense. Maybe he means they don't answer his concerns about necessity mechanisms (which is not the same as "not making sense"). Anyway, I have answered that point in my post 23.1.2.2.11.
I've found it not uncommon here that when trying to deal with the probabilistic profile of a sequence in a search space, the objection is raised that we haven't taken into account what evolution can do. However, I never see any concrete metrics which allow us to view a single functional sequence as having anything substantially more than a 1/n(S) probability of being found, by any necessity mechanism. So when it's pointed out that a functional sequence of length 120 exists in a search space exceeding 10^150 sequences (a worked estimate of these numbers follows after this comment), instead of acknowledging the problem of probabilistic resources, and discussing which concrete physical mechanism may account for the discrepancy, objections are raised that the powers of evolution have not been considered. But it's those very powers that are in question here, and it's those very powers that we're asking to see demonstrated empirically. A number exceeding 10^150 cannot be traversed by any known blind mechanism. It's not just that doing so is difficult, it is impossible; the number exceeds the number of physical trials available in the history of the universe. Any random mechanism of variation will not be able to search the space, whether or not NS can select for it. NS may be able to reflect a reproductive advantage, but the function that confers it must first be found and actuated within the organism. Never mind that there exists no apparent ontological link between environmental feedback and discrete function; we are lacking the probabilistic resources to conduct the proper number of trials to discover the functions in the first place. So either a search (not a selection) mechanism exists which can find function, or necessity dictates that functional sequence generation (not selection) is inevitable. (Perhaps those are the same thing.) I have to consider that this problem is known, at least to some interlocutors, because the fallback position seems to be, "you need to prove that evolution can't do it." It is suggested that unless we prove evolution of de novo complex specified functionality by incremental selection is impossible, we must consider it likely, or even certain. So we make an inference to design, because a function in a huge sequence space can only be found if it's known to exist, or that it likely exists, or that it possibly exists. Intelligence is the only empirical entity which is known to cull vast sequence spaces in order to effectively reduce search times. And we're told again that the inference is invalid, because we haven't considered the possibility that an undiscovered mechanism can account for it.
Thanks again for your contributions.
You are very gracious. Thank you for patiently and diligently providing the information in the first place. Your posts here elevate the debate, not only by focusing on key issues and cutting through the confusion, but by doing so in an accessible manner, which elevates the knowledge of thoughtful onlookers. material.infantacy
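A worked version of the numbers in the comment above, assuming a 120-residue protein over the 20-letter amino-acid alphabet with every position treated as fully constrained (an upper-bound assumption, not a measured value):

\[
20^{120} = 10^{120 \log_{10} 20} \approx 10^{156}, \qquad 120 \log_2 20 \approx 519 \ \text{bits},
\]

which is why a length-120 sequence space is quoted as exceeding 10^150, or equivalently as amounting to roughly five hundred bits of configuration space.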
So how do you measure function when it is multidimensional, as when a change diminishes something basic like binding, but improves survival for other reasons? The literature on evolution is full of such cases. Petrushka
lyo:
Yes, that is what the enzyme does. That is its behavior. But how do you get from behavior to function without assuming your conclusion?
Its behaviour is a function. Joe
The same as the function of a book that no one reads. Am I winning? I can't tell. :) ScottAndrews2
Although to be fair to gpuccio, he has recently distinguished between what he calls "local function" (which you, and I, would call "behaviour") and function in the phenotype, in terms of reproductive success. But as these conversations get all over the place, it's not in this thread! And I've gotta run, so I can't give the link. Elizabeth Liddle
In that case, what is the function of a thermometer in a landfill? Elizabeth Liddle
Tornadoes are way easier to make by accident. I hope that's not too simple-minded of me. ScottAndrews2
As I have said many times, the ability of an enzyme to accelerate a specific reaction in the lab of at least n times is an objective thing.
Yes, that is what the enzyme does. That is its behavior. But how do you get from behavior to function without assuming your conclusion? Sometimes the behavior of warm moist air is to rotate at high velocity. When this happens, we call that a tornado. Does that mean that the function of the warm moist air is to rotate fast? Before you scoff at the analogy between a tornado and a living entity, tell me what the difference is between the enzyme in your example and the warm air in mine, without assuming your conclusion that life is the result of intelligent design. lastyearon
Perhaps I've just gone and done it again. By contrasting unintended function with maybe-or-maybe-not deliberate function I'm inadvertently referring to design, not function. I think I see what you're saying now. ScottAndrews2
gpuccio, That's what I was saying - just not very well. I was saying that it's reasonable to see a tree across a stream and conclude that it did not have a deliberate function. By contrast, if someone sees function in biology they shouldn't let someone convince them that they're imagining it. ScottAndrews2
ScottAndrews2: I obviously agree with what you say, but I would like to add that a tree falling across a stream is an object for which we can correctly define the function of being a bridge, and compute the relative CSI (not dFSCI, because the tree is not a digital string). It will be very low, because any big object of a certain gross form can be a bridge across a stream. So, the function is not complex at all. But the point is, just defining a function does not imply design. A simple function can emerge randomly. And it does, like in the case of the tree across the stream. It's only functional complexity that allows us to infer design. Function and design are definitely not the same thing. gpuccio
lastyearon: I answer here both 26.2 and 26.3. You are wrong. dFSCI is defined so that any function can be defined for an object, provided that it can be defined objectively and objectively measured. Each measurement of dFSCI will be relative to the defined function. You have misunderstood the meaning of "defined function". It does not mean that the function I define is in some way "special" for that object. For instance, if I define three different functions for an object, each of them has the same "dignity", so to speak, if all have been objectively defined. But each definition will give a different value of dFSCI, because, as I have said many times, dFSCI is the complexity of the function, not of the object. So, "defined function" just means the function according to which this value of dFSCI has been computed. The function is objective, even if it is a conscious observer that recognizes and defines it. It is objectively there. It can be measured. As I have said many times, the ability of an enzyme to accelerate a specific reaction in the lab by at least n times is an objective thing. Either it is there, or not. So, I am assuming nothing, and there is nothing circular. To recognize a function we need know nothing of the designer or of the process of design. As said, we see an enzyme and we observe what it does in the cell. We repeat that in the lab. We measure it. We even understand, in many cases, how it does it, what specific structure allows the biochemical reaction to be incredibly accelerated. To understand that the steering wheel allows us to change the direction of a car, we need know nothing of the designer. We just need to try to drive the car. So, you are completely wrong. gpuccio
LYO,
The only way to objectively define a function is if you can match it up with an intended designer who has a specific purpose in mind for the function.
So design is an illusion, and now function is also an illusion? That might work for a tree that falls across a stream forming a bridge. But now you're proposing the absurdity that the behavior of the cells, their components, and the organisms they form only coincidentally appear functional, but are no more so than molecules of water or pieces of rock. Reasoning people should not take this seriously. As a rational thinker, I must conclude that they are functional, even if I have no idea who conceived them or why, or even if their function is only to serve as some bizarre work of art. I can't see any reason to draw such a bizarre conclusion unless eliminating the possibility of design is the primary goal, rather than the result. In fact, it's transparent. I hope that anyone who looks at any of these systems and sees function is not induced to second-guess that well-founded intuition on the flimsy basis that they haven't seen the function documented or explained by its designer. By our power of reason we are the originators and the masters of science. How regrettable it would be if our power of reason became subservient to it. (Sorry, that was a soapbox speech.) ScottAndrews2
The mistake that ID makes is that by analogizing biology with human artifacts it incorrectly applies properties of the latter onto the former. We know that the function of the parts of machines is to enable the successful operation of the machine, because humans have built those machines to fulfill a specific purpose. For example, we can say that the function of the steering wheel is to enable a human driver to turn the vehicle, because human designers purposefully made that part to do that. We can't say that about any thing other than human designed objects. The only way to objectively define a function is if you can match it up with an intended designer who has a specific purpose in mind for the function. Name a biological system whose function is more than just "to enable the entity that contains it to do what it does". lastyearon
So, what is it [dFSCI] measuring? It is easy: the complexity in bits necessary to implement the defined function. Whatever that function is.
But how do you define a "function" in such a way that doesn't render your argument circular? A thing or a system may have many functions. Function is in the eye of the beholder. It's completely subjective. By assuming in your premise that the function you observe for an object or system is the "defined function", you are assuming the thing you are allegedly trying to test for. lastyearon
For all interested: I have just posted the last in my long series about modelling RV + NS. The thread is always the same: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/comment-page-2/#comment-413684 (posts 34 and following) Any comment is welcome. gpuccio
Petrushka: On another thread, I have given the link to the Wikipedia page about probability distributions, 18000 characters long, as an example of text that certainly has more than 1000 bits of functional complexity, according to my demonstration, and therefore allows a safe design inference. How do you believe that text was written? By evolution? gpuccio
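A quick order-of-magnitude check of the 1000-bit claim; the one-bit-per-character figure is a standard rough estimate of the entropy of English text, used here only as a deliberately conservative assumption:

\[
18{,}000 \times 7 = 126{,}000 \ \text{bits (raw ASCII capacity)}, \qquad
18{,}000 \times 1 \approx 18{,}000 \ \text{bits (discounted for English redundancy)},
\]

both far above the 1000-bit threshold used in this thread.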
Petrushka,
I would suggest your analogy is not worth much. The ID argument is based on the claim that coding sequences are not connected, and are therefore not evolvable.
The context is the generation of functional information. So it's not an analogy. It's a straightforward example. The ID argument is not based on the claim that coding sequences are not connected. It is based on the totality of available evidence, which does not include the connectedness of coding sequences. That would be influenced by adding actual evidence, not by supplementing the lack of it with the presupposition that coding sequences are connected and evolvable.
What’s missing from ID is a demonstration that impossibly long sequences have a syntax that would allow generation by a finite designer, without using evolution.
If you're referring specifically to proteins, then your argument is inexplicable. "It's too complicated to have evolved" is routinely chastised as a simplistic, ignorant argument, even though no demonstration of any kind to the contrary is provided. (It doesn't get enough credit. Based on the evidence it's a rational argument, and discrediting it is what sends us down this rabbit hole.) But it's okay to argue that something is too complicated to have been designed? (Please don't say that it can be designed but only by evolution. A process that does not allow you to target what you wish to design cannot be called design.) And in this case the argument can only be refuted by a specific example? We'll just ignore that the capabilities of intelligent agents to formulate complex designs has increased throughout human history and is accelerating rather than slowing? Apparently this is only a simplistic argument from ignorance when applied against darwinian evolution, but suddenly becomes enlightened logic when applied in favor of it. If you place the two arguments side by side - too complicated for evolution vs. too complicated for design - it's easy to tell that the latter is a willful argument from ignorance and the expectation of ignorance. Behind every such double standard is an arbitrary preference masquerading as science and reason. It's like a judge who sentences people of one race to prison and gives the rest probation for the same offense. In each case he can argue that he's following some legal precedent, but it quickly becomes apparent that he just likes some people more or less than others. That is exactly the sort of double standard you are attempting to pass off. ScottAndrews2
I would suggest your analogy is not worth much. The ID argument is based on the claim that coding sequences are not connected, and are therefore not evolvable. There's quite a bit of evidence to the contrary, but in absolute terms it's still being investigated. What's missing from ID is a demonstration that impossibly long sequences have a syntax that would allow generation by a finite designer, without using evolution. Petrushka
Petrushka,
Design without some form of evolution would require omniscience.
I conceived of this post and typed it without any iterative process of trial and error or variation and selection. In fact, this is my very first draft. Am I omniscient? According to you I must be. I think what you mean to say is that it would require intelligence. ScottAndrews2
23.1.2.2.10 This is not fallacious. This is a purely scientific deduction in its own right. What you need is a demonstrable counter-example to falsify it. In the absence of observations suggesting otherwise, we say that empirically the best explanation of such and such things is design. What's wrong with that? You can say it is an argument of "the gaps" if you wish, but this argument is scientific routine. Note that explanation quality is judged in the sense of Occam and Bayes: we prefer parsimonious explanations while acknowledging that every new observation we make varies the weights of our initial hypotheses. Care should be taken in order to come up with a complete set of hypotheses. In other words, it is scientifically illegitimate to exclude design as a possible cause, not the other way around. So far, no observations at all are available that would suggest the plausibility of spontaneous emergence of cybernetic control. Eugene S
material_infantacy: I find your comments very reasonable and correct. I don't understand why champignon says they make no sense. Maybe he means they don't answer his concerns about necessity mechanisms (that is not the same as "not making sense"). Anyway, I have answered that point in my post 23.1.2.2.11. Thanks again for your contributions. gpuccio
Petrushka: The "simple linear equeztion" you speak of is the regression. The regression explains much of the variance (about 80% of it). That means that about 80% of the variance in dFSCI depends on sequence length. But there is a residual 20% variance that dependes on some other thing: the most reasonable hypothesis is that it depends on the specific structure function relationship in that protein family. So, Durston's data are very useful, not only to support the ID theory, but also as a tool to investigate the strucutre function relationship for different protein functions, and in general the protein functional space. gpuccio
champignon (post 23.1.2.2.10): Let's complete and refine the statements: "1. When we look at known designed and undesigned objects, only the designed objects have high dFSCI. 2. Many biological objects have high dFSCI. 3. No known necessity mechanism can explain their origin, even if coupled to random variation. 4. Therefore, we infer design as the best explanation for them." Yes, that's the argument in a nutshell. It is an empirical inference by analogy. Now, would you please explain why "it's obviously a fallacious argument"? I am curious to understand how I have missed such an obvious conclusion for years. gpuccio
champignon: If you go back to Dembski's explanatory filter, you will remember that two conditions are necessary to infer design: a) The observed object must exhibit CSI b) A necessity explanation must reasonably be ruled out My reasoning is the same. dFSCI must be exhibited to infer design. And any necessity explanation, if known, must be taken into account. I have said clearly, in response to you (my post 23.1.2.1.1): "dFSCI is used in my analysis only to evaluate the possibility (or empirical impossibility) that a certain functional result may have emerged in a random way. The necessity part of the neo-darwinian algorithm, NS, is always present in my discussions, but it is evaluated separately. You can look, if you want, at my posts here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/comment-page-2/#comment-413684 (posts 34 and following) I will add a last post about the relationship between positive NS and the probabilistic modelling, as soon as you guys leave me the time! " So, why do you state, in your post 23.1.2.2.1: What? I thought dFSCI was supposed to be a reliable, no-false-positives indicator of design! Yes, it is, provided we apply it only to random transitions, and analyze separately the known necessity mechanisms. So, what is the point? I cannot doubt your intelligence. Are you just distracted when you read my posts? I really invite you to go to the linked thread, if you have the patience to read the detailed material I have posted there. I hope to add soon the final post, which deals explicitly with positive NS and its modelling. gpuccio
Kairos, I think JDNA is asking a more fundamental question. I was hoping that you (GP, MI, etc) might entertain it. Upright BiPed
In fact, looking over some of gpuccio's prior comments, it appears that his argument boils down to this: 1. When we look at known designed and undesigned objects, only the designed objects have high dFSCI. 2. Some biological objects have high dFSCI. 3. Therefore, they are designed. It's obviously a fallacious argument, but that really appears to be what he is saying. champignon
If high dFSCI merely means "unlikely to have come about by pure chance", then it tells us nothing about the probability that the sequence in question evolved. Thus there is the danger of a false positive -- a sequence that has high dFSCI but is not designed. For his argument to be successful, gpuccio needs to show that sequences with high dFSCI cannot evolve. He hasn't done so yet, and dFSCI as it is defined could never do so, because it doesn't take the nature of evolution into account. It is formulated based on the assumption of blind search, which is not at all how evolution works. champignon
Champignon, I thought it was your comment that didn't make sense. gpuccio says,
dFSCI is used in my analysis only to evaluate the possibility (or empirical impossibility) that a certain functional result may have emerged in a random way.
That is to say, he can evaluate the possibility that a specific functional sequence could have come about randomly. Your reply:
What? I thought dFSCI was supposed to be a reliable, no-false-positives indicator of design!
Which does not follow, because the "possibility of random emergence" precludes a false positive. It can only be a false negative or a true negative -- these comprise a set that is the proper complement of a positive result. P(F') = 1-P(F). There are no false positives in F, and there can be none in F' by definition. If I misunderstood your remark, feel free to clarify. I'll confess to not understanding where you're getting the notion of false positives. material.infantacy
I'm waiting for an example of how a designer would get to an island of function, assuming there's no incremental path. Assuming the designer isn't God. I think it's pretty easy to see why Behe thinks the designer is God. Design without some form of evolution would require omniscience. Petrushka
I was just observing that the data plot supplied by gpuccio could be duplicated by a simple linear equation. Petrushka
GP: Of course, the result is comparable to: Chi_xxx = Ip*S - xxx, where xxx is a complexity threshold. KF kairosfocus
material infantacy, Your comment doesn't make sense. Have you been following the discussion closely? champignon
F/N: Please re-read the original post, e.g. it addresses whether S is question-begging etc, on the way the explanatory filter works. KF kairosfocus
P: Nope, the issue is not just length of sequence in digital units (thus exponentially growing scope of the space of possibilities W) but also being confined to a narrow, specific zone that is specifically constrained for the relevant function to be present. And, on "how evo works," the problem with evo as CV + NS --> DWM is that it starts at the point where we already have a function, so that we have differential reproductive success of populations. That is, it starts WITHIN an island of function. The design theory challenge is to get TO the shores of such islands of function, de novo. As has been pointed out over and over and over and over again. Adaptation and optimisation within an island of function is not the problem; the problem is to get to where you can begin that, without intelligent direction to put together a first working model of a functional, composite, complex entity. To make this concrete, try explaining, for the initial cell, how we end up with a von Neumann self replicator, one that uses stored code and is joined to a metabolic automaton, all using C-chemistry informational macromolecules. Then, explain how we get something like the avian lung. All, on observationally based evidence that shows what happened in the real world. There are of course billions of cases in point on how FSCO/I, and especially dFSCI, have originated by intelligent action. GEM of TKI kairosfocus
JDFL: Actually, sadly, after much back and forth, I am pretty well sure the "confusion" is intentional, on the part of those who have stirred it. At the very least by willful refusal to do duties of care before commenting adversely. If you compare the explanatory filter, per aspect version, you will easily enough see what is being got at. The DEFAULT ASSUMPTION is that something can be accounted for on chance and/or necessity, i.e. is a natural outcome not an artificial one, and the Grand Canyon is easily explained on this. When this objection talking point was first circulated, the example was Mt Pinatubo, and the answer was the very same: S = 0, default, and there is no warrant to move it to 1. A specific warrant -- i.e. case by case -- has to be provided for switching S to 1. Just as, in court, the default in anglophone jurisprudence on a criminal law case is not guilty, and a specific warrant beyond reasonable doubt has to be provided to shift that to guilty. That warrant cannot be generically given in advance, but is to be determined on the circumstances to a level that a pool of typical, reasonable and unbiased individuals of ordinary common sense will conclude that this is so beyond reasonable doubt. Just so, there is a jury mechanism in scientific work, that works pretty well when there is not an ideological bias involved, peer review by members of the circle of the specifically knowledgeable. So, it is not as though there is no framework in which the credibility of assigning S = 1 can be had. And, again, the promoters of objecting talking points KNOW this, or SHOULD know this. That is why I have now lost patience with them, and have concluded that something is rotten in the state of Denmark. In particular, we have the direct test that has been put on the table for the past 5 - 6 years here at UD: drop a controlled noise bomb into the informational item, i.e. a moderate and adjustable amount of random variation. If such modest amounts cause function to vanish, we have defined how wide the zone of functional states is. Take an ASCII text string. Underneath the bonnet, this is a string of 1's and 0's. So inject a bit of random walk on it, at a given ratio, say 1%. That is, on a random basis 1 in every 100 bits is flipped. Restore to ASCII, and see what happens [we for the moment, for simplicity, ignore the effects of the parity check bits]. Repeat. (A sketch of this bit-flipping exercise is appended just after this comment.) The typical English word is about 6 - 7 letters, and random changes that affect words can be seen for effect. We probably can still make out a text string with errors in 1 letter in 7, or about 1 bit in 49. Going up beyond that progressively produces gibberish. Especially if we have a random walk where we have repeated exposure. The reason we can do that is that we are actually very sophisticated information processors, and can exploit all sorts of redundancies and background knowledge. Now, let us have similar text that is source code for an application. That will be a LOT more sensitive to such random changes, as a rule, i.e. computers will do what you tell them to do, not what you intended to tell them to do. Switch to the object code for a program, which is a string of 1's and 0's. That will as a rule be very vulnerable to noise bombing. Now, go to the nodes-arcs type structural model. Noise bomb it -- this is a random walk. There will be some tolerance, depending on where you hit, but usually not much, especially at the wiring-together level. 
I once asked guest contributor EP about the effect of doing that to the control setup for a robotics workcell. He was aghast at the likely outcome, for good reason; especially given the power levels and degree of careful co-ordination at work. Similarly, I think you know that for many vehicles, getting a random change to the timing belt will trigger serious engine damage, in some cases writing the engine off. And so forth. Pretty soon, we have in hand a pool of credible cases of such complex functional specificity, and we have in hand enough to see that certain types of phenomena we observe are credibly FSCI. The text of posts on the Internet or in books etc are the first example we have cited over and over. The code for programs and similar prescriptive "wiring diagram" constrained functional information is a second. This implies the third case, functionally coordinated composite objects that have to be put together in fairly specific ways to work, e.g. a computer or cell phone motherboard. The fault tolerance in such things is very low. An electric motor is a fourth, as is something like a car engine or other irreducibly complex functional object. (Irreducible complexity of core function is a commonplace in our world, just think about how specific and necessary car parts are for something like an engine.) Most relevantly, the von Neumann kinematic self replicator is like that, and such is in fact the heart of how a living cell self-replicates. What the objectors will stoutly resist is acknowledging that DNA code and folding, functional proteins and protein machines are like that too. That is why they hate the term islands of function, even though these are abundantly obvious. But, that is exactly what Durston et al documented: across the domain of life, it is common for proteins to be quite constrained, so that the sequence for certain proteins will not vary all that much, hence the high functional bit information content they reported in the peer reviewed literature. Similarly, Behe's observation that the observed form of evolution typically pivots on slightly varying existing functional forms, and often on breaking something that, once broken, confers some advantage in a particularly stressful environment, is another example. As a Caribbean person, malaria and sickle cell anaemia come to mind as a sad classic in point; I have lost at least one treasured friend to that, while we were in college together. Similarly, observe the way that there has been no good answer from that side to the origin of the bird lung with its one-way flow. The origin of flight of birds and bats is a similar case. And there are others. Consistently, we have just-so stories, without detailed, specific observationally anchored warrant for the claim that such systems originated by chance variation and natural selection in real world environments. In short, my conclusion is, the objection is fallacious. But, if fallacies were not persuasive to some and confusing to others, they would not survive. That is why I favour something more direct, like the case of random text generation, which is directly amenable to empirical observation, and we can document the result on infinite monkeys tests through a source known to be speaking against ideological interest on the point. 
Spaces of 10^50 possibilities have been successfully searched [24-character-length functional strings], but those of 10^150 possibilities [72-character length] are a very different story, at 128 times the number of possibilities per additional letter. So, next time the objectors come by, just stand your ground and ask them, can you kindly show us a case where something that is functionally specific (as can be shown directly or indirectly) can be shown by observation to have come from blind chance and mechanical necessity without intelligent intervention? (Clue: if there were such credible cases, they would be all over the internet, and the design theory movement would have long since collapsed. Clue 2: when you keep seeing claimed cases that are fallacious -- the latest was the claimed origin of a functional clock that turned out to be typical of GA's, i.e. it started well within an island of function, and moved to implicit targets, all under direction of a designer, who in this case showed off his IDE with code (not realising what he was telling us) -- that too is telling us something.) I trust this helps. GEM of TKI kairosfocus
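As a minimal sketch of the noise-bombing test described above (the sample sentence, the 7-bit ASCII treatment and the 1% to 10% flip rates are illustrative choices, not anything prescribed in the argument):

```python
import random

def noise_bomb(text, flip_rate, seed=0):
    # Flip roughly flip_rate of the bits of a 7-bit ASCII string and return the damaged text.
    random.seed(seed)
    bits = []
    for ch in text:
        bits.extend((ord(ch) >> i) & 1 for i in range(7))   # 7 bits per character, LSB first
    flipped = [b ^ 1 if random.random() < flip_rate else b for b in bits]
    out = []
    for i in range(0, len(flipped), 7):
        out.append(chr(sum(bit << j for j, bit in enumerate(flipped[i:i + 7]))))
    return "".join(out)

sample = "Functionally specific complex information is fragile under random bit flips."
for rate in (0.01, 0.02, 0.05, 0.10):
    print(rate, repr(noise_bomb(sample, rate)))
```

At a 1% bit-flip rate the sentence should still be largely readable; by 10% roughly half the characters are hit and the result is mostly gibberish, which is the narrow-zone point being made.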
dgw: Dawkins, if I remember correctly, searched for random letters, and when the correct letter was found in the correct position, it was retained. I don't think that is correct. As I understand it, the Weasel algorithm generates iterations of populations of individual sentences, each consisting of random letter variants along the string representing "methinks it is like a weasel". Every individual of each population is then evaluated against the target string. Those letters that are closer to the target letter (in the ASCII sequence) at each position score higher, and the individual with the highest score goes on to become the replicator for the next generation. However, an exact letter match does not guarantee success. An individual with several close letter matches can "out-compete" an individual with a single perfect match. I did go through the exercise of coding this and you can actually see that the best-fit sentence will sometimes drift away from the target sentence throughout the sequence of populations. NormO
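A rough sketch of the scheme NormO describes (a population of mutated copies of a single parent, proximity scoring on letter codes, best copy becomes the next parent); the population size and mutation rate are arbitrary choices, not Dawkins' published parameters:

```python
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def score(candidate):
    # Proximity scoring: smaller total distance between letter codes means a higher score.
    return -sum(abs(ord(a) - ord(b)) for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Each position has a small chance of being replaced by a random letter.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in parent)

def weasel(pop_size=100, generations=1000):
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for gen in range(generations):
        population = [mutate(parent) for _ in range(pop_size)]
        parent = max(population, key=score)   # best-scoring copy replicates
        if parent == TARGET:
            return gen, parent
    return generations, parent

print(weasel())
```

Because selection acts on overall proximity rather than on exact letter matches, the best copy can indeed drift away from the target at individual positions from one generation to the next, as NormO observed.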
@gpuccio#12,
The functional complexity is given as 336 functional bits. That means that, according to Durston's method, which has compared here 1785 different sequences with the same function, the functional space is 697 bits. Therefore, the functional complexity is -log2 of 2^697 / 2^1033, that is 336 functional bits. That is quantitative, I would say. What is your problem? Which "inputs" are you discussing? Please, explain.
The 336 bits is perfectly quantitative, and uncontroversially so. It's just irrelevant to function. I think from reading down a little bit, and your invoking Durston, I understand the language mangling that may be going on here. The reason (OK, one reason of several) 336 bits is perfectly quantitative, but perfectly irrelevant to anything you'd use dFSCI for is that it's just a probability sampling, that's all. It's T target configurations out of a phase space P. That's all well and good as basic division goes, but it's got naught to do with describing, measuring or capturing function. This is the whole reason I presented a random string (remember the 32 char password example I gave) as maximally "functionally complex" on a byte for byte basis. You granted the "functional" designation, and my 32 char randomly generated string is, by definition, maximally complex. Bingo, maximum dFSCI just by generating a random string, calling it "functional", and understanding information theory. But without explicitly rejecting this example (the function was... 'overly wide'), you resisted that example because the functional aspect didn't contribute to "functional complexity". Here, now, it's plain that that is just an arbitrary dismissal on your part. The "functional", the "F" in your dFSCI, means NOTHING, perfectly nothing mathematically in your formula, it is now clear. dFSCI is nothing more than a base-2 logarithm of a probability (T/P). There's nothing even tangentially related to function in dFSCI. To test this, let's accept that your "ring-breaking enzyme" (RBE) factory AA is 336 bits. So: dFSCI-RBE == 336. Now, let's switch the sequence out with a new AA sequence of the same length, which we randomly select from somewhere, anywhere, we care not where, and understand that it is provisioned with NO KNOWN FUNCTION (NKF). Now we do the same math, and voilà! we get precisely the same result: dFSCI-NKF == 336. The ONLY DIFFERENCE is that I switched out the data bits you are operating on. But as you have it, THE ACTUAL BITS DON'T MATTER. You don't even reference any of the actual bits in the sequence when you calculate dFSCI. That means that dFSCI is mathematically COMPLETELY independent of any function that is related to the content you are evaluating. You don't care what the amino acid sequence actually is for RBE -- it doesn't matter. You could reverse every other codon, completely changing its production characteristics (function), and your metric would neither know nor care, mathematically. Elizabeth triggered the light bulb moment with this comment from her, above -- she's smarter than I am in reverse engineering dFSCI:
So would I be right in saying then, that to calculate the dFSCI of a gene you first decide: is it functional? If it is, it scores Function=1, if it isn't it scores Function=0.
That's frankly outrageous -- dFSCI hardly even rises to the level of 'prank' if this is the essence of dFSCI. I feel like asking for all the time back I wasted in trying to figure your posts out with the expectation that there was an earnest attempt to at least FAIL at or FAKE some engagement with the functional data, AS functional data, rather than simply slapping the arbitrary label "functional" on some string and doing simple division of pulled-out-of-the-ether phase space probabilities. You got me there, gpuccio, I admit. This isn't even a toy, and you had me thinking there was something, something at least wrong or confused there, that took some analysis of data and algorithm as 'functional'. I should have known from the way you responded to the 32 char random password. I'm a chump. Fool me once, I guess. You are right, your calculation is perfectly quantitative, but it's also perfectly vacuous in terms of 'function'. I was looking, over and over, for how you might be working in the functional part, but I just missed what you said, admitting it openly as it turned out -- you don't even consider function mathematically. It's just a probability score you assign to things you (and I) deem 'functional', but on grounds TOTALLY UNRELATED to the actual data set you are purportedly analyzing. To sum up, if I say that the AA I offered above ("NKF") is now something I declare to be functional, dFSCI-NKF now instantly really IS 336 bits, and that AA sequence is as dFSCI-rich as dFSCI-RBE. Everything turns on just my waving my hand and saying "this is functional data". The data itself doesn't even enter into the analysis! Which does not mean that I disagree that this AA sequence or that may be functional. I'm sure there are plenty of places we agree. But it's an absolutely debilitating flaw -- a joke of a construct -- to say this is the basis of your metric. If we disagree on what is functional, then what? Moreover, ANY string can be declared functional, at any time, and functions can be marshaled up for any string on demand (that was the point of my random 32 char password example, which was right on the money, after all, I see now). Given that, dFSCI cannot go anywhere at all, not possibly. It's less than useless as a tool for knowledge building and investigation. eigenstate
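For concreteness, eigenstate's 32-character password example works out roughly as follows; the 94-character printable-ASCII alphabet and the single-sequence target space are assumptions made for the sake of the sketch:

```python
from math import log2

# A random 32-character password drawn from the 94 printable ASCII characters,
# declared "functional" with a target space of exactly one sequence.
search_space_bits = 32 * log2(94)   # ~209.8 bits of search space
target_space_bits = log2(1)         # 0 bits: a single sequence in the target
print(search_space_bits - target_space_bits)   # ~209.8 "functional" bits, content never inspected
```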
dFSCI is used in my analysis only to evaluate the possibility (or empirical impossibility) that a certain functional result may have emerged in a random way.
What? I thought dFSCI was supposed to be a reliable, no-false-positives indicator of design!
The set F is comprised of all functional sequences of sufficient length (no false positives here). The set F' is the complement of F in the sequence space S (false negatives may be here with the true negatives). F ∪ F' = S, and F ∩ F' = {} F' are all sequences that may have come about in a random way. The false negatives are in this set, hence elements in the set "may have come about in a random way." IOW, P(F') = 1 - P(F). If a sequence "may have emerged in a random way" then it is "not definitively designed." material.infantacy
Wouldn't it be simpler to say dFSCI = sequence length times 2.5? Petrushka
dFSCI is used in my analysis only to evaluate the possibility (or empirical impossibility) that a certain functional result may have emerged in a random way.
What? I thought dFSCI was supposed to be a reliable, no-false-positives indicator of design! champignon
champignon: 1) My idea is indeed that the method probably overestimates the target space. But you are right, we cannot be absolutely sure of the precision. But we do not need precision here, just a reasonable approximation of the order of magnitude. I am not saying that the Durston method is our final procedure: it is, at present, the only simple procedure we have; it is credible, reasonable, and it certainly measures what it says it measures. As always happens in empirical science, its precision has to be verified by independent methods. As I have suggested, that can and will be done by means of deeper understanding of the sequence-function relationship in proteins and of the topology of protein functional space. You see, the general attitude of darwinists regarding the problem of functional complexity is to deny it exists, and to denigrate all the serious attempts at solving it scientifically. But that is not a scientific attitude at all. The problem exists, is very important, and indeed it is crucial to the neo-darwinian theory. And Durston has greatly contributed to the solution. 2) Good points, but out of context. You have obviously not followed my general reasoning. dFSCI is used in my analysis only to evaluate the possibility (or empirical impossibility) that a certain functional result may have emerged in a random way. The necessity part of the neo-darwinian algorithm, NS, is always present in my discussions, but it is evaluated separately. You can look, if you want, at my posts here: https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/comment-page-2/#comment-413684 (posts 34 and following) I will add a last post about the relationship between positive NS and the probabilistic modelling, as soon as you guys leave me the time! :) So, in no way am I assuming that "evolution works by blind search". I understand that the neo-darwinian algorithm works by RV + NS, and I deal with both aspects. But dFSCI is useful only to evaluate the RV part. And it is used by me only for that. Finally, I am not assuming that evolution works for a specific function defined in advance. My reasoning is very different: a) We know that specific functional information emerged at definite times in natural history (in particular, new basic protein domains). b) We can compute dFSCI for each new domain, expressing the probability that that specific function may have emerged in a random way. c) That is not the end of the story. Darwinists will object (and they do!) that other useful functions could have emerged. That is correct. But I have a couple of arguments about that: c1) First of all, the only functions that are interesting for our discussion are new, naturally selectable protein domains. That's what we are trying to explain: the emergence of new protein domains. c2) In a specific biological environment, the existing complexity creates huge constraints on what protein domains are "naturally selectable", that is, can by themselves give a reproductive advantage, and therefore be fixed and expand. IC adds to those constraints. The need for appropriate regulatory integration adds further difficulties. c3) Therefore, it can be argued (and I definitely argue) that in each specific scenario, only a few new protein domains and biochemical functions would be naturally selectable. c4) However, whatever their number, the sizes of their respective target spaces need only be summed with the sizes of all other selectable possible new domains. 
Therefore, unless the number of possible, selectable new domains is really huge (and there is absolutely no evidence of that), the "help" deriving from considering all possible selectable new domains, instead of one, will be really small. Please, consider that in the proteome we have "only" 2000 protein superfamilies. Now, let's say that in a specific moment of natural history, in a specific species, a new protein domain appears. Let's say its functional complexity is 500 bits, because the search space is 700 bits and the target space is 200 bits. Now, let's hypothesize, being really too generous, that in that scenario 1000 other new basic domains, all of them naturally selectable in that context, and with similar functional complexity, could have emerged. Then the whole probability of the target space will be 2^200 * 1000, that is, about 210 bits. The functional complexity, evaluated for the whole functional space of all 1000 possible domains, will still be about 490 bits. So, as you can see, only the existence of huge numbers of possible new functional domains would be of help. But that assumption is against all we know, and in particular: a) The rarity of folding and functional sequences in the search space b) The fact that only 2000 protein superfamilies have been found in billions of years of evolution c) The fact that the emergence of new functional domains has become ever more rare with the advancement of evolution, and in more recent times. These are all good empirical indications that basic protein domains are isolated islands in the search space, and that their number is not very big. gpuccio
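The arithmetic of the example above, as a sketch (the 700-bit search space, 200-bit target space and the 1000-domain count are gpuccio's illustrative assumptions):

```python
from math import log2

search_bits = 700    # search space of 2^700 sequences
target_bits = 200    # target space of 2^200 sequences for one domain
n_domains = 1000     # generous guess at the number of selectable new domains

one_domain = search_bits - target_bits                        # 500 bits
all_domains = search_bits - (target_bits + log2(n_domains))   # ~490 bits
print(one_domain, round(all_domains, 1))
```

A thousandfold enlargement of the target space buys only log2(1000), about 10 bits of the original 500.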
By the way, following Peter's advice, I have uploaded my scatterplot of Durston's data about sequence length and functional complexity at imageshack. Here is the link: http://img17.imageshack.us/img17/5649/durston.jpg gpuccio
The Durston method, instead, is a simple and powerful method to approximate the target space of specific protein families, and it is based on the comparison of a great number of different sequences in the proteome that implement the same function in different species, and a brilliant application to that of the principle of Shannon's uncertainty.
gpuccio, A couple of comments: 1. If you use Durston's method as a basis for your dFSCI calculation and you want to guarantee no false positives, you have to show among other things that Durston's method doesn't underestimate the size of the target space. How can you demonstrate this? 2. By defining dFSCI in terms of the ratio of the sizes of the target space and the search space, you are in effect assuming that evolution 1) works by blind search 2) for a specific function defined in advance. Neither is true. champignon
eigenstate: Your post # 14. I will be brief (I hope). The wrong thing in the lottery is the statement (shared by you) that "someone has to win the lottery". That is true only if all the lottery tickets have been printed and bought, and one of their numbers is extracted. That has nothing to do with the situation in biological information. I quote again from my initial comment, because it seems you have not read, or understood, it:
"The example of the lottery is simply stupid (please, don’t take offense). The reason is the following: in a lottery, a certain number of tickets is printed, let’s say 10000, and one of them is extracted. The “probability” of a ticket winning the lottery is 1: it is a necessity relationship. But a protein of, say, 120 AAs, has a search space of 20^120 sequences. To think that all the “tickets” have been printed would be the same as saying that those 20^120 sequences have been really generated, and one of them is selected (wins the lottery). But, as that number is by far greater than the number of atoms in the universe (and of many other things), that means that we have a scenario where 10000 tickets are printed, each with a rnadom numbet between 1 and 20^120, and one random number between 1 and 20^120 is extracted. How probable is then that someone “wins the lottery”?” It seems clear, isn’t it? The lottery example is wrong and stupid."
But, if you want to go on defending it, please be my guest! :) Let's go to more important things. Again, you equivocate on fundamental facts. We observe designed objects, and look for specific formal properties in them. We choose dFSCI and verify that it is not exhibited by known non-designed objects. All your following comments in the post show that you misunderstand that completely. Please, read carefully what I am going to write: a) We know that the objects we observe in the beginning are designed, because we have direct evidence of that. So we choose them to look for some specific formal property. b) We define dFSCI because we believe, because of what we observe, that it can be that property. c) As I have shown before, the computation of dFSCI, and the judgement about its categorical presence (above a certain threshold of complexity) or absence (because no function is observed, or because the functional complexity is too low), is completely independent of knowing that the object is designed. I really can't understand why you insist on such a wrong idea. To compute dFSCI in a digital string, all we need is: c1) Recognize and define a function: I have clearly shown you that that does not imply design, and is done independently of any pre-judgement that the string is designed. Non-designed strings can be functional (but are never functionally complex). So, you are wrong if you think that, because we define a function, we are already assuming design. That is simply not true. c2) A computation of the search space: very easy, as I have shown. c3) A computation of the target space: difficult, but possible. c4) A simple division, and taking -log2 of the result. That gives us the functional complexity. To decide if that functional complexity is high enough to assign the object to the categorical, binary class of objects exhibiting complex functional information, for which a design inference can be made, we need a threshold of functional complexity that must be appropriate for the probabilistic resources of the system we are studying. This is a property we can observe in the object, although, as I have already said, it is more a property of the function than of the object. Let's say that the object exhibits a function that is complex. So, all your other comments in the post are not pertinent, and are wrong. ___________ I hope my b/quote helped. Also, I suggest the issue is that the object exhibits a function that, confined to a "narrow" zone T in a much wider field of possible configs W, is specific and complex. KF gpuccio
I apologize, I failed to close the quote. Peter's words are only the first two paragraphs in the quote, up to "a very simple one that does a lot?" The rest is mine. gpuccio
Elizabeth, eigenstate, and others: As we are in the middle of a hot discussion, I would like to mention here, and answer, an objection made by Peter Griffin about dFSCI on the other thread. I do it here because I believe that it can clarify better what dFSCI really measures (Peter, why don't you join us here? :) ). Peter writes, commenting on my points about the relationship between length and dFSCI:
I do believe you. But that's the problem. I can write a book with minimal meaning and as long as it's sufficiently long it'll apparently be more complex (have a higher value for dFSCI) than a much shorter text with much more inherent meaning. Harry Potter might be a long book but it can't have more dFSCI than the special theory of relativity, or can it? Or to put it another way, is a very very large protein that does almost nothing more complex (higher dFSCI value) than a very simple one that does a lot?
Now, I want to be very clear about that: dFSCI in no way is measuring the conceptual depth or importance of the function. In that sense, Harry Potter can well be more complex than the special theory of relativity (or at least, than some not too long text about it). And yet, it is not measuring only length. So, what is it measuring? It is easy: the complexity in bits necessary to implement the defined function. Whatever that function is. It is my opinion that, in dense text, the relationship between length and dFSCI will be rather linear (with some random variability). The same is not true for proteins. Let's look at Durston's data, and in particular at the last column in Table one, the one titled "FSC Density Fits/aa". It gives the mean density of functional information in the protein families he studied. As you can see, the values go from a minimum of 1.4 for ankyrin to a maximum of 4 for Flu PB2. That is quite a vast range, if we consider that the minimum value for a single amino acid site is 0 (that is, that site can randomly accommodate any of the 20 amino acids, and is in no sense constrained by the functional state), and the maximum value is 4.32 (that is, the same amino acid is always found at that site in the functional set). Now, that does not mean necessarily that a protein with a higher mean value of dFSCI is "more important", or has "more function" than any other. But it does mean that the structure-function relationship, in that protein, is more strict, and does not tolerate random variation well. I hope that is clear, and answers Peter's question. ______ Does that help, KF gpuccio
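A simplified sketch of the per-site idea behind those fits values, assuming the ground state is a uniform distribution over the 20 amino acids (Durston's published method has further refinements):

```python
from math import log2
from collections import Counter

def site_fits(column):
    # Functional bits for one alignment column: log2(20) minus the observed Shannon entropy.
    counts = Counter(column)
    total = sum(counts.values())
    h_functional = -sum((c / total) * log2(c / total) for c in counts.values())
    return log2(20) - h_functional

# A fully conserved site scores the maximum, ~4.32 fits; a site that tolerates
# all 20 amino acids equally scores ~0.
print(round(site_fits("AAAAAAAAAA"), 2))            # 4.32
print(round(site_fits("ACDEFGHIKLMNPQRSTVWY"), 2))  # 0.0
```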
Thanks for looking, GP. I'll have a look at 23.1.2. I've been following along as time allows! material.infantacy
Correction: the limit of -log2(k/20^150) as k approaches 20^150 is zero. The limit of (k/20^150) as k approaches 20^150 is 1, implying a probability of 1 for finding the random function. material.infantacy
KF: Thanks for your contribution. As you know, I am a great fan of "objective subjectivity" :) . gpuccio
material.infantacy: I think what you write is very correct. I have added some other thoughts in my post 23.1.2 gpuccio
Elizabeth: It's strange: I have said these things so many times, even responding to you... However: The search space is easily approximated by taking the length of the sequence and calculating the combinatorial value of the total number of possible sequences of that length. Obviously, shorter or longer sequences may be functional too, but fixing the length to the length of the observed sequence is the best way to simplify the problem. The target space is the big problem. In principle, we could measure the function, and assess its presence or absence, for each possible sequence. That is not a good idea in practice, with non-trivial search spaces, but it means that the target space exists, and has a definite size. So, we need other, indirect methods to approximate it. In another thread, recently, I have given a mathematical demonstration that, for texts, the functional complexity, derived from the ratio of the target space to the search space, is bound to increase as the length of the meaningful text increases. So I have demonstrated that any meaningful (and dense) text of more than 10000 characters will certainly have a dFSCI higher than 1000 bits, and will allow a design inference. That is a way to face the problem, and my demonstration for a 10000-character threshold can certainly be lowered. The functional space of proteins is certainly different from the functional space of text. Moreover, while the length of text has virtually no limit, the length of individual proteins, and protein domains, has definite boundaries. Therefore, it is important to have a more precise measure of dFSCI, and indeed it can be done. One way is to go on with research about the sequence-structure relationship in specific proteins, an approach followed also by Axe, and try to understand how much function is "robust" to random chance. A lot of important information can be obtained that way, but much is still to be done in that sense. The Durston method, instead, is a simple and powerful method to approximate the target space of specific protein families, and it is based on the comparison of a great number of different sequences in the proteome that implement the same function in different species, and a brilliant application to that of the principle of Shannon's uncertainty. The numbers found by Durston are certainly an approximation, but they are the best simple way we have of measuring the function space of proteins. They do measure it, in reality, although the precision of the measure can certainly be investigated further. An interesting result of Durston's method is that it confirms, for the space of protein function, what I have shown for the space of meaningful texts: that the functional complexity increases as the length of the functional sequence increases. While that is rather intuitive, it's fine to have an empirical confirmation. I have shown that in my discussion with Peter Griffin (by the way, Peter, I am still waiting to know what I should do with your six short phrases :) ). I quote here what I wrote there: "Just some more support for my concept that dFSCI increases with length increase, this time empirical and regarding the protein space. I have taken the data from Durston's table giving values for 35 protein families, and I have performed a linear regression of dFSCI against sequence length. 
Here are the results for the regression: Multiple R-squared: 0.8105, Adjusted R-squared: 0.8047 F-statistic: 141.1 on 1 and 33 DF, p-value: 1.836e-13 So, empirically and for the protein space, values of dFSCI are strongly related to sequence length." gpuccio
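The output just quoted is in the format of R's lm() summary; an equivalent sketch in Python would look like the following, where the two lists are placeholders to be filled with the 35 (length, fits) pairs from Durston's Table 1 (only the beta-lactamase pair, 239 AA and 336 fits, is taken from this thread; the other values are dummies so that the script runs):

```python
from scipy import stats

lengths = [239, 300, 400]   # amino acid lengths; only 239 (beta-lactamase) comes from this thread
fits    = [336, 380, 450]   # functional bits; only 336 comes from this thread, the rest are dummies

result = stats.linregress(lengths, fits)
print("slope:", result.slope, "intercept:", result.intercept, "R-squared:", result.rvalue ** 2)
```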
Thanks :) Elizabeth Liddle
hey KF, on this point: "moving from S = 0, the default value, to S = 1." setting S=1 seems to be where the confusion is. specific identifiers, such as: discrete representation of some thing, protocol, effect/output etc should be all identified within the system before s=1? so the information contained in the grand canyon would be set provisionally to s=0 because protocol and output/function etc cannot be identified? Is this how the value of S is determined? junkdnaforlife
Hi Elizabeth, I'll risk a guess that the target space is the specific functional sequence, expressed as a string of amino acids (or their corresponding DNA code string) of length n, and a character set of length 20. The search space is the set of all finite sequences of length n, which figures as 20^n sequences for an n-length string. The target sequence complexity could be expressed in bits as -log2(1/20^n), which is the same for any single sequence in the search space. If multiple sequences could code for the same function, bit complexity could be reduced to -log2(k/20^n) for 1 ≤ k ≤ 20^n, but I'm just fiddling with the numbers. k needs to be really large, or n needs to be relatively small, for k to make any significant impact. However, the value of k has implications for other types of functions, such as the random selection of a single sequence from a generous search space (this was mentioned earlier) to be used as a key for encryption. For a space of 10^150, or any computationally non-traversable quantity, practically any sequence chosen at random will serve the purpose, and so will have the same function. Therefore k is very close to 10^150 (or equal to it). Such being the case, the limit of the function -log2(k/20^150) as k approaches 20^150 is 1. A random string, although expressing a function as a cryptographic key, in my reasoning, has a very low functional complexity, because the probability of finding the functional sequence by random search is certain, if "functional sequence" is defined as any random sequence of base^length. gpuccio, feel free to correct anything mistaken or unreasonable above. material.infantacy
I think this intuitive understanding of "a lot" is fine as the basis of conjectures. It just isn't good for anything beyond conjecture. It says nothing about the history of an object. Most specifically, it says nothing about the history of coding sequences. It might suggest research, but it is not a substitute for research. It is particularly pernicious when it leads to the conclusion that a mysterious and unseen entity has done something that requires the very power of assembly that you are denying to known and observable processes. I find it almost comical that the putative designer has the power to assemble sequences that have more combinatorial possibilities than the number of particles in the universe. How is it that the designer has access to the list of functional sequences? Petrushka
Onlookers: Just a footnote for now: subjective and objective are not opposites. To see this, consider what the sort of weak form knowledge we have in science and everyday life is: warranted, credibly true belief. Truth, or correspondence to what actually is here, is objective, and belief is subjective, where credibility is a weighting we exert. Warrant is the rationale that holds all together and gives us good reason to be confident that the belief is true. Now, let us apply that to a metric (system and standard of measurement, which implies scaled comparison with a conventional standard for a quantity; where scales may be ratio, interval, ordinal, and nominal) and the value delivered by it -- and some of this feels a lot like we have been over this ground before almost a year ago with the sock-puppet MG, and it was not listened to that time either. That is why I am simply speaking for the record; I frankly do not expect the likes of ES to be interested in more than endlessly and hypnotically drumming out favourite talking points. I do have a faint hope that I could be shown wrong. Scales and standards are of course conventional, and yet may also be objective and warranted. They are deeply involved with subjects who set up conventions, standards, models, units, etc etc. And yet they can be quite objective, speaking to something that is credibly sufficiently accurate to reality that we can rely on it for serious work. Let's focus on the metric model that is under attack: Chi_500 = Ip*S - 500, bits beyond the solar system threshold. The bit is a standard unit of information-carrying capacity, i.e. the amount in a yes/no decision, a true/false, an on/off etc. String seven together and we can encode the standard symbols of English text. And of course this is what we use to measure capacity of memory etc. 500 bits is enough to have 3.27*10^150 possibilities, which defines a space of possibilities, W. If we have one more bit than that, we have twice the number of possibilities, and two more would give four times, and so on. Objective fact, easily shown mathematically. As an order of magnitude, there are about 10^57 atoms in our solar system and there are about 10^45 Planck times per second. Run the solar system for the typical estimate of its age, and its atoms will have had 10^102 Planck time states [where some 98% of the mass of the system is locked up in the sun], where about 10^30 are required for the fastest chemical reactions. That is, we are looking at about 1 in 10^48 of the possibilities for just 500 bits. If the atoms are made into the equivalent of monkeys typing at keyboards at random, they are just not going to be able to sample a very large fraction of the possibilities. So, a search across W will be looking for a needle in a haystack, which is the point of the random text generation exercise shown earlier. Under such circumstances, you are going to sample straw with near certainty, equivalent to taking a one-straw-sized sample from a haystack 3 1/2 light days across. Even if a solar system lurked within, you would be maximally likely to pick straw. Next step, we take a system in some state E and measure the information used in it, Ip. If it is beyond 500 bits [as we routinely measure or as we may use some more sophisticated metrics deriving from Shannon's H, average info per symbol], it would certainly be complex, but are we looking at straw or needle here? That is where S comes in. 
There are many cases where E as we have observed -- objective again -- is not particularly constrained, e.g. we toss 500 fair coins or the like. But sometimes, E is indeed special, coming from a highly constrained, specific zone, T. This can be observed, e.g. the first 72 ASCII characters in this post are constrained by the requisites of a contextually responsive message in English. If they were not so constrained but were blindly picked by our proverbial monkeys typing, by overwhelming likelihood, we would be looking at gibberish: f39iegusvfjebg . . . If E were code for a functional program, that too would be very constraining, e.g. we have that rocket that veered off course for want of a good comma, and had to be self-destructed. Similarly, if E were something that implied information, like the specific way bits and pieces are put together to make a fishing reel, that too is very constraining and separately describable based on a function. Similarly, the amino acid sequence to do a job as, say, an enzyme is typically quite constrained. All of this can be measured by comparing degree of function with the variation of E, and we can in effect map out a narrow zone or island of function T in W. None of this is unfamiliar to anyone who has had to, say, design and wire up a complicated circuit and get it to work, or who has had to do a machine language program and get the computer to work. There is a threshold of no function, to beginnings of function, and there is a range of changes in which function can vary, sometimes better, sometimes worse. This is a bit vague because we are speaking generically here; once we deal with particular cases, metrics for function, and ways to compare better or worse, are going to come out of the technical practice. GP, a physician, is fond of enzyme examples; we can always compare how fast a candidate enzyme makes a reaction go, relative to how it goes undisturbed, and a whole statistical apparatus of analysis of variance across treatments and blocks can be deployed. We know this, and the objectors running around above -- once ES has confirmed that he has some knowledge of engineering praxis -- know this. So, the objections above are specious, and are known to be specious, or should be known to be specious. For in a great many cases of relevance, we can and routinely do identify functions that depend on complex and specific arrangements of components that are therefore information rich. So, we see that each term on the RHS of the equation is objectively justified. We can even justify, on objective grounds, moving from S = 0, the default value, to S = 1. We then can evaluate Chi_500, and if it is positive, then that becomes significant. For that means the observed E's are from a zone T that would be implausible to be hit upon by blind chance plus mechanical necessity. Nor can we appeal to differences in degree of function: gibberish has no relevant function, and 0 - 0 = 0; that is, there is no proper hill-climbing signal. [Dawkins' Weasel worked by smuggling in a known target and evaluating digital distance to target, rewarding closer-in cases. More modern GAs and the like start within an island of function and in effect reward superior function; the issue is 0 - 0 = 0, getting to islands of function in the first instance. So we see the real question-begging that has been going on all along, as has been repeatedly pointed out but ignored.] 
The analytical conclusion is that, with high confidence, if we see cases E from zones T in such large domains W, we have good reason to infer that this is because they were not blindly arrived at, i.e. they reflect intelligent design. This is abundantly empirically confirmed, with billions of cases in point, such as posts in this thread. The problem is that the same metric points to design in the living cell and in major body plans, which cuts clean across a dominant school of thought that is locked into the institutionally dominant evolutionary materialist worldview. In short, it is not politically correct. Which, ironically, is a subjective problem. GEM of TKI kairosfocus
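The metric itself is simple enough to sketch directly; the coin-toss and 72-character examples are the ones used in the comment above, taking 7 bits per ASCII character:

```python
def chi_500(info_bits, specific):
    # Chi_500 = Ip*S - 500, with S = 1 only when a case-by-case warrant for specificity exists.
    s = 1 if specific else 0
    return info_bits * s - 500

# 500 fair coins: 500 bits of capacity, but no warrant to move S from its default of 0.
print(chi_500(500, specific=False))     # -500: no design inference

# 72 ASCII characters of contextually responsive English text: 72 * 7 = 504 bits, S = 1.
print(chi_500(72 * 7, specific=True))   # 4: positive, beyond the solar system threshold
```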
Facial recognition is just around the corner. A couple of months ago I bought a painting at a garage sale. I thought it was a print of some good artist, but when I looked closely it was on stretched canvas and had paint texture. I looked up the artist and found he is a fairly successful landscape artist represented by galleries in major cities. I still think it might be a sophisticated reproduction. Prints of Thomas Kinkade are being sold for hundreds of dollars. So I started looking on the net for prints of my subject. I couldn't find any. So I took a picture of my painting and plugged it into Google image search. Within seconds, Google returned an image, a snapshot taken at a wedding reception at a hotel. In the background was my painting, my exact frame and all. The image of that painting was out of focus, because it wasn't what was being photographed, but it was unmistakably my painting. I have trouble understanding how the search is done, but I am convinced that recognition software is right here and now, not in the distant future. I've since done other searches on faces, and Google can find images of people. Even from small, fuzzy images. Petrushka
eig: "but drives motor controllers from input sensors in a [Bullet Physics] 3D environment which is running on the CPUs." very nice "I have the code faith" I never doubted lol! junkdnaforlife
And how do you quantify the target space and the search space? Elizabeth Liddle
Elizabeth, Science is done via observations- so yes we would see a protein doing something and then inquire about that protein. If we observe a protein doing nothing we would also want to know why it is there (I would assume). So we see all of this stuff going on- functions being carried out-> function is part of the OBSERVATION. We observe function and meaning. We can measure how well it functions and how many different configurations can perform that function. We can measure what is the minimum to perform that function/ convey that meaning. And we can measure the information based on all of that. Joe
Petrushka, The designer uses a GA to do both. Joe
Elizabeth: Yes, giving function as binary is correct. But I don't compute the number of bits in the sequence. I compute the number of functional bits for the function, that is, the negative log2 of the ratio of the target space to the search space: that expresses at the same time the probability of getting the function (not the individual sequence) by a random search, and the constraints that the functional state imposes on the sequence. Are we OK on that? gpuccio
You can measure some kinds of function, such as catalytic efficiency, but other functions are elusive. How do you measure the function of height, weight, length of tail feathers? These are known to have both utility (perhaps for attracting a mate) and tradeoffs. But the various versions of FSCI tend to revolve around sequence length, and there is no known way of determining why a certain sequence produces utility, and a neighboring sequence does not. Among other things, this means that there is no theory associating the length of a sequence with utility. It is not clear what the minimum length of a useful sequence might be or whether longer sequences are more useful than shorter ones. There is some reason for skepticism about length of codes, since it is not clear that the genomes of onions and amoebas are more functional than shorter genomes. Petrushka
OK, that's fine. As you hadn't given a quantitative definition, I thought it might not be. So would I be right in saying then, that to calculate the dFSCI of a gene you first decide: is it functional? If it is, it scores Function=1, if it isn't it scores Function=0. Then you compute the number of bits in the sequence in some way, and multiply the answer by the value of Function. And that's dFSCI. Is that more or less it? Elizabeth Liddle
So how do you measure the minimum code length or the minimum protein? That's part of my question about how a designer would work. Can you demonstrate with an example how you would determine, without using evolution, how to build a functional sequence? Petrushka
eigenstate (and Elizabeth): I am really amazed. Either I have become stupid, or I cannot follow your reasonings :) Well, before going on with answers to the previous posts, let's try to clarify the main point here. dFSCI is quantitative. Let's take an example from Durston, again. Let's take beta-lactamase. The definition of the function is easy. I quote from Wikipedia: "Beta-lactamases are enzymes (EC 3.5.2.6) produced by some bacteria and are responsible for their resistance to beta-lactam antibiotics like penicillins, cephamycins, and carbapenems (ertapenem) (Cephalosporins are relatively resistant to beta-lactamase). These antibiotics have a common element in their molecular structure: a four-atom ring known as a beta-lactam. The lactamase enzyme breaks that ring open, deactivating the molecule's antibacterial properties." The length of the sequence, in Durston's table, is 239 AAs. That means that the ground state (the random state) for that length is 1033 bits. The functional complexity is given as 336 functional bits. That means that, according to Durston's method, which has compared here 1785 different sequences with the same function, the functional space is 697 bits. Therefore, the functional complexity is -log2 of 2^697 / 2^1033, that is 336 functional bits. That is quantitative, I would say. What is your problem? Which "inputs" are you discussing? Please explain. The threshold problem is another point. A threshold is necessary to establish if we will infer design or not. The threshold must take into account the probabilistic resources of the real system one is studying. For a biological system, I have proposed 150 bits as a threshold, according to a computation of the maximal probabilistic resources of our planet in 5 billion years and considering a maximal prokaryote population. Again, the threshold is a methodological choice, and can be discussed. So: that is quantitative. If there is something that is not clear, or if you don't agree, please explain clearly why, and we will avoid discussing uselessly and wasting our reciprocal time. gpuccio
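The beta-lactamase arithmetic just given, as a sketch using only the figures quoted above (239 AAs, a 697-bit functional space, and the proposed 150-bit biological threshold):

```python
from math import log2

length_aa = 239                               # beta-lactamase length in Durston's table
ground_state_bits = length_aa * log2(20)      # ~1033 bits: the full search space
functional_space_bits = 697                   # Durston's estimate from 1785 aligned sequences

dfsci = ground_state_bits - functional_space_bits   # same as -log2(2^697 / 2^1033)
print(round(ground_state_bits), round(dfsci))       # 1033 336

print(dfsci > 150)   # True: above the proposed threshold, so a design inference is drawn
```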
I'm absolutely not saying that function is an illusion. I am saying that it needs to be defined pretty carefully if we are to measure it (or decide whether or not a thing is functional) objectively. I think this is perfectly possible. Elizabeth Liddle
@eigenstate:
How will you know if/when you’ve measured the minimum?
It depends. When a roller coaster has a minimum height requirement, they usually put up a sign that marks the minimum at a point X inches from the ground. So a tape measure would come in handy when setting that up. If there is a minimum weight requirement, you would use a scale. True, those are mighty complex devices, so you should leave all that to the experts. Joe
@Joe
We can measure what is the minimum to perform that function/ convey that meaning.
How will you know if/when you've measured the minimum? eigenstate
eigenstate:
Well, that’s tough cookies for FSCI and dFSCI as metrics then, huh?
Not at all. Ya see science is based on observations. We observe function and meaning. We can measure how well it functions and how many different configurations can perform that function. We can measure what is the minimum to perform that function/ convey that meaning. And we can measure the information based on all of that. Joe
Do I understand this objection correctly? Very loosely stated, by measuring dFSCI we call it "functional," which attributes functionality to it without adequately explaining on what basis we call it "functional." Did I get that right? If so, on the one hand it seems like a valid, logical argument. The determination that something is functional is subjective. How does one objectively measure what only exists subjectively? (If I've misunderstood, then everything after this is rather pointless. Maybe it is anyway.)

The trouble is that this reasoning runs counter to rational thought rather than complementing it. It holds up only if we choose to believe that the only difference between a supposedly functional enzyme and a random assortment of molecules is in our imagination. Or, expanding on it, that the difference between molecules organized to form a living frog and those that form a rock is purely subjective. We can choose to see function or choose not to.

I've seen similar reasoning before. It's not inherently illogical, but the result is that we explain what is remarkable by reassessing it and deciding that it's unremarkable. We look for the origin of a function, and then exclude one method because it requires us to subjectively identify the function as such. Reasonably, is not the fact that we expend resources attempting to explain the cause of a given function sufficient cause to label it "function"? It seems logical, but it leads to absurdity. I can understand if one speaks of the illusion of design. But the illusion of function? Isn't that just scraping the bottom of the barrel for objections? ScottAndrews2
@Joe, Well, that's tough cookies for FSCI and dFSCI as metrics then, huh? I understand you can say "We don't need no steenkin metrics", and that's your prerogative as a position to take, but your observation above discredits those who DO use "function" as a vector in the metrics they offer as a scientific/mathematical means of detecting design. eigenstate
@Elizabeth, It's best if he speaks for himself, but I can say from just our exchange that he DOES implicate an upper probability bound here as a threshold, which makes his metric numeric in nature -- you can't have non-numeric probabilities, as 'numeric' is implied in the term 'probability'.

I think he's approaching this from a "heap problem" perspective. If someone asked me how many hairs a man would have to have on his head to NOT be "bald", I'd be hard pressed to come up with a precise number. Perhaps I could offer one, but I'd be challenged to defend why X, and not X-1, or X+1, etc. Even so, I would have no problem saying a man with no hair on his head was "bald", and a man with a thick shock of hair was "not bald". How do I make such distinctions if I can't state X? That's the heap problem, as you are likely aware, and it's not a practical problem in that case, "bald" being quasi-quantitative, or qualitative about numbers rather than a discrete numerical metric.

I think gpuccio is suggesting that "functional complexity" is something like that, where he can't give you a precise measurement of the quantity, but can only judge there to be "a lot", or "not so much", intuitively or qualitatively. When there's "a lot", the idea is that 'a lot' cannot be achieved without intelligent design. That's my current state of reverse engineering his ideas on this. I'm still getting worthwhile explanations from his point of view as the posts progress, so I'm learning more, post by post. eigenstate
eigenstate, Where you are involved, it is most likely the far side, not the dark side. Function and meaning are things that are observed. Complexity can be described by mathematics, but function and meaning cannot, and it isn't even necessary. Joe
I have another metaphor for this kind of search. I have a Roomba, a robot vacuum cleaner. It has a severely limited range of behavior: go forward until an object is hit, change direction, go forward again. There are a few non-random modes for changing direction, but none of them involve knowing anything about the landscape. Nevertheless, it covers the landscape in a reasonable time. Petrushka
@Joe, Welcome to the dark side, Joe! That's a question for gpuccio. Function is a conceptual cousin of meaning, and while both may be describable quantitatively in principle, in practice we are currently unable to do so. Your objection is the foundation of the critics' rejection of FSCI and dFSCI -- way to go. Critics don't get the leeway to speak candidly in the way you get to, given your privileged status as a pro-ID member here, but occasionally it works out that you make points that fold back on ID, and so your use of "pathetic" gets applied in ways lowly critics couldn't apply it. If we can agree that such a request is a non-starter, then we are really getting somewhere. The trouble is, there are ID advocates here who claim that it can be done, and is regularly done, and in a straightforward manner, never mind the conspicuous absence of working examples. If you think that's a fool's errand, you should take it up with them. Their apologetics depend on that claim. eigenstate
Math for function? Talk about pathetic requests. Is there math for meaning? Joe
"Tactile search" is excellent :) Thanks! For many reasons, not least being the likelihood that the ability to do more than tactile searches (as plants do), but detect things (predators; prey) at at distance (using reflected light, chemical signals, vibrations, aka sight, smell and hearing) drove the evolution of intelligence itself (the possibility of forward modelling, and therefore of intentional behaviour). Evolution is as smart as a plant :) Quite smart, in fact. But not as smart as an animal. Elizabeth Liddle
Qualitative or quantitative? I was saying that he didn't seem to be proposing a quantitative measure. I think kf is, though. I agree re inputs. That seems to me the glaring flaw in Dembski's formula. It's only as good as the assumptions behind the inputs, and those are precisely what are at issue. Elizabeth Liddle
The phrase blind search is not and never has been appropriate. Evolution always tests the edges of what already works. If you need a metaphor, it is more of a tactile search, feeling around. It's true that it doesn't see ahead, but it isn't testing the entire space, just what's in the vicinity. Petrushka
@Elizabeth, I think gpuccio would say he IS advancing a quantitative measure, but one whose measurement must be preceded by human recognition to be bootstrapped. His most recent message to me suggests he really only wants to look at dFSCI, for practical reasons ("digital" implies "already quantified" at some level -- bits). That said, I'm in the midst of taking up the "quantitativity" of his "functional complexity" concept, and am currently thinking it's either a) not quantitative as an end-to-end procedure (it can't be resolved against real inputs in a way that produces discrete outputs), or b) quantitative, but he's confusing it with simple information entropy (the bitwise complexity of the string). b) is the leading contender as to his view right now, best I can judge, but as I said we are hashing that out right now. He may provide some good clarity on this.

But that is the key problem, the reason why information theorists who've tried (and many have) to develop quantitative models of functionality, or meta-systematization, have failed: the mapping of inputs into the model is incomputable presently. Whether that's because it's incomputable in principle, or just insuperably complex given the current state of our knowledge and tools, isn't clear, but that gnarly problem makes it no surprise that gpuccio or any other IDer here is somewhat thin on providing concrete examples of quantitative analysis of function. As for kf, he won't do any math for his examples driven from the inputs, so that's that. I asked him about doing the math for function in his recent geoglyphs example, but don't hold your breath. eigenstate
GP: Again well put. I note you are giving a simple first-pass measure that is going to be good enough for cases where things can be taken as flat random; of course, in more complex cases adjustments can be made. The approach you have given is effectively what was put on the table as a start point maybe 6 years ago.

The usual rhetorical game is to complain against a flat random assumption, to which the best answer is, first, that is exactly how we will approach, say, basic statistical thermodynamics or probability absent any reason to conclude bias. That is why a six on a die is odds 1 in 6. If we have reason to think something is jiggering the odds a bit, then we make an adjustment. For instance, real codes typically will not have flat random symbol distributions, but that is taken up in the shift to H as a measure of average information per symbol. This of course means that less information is passed than a flat random distribution "code" would do.

Down that road, we go to, say, Durston et al, and their table of 35 protein families, based on OBSERVED AA frequencies in aligned sequences. Thus, their FITS -- functional bits -- metric has in hand empirical evidence on distributions. You will see that when I gave three biological examples of the Chi_500 log reduced chi metric, I used Durston's published fits values. It is in that context that I then looked at the 500 bit threshold, and of course the families were well beyond it. (Remember, for living forms the issue is really going to be the SET of proteins, and of course again that set is well beyond 500 bits.)

My own twist is that we can also do a direct estimation of information carrying capacity, where we have multi-state storage elements, e.g. the 4-state elements in DNA that have a raw info capacity of 2 bits per place, though the actual code throws away some of that through the actual distribution. Not enough to make a difference, of course. Similarly, proteins are 20-state by and large, so 4.32 bits/element raw, but again we see a bit of a takeaway in protein families etc. But, not enough to take away from the basic point.

When it comes to things that do not have obvious string-type structures, things get a little more complicated, as you point out. For these, we can simply follow what is done in CAD, and go for the nodes-and-arcs framework that captures the relevant topology. We may also need to specify parts and their orientation at nodes. All of that can be reduced to structured data arrays, and we may then have an equivalent set of strings that gives us the implied information. We can then see if and how random perturbation to a modest degree affects performance.

Unsurprisingly, we soon see that the pattern of functional specificity to relatively narrow zones T in the set of possibilities W is a general pattern of things based on integrated function of enough components to run past 500 bits. That starts with simple cases like text in English; it goes on to things like computer programs; it goes on to cases like proteins -- how many slight alterations are at once non-functional! For a fishing reel, a bit of sand can make it grind to a halt so quickly you would be astonished. And you would be amazed to see how sand can seemingly vanish and transport itself, then rematerialise where you would think it could never get to: I have come to believe in the near-miraculous powers of sand. Especially if you get caught in a wave. [Van Staals are SEALED like a watch; that's how the boys at Montauk swim out with them in wet suits, to rocks.]
In short, the case of isolated islands of function in seas of non-function is a commonplace reality. That then leads to the next point, on what happens with blind searches of config spaces. Once we are looking at blind samples of very large spaces W, at or beyond 10^150 possibilities, where the available resources lock us to a sample of 1 in 10^48 or worse, we have no right to expect to hit isolated zones T by blind luck. The only empirically known way to get to T's reliably is by intelligence, whether by programming a system or by whatever means conscious intelligences use to do things like compose and write coherent text or invent complex music or whatever. That is what the likes of ES seem ever so desperate to avoid by any means they deem necessary. GEM of TKI kairosfocus
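A small illustration of the per-symbol adjustment kairosfocus describes just above: flat-random capacity per element versus Shannon's H over an observed, non-flat distribution. The skewed amino-acid frequency table here is invented purely for illustration; Durston et al. work from observed frequencies in aligned protein families.

```python
# Flat-random information capacity per element versus Shannon's H for a
# non-flat distribution. The skewed frequency table below is made up for
# illustration only.
from math import log2

def shannon_H(freqs):
    """Average information per symbol: H = -sum(p * log2 p)."""
    return -sum(p * log2(p) for p in freqs if p > 0)

print("DNA raw capacity    :", log2(4), "bits/place")               # 2.0
print("Protein raw capacity:", round(log2(20), 2), "bits/element")  # ~4.32

# A hypothetical, mildly skewed 20-state (amino acid) distribution
skewed = [0.10] * 5 + [0.05] * 5 + [0.025] * 10     # sums to 1.0
H = shannon_H(skewed)
print("H for skewed freqs  :", round(H, 2), "bits/element")
print("per-element 'takeaway':", round(log2(20) - H, 2), "bits")
```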
eigenstate: I see you have written much, so I will answer post by post. This is for post 12.2. I think you don't read what I write, or just don't want to understand what is really simple. Well, I will state again in brief my definition (and procedure) for dFSCI:

a) We observe a digital sequence.
b) If we recognize a function for that sequence, we define it and the way to measure it.
c) We try to measure the search space (usually approximated by the combinatorial complexity of a random string of that length in that alphabet); then we try to measure, or approximate, the functional space (the number of sequences of that length that implement the function).
d) We divide the numerosity of the functional space by the numerosity of the search space. That is the probability of finding a functional sequence in the search space by a random search.
e) dFSCI in bits is -log2 of the result. We are just expressing that probability as bits of functional complexity.
f) We take into account any known algorithm that can generate the sequence in a non-random way, and if there is one, we modify the computation to include the modifications implied by adding the necessity algorithm to the random search.

You say: "So, clearly, there is some quality of “complex function” you are describing which is NOT the combination (or sum) of functionality and complexity. I searched UD a bit to see if I could locate some treatment on this from you, but could not."

How can you say that? I have repeated many times that a complex function is simply a function that is highly unlikely to emerge in a random way. The functional complexity is the complexity of the function, not of a single string that expresses the function. It is the -log2 of the ratio of all the strings that express the function to all the possible strings. Therefore, in your example of a random string: a) the string is complex; b) it expresses a function (possible use for encryption); c) the function is simple (almost all the strings in the search space express it). It is simple. Why don't you understand it?

I will not discuss the geoglyph, because I prefer to discuss only digital sequences (that's the reason for the "d" in dFSCI). It's simpler. Therefore, the problem of how to measure the bits is already solved, in the way I have shown.

You say: "Consider your reaction accelerator function. If we take that as our test, “bits” can only make sense as a quantitative measure of the phase space the function is enclosed by. But that phase space is NOT captured by casually “estimating” how many “yes/no” decisions (i.e. bits) are embodied in the function. Bits are not the currency of function. We might use bits as a means of simulating the mechanics of a particular function via finite automata, and then count how many bits we would need to implement a software program that models these mechanics. In that case, though, you are only measuring bits of a software DERIVATIVE, not the functionality itself, measuring the size of a minimalist program that emulates that function."

What do you mean here? A protein, or a protein gene, is a digital sequence. Its complexity can be measured in bits. The ratio of functional sequences to possible sequences can be calculated, and expressed in bits. It is a measure of the functional complexity, because it measures how much the function constrains variation in the sequence. What is your problem? We need no software, and no derivation or emulation of any kind.
You say: "As you are aware (I believe) there is no way to deterministically establish a program is the shortest possible program that will produce a given output in a given description language (cf. Kolmogorov). We can only arrive at our “best effort”, which isn’t a “true” measure of the required bits for some function, but a “best guess”, and even then, it’s not actually measuring function, but a software PROXY for some function."

We just need to take into account known algorithms, so there is no problem here. If a compression algorithm is known and empirically testable, we take it into account. Otherwise, we don't. This is a common objection, and it has no empirical value.

You say: "The reason that’s so hard is that anything you are likely to name as a function is going to be millions of bits of functional complexity, even for mundane, simple, known-to-be-perfectly-mechanical-non-designed functions. The degrees of freedom that are implicated in just your AA sequence that drives accelerated reactions is going to be ENORMOUS."

For instance, if a functional protein is 120 AAs long, the maximal dFSCI for that function (assuming that only one sequence implements it) will be -log2 of 1:20^120, that is, 518 bits. That is a big number, but perfectly computable. Very big, certainly. Big enough to make a random origin empirically impossible. But easy to compute. And your statement that "anything you are likely to name as a function is going to be millions of bits of functional complexity, even for mundane, simple, known-to-be-perfectly-mechanical-non-designed functions" is simply false. Please, give examples of digital sequences with functional complexity (as defined) of "millions of bits", or even much less, that are not designed. I am waiting. gpuccio
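gpuccio's steps (c) through (e) above reduce to a one-line calculation once the inputs are in hand; here is a sketch of it in Python. The inputs themselves (sequence length, alphabet size, number of functional sequences) still have to be estimated separately, which is the contested part.

```python
# Sketch of gpuccio's steps (c)-(e): the inputs are supplied, not derived here.
from math import log2

def dfsci_bits(seq_len, alphabet_size, n_functional):
    """-log2 of (functional space / search space), expressed in bits."""
    search_space_bits = seq_len * log2(alphabet_size)   # step (c): ground state
    functional_space_bits = log2(n_functional)          # step (c): target space
    return search_space_bits - functional_space_bits    # steps (d)-(e)

# gpuccio's worked example: a 120-AA protein with (by assumption) a single
# functional sequence -- the maximal possible value for that length.
print(round(dfsci_bits(120, 20, 1), 1), "bits")   # ~518.6
```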
awesome :) I'm not sure that gpuccio is actually putting forward a quantitative measure at this point. Have you taken a look at kf's? That seems to involve actual equations! However, I don't know how he is quantifying the inputs. Do you? And, kf, can you explain? I'm particularly interested in P(T|H). That seems to be the key term. Elizabeth Liddle
@junkdnaforlife, Well, it won't be FSCI as used by ID advocates here. That's an acronym pretending to be an applicable numerical concept, a rhetorical device.

A human child has a supercomputer that embarrasses the largest supercomputers we've ever built, and surpasses the largest cloud-based clusters we could assemble if we tied all the earth's computers together into one high-speed network computing mesh: 100 billion+ neurons and 100 trillion+ synapses, all running in n-to-n parallel topology and firing in real time (and so densely packed in 3D space that communication times are a tiny fraction of what we would hope to manage with our amalgamated hardware). But our computing power and processing speed advance by the day. We fell off Moore's Law some time ago, but the advance is still very rapid. We're still figuring out the wiring and firing mechanics of spiking neural nets and related systems that provide such astonishing performance in recognition by the small child.

I have the code faith, although as a guy in my forties, I may not live long enough to see that much growth in the computing power we can marshal, but who knows? Facial recognition doesn't demand the whole of the human brain's resources, and that's something advancing very quickly as our software infrastructure gets better and better in terms of neural network libraries and runtimes.

OT: As an aside, I'm working on a first cut of a GPU-powered multi-layer perceptron (my first time developing for a GPU host directly). That's not an idea I came up with (I'm using GPUMlib -- C++ and CUDA), but just the initial "Hello world" tests are amazing in terms of performance vs. a similar perceptron running on a CPU (8 cores). The card is a fairly high-end if a bit outdated GTX470, but wow, can that thing crunch. You know the parallelism is going to give it a big boost over the 8 cores, but when you see it run, it's quite dramatic. You could power a pretty sophisticated facial recognition program with just that card, I think (maybe it's already been done). My perceptron doesn't do image recognition, but drives motor controllers from input sensors in a [Bullet Physics] 3D environment which is running on the CPUs. eigenstate
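For anyone curious what the perceptron eigenstate mentions looks like in outline, here is a toy, CPU-only forward pass in Python with NumPy. It is only a sketch of the general idea (untrained random weights, invented layer sizes), not the GPUMlib/CUDA code he describes.

```python
# Toy multi-layer perceptron forward pass: sensor inputs -> motor outputs.
# Untrained random weights and made-up layer sizes; illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_hidden, n_motors = 8, 16, 4

W1 = rng.normal(scale=0.5, size=(n_sensors, n_hidden))
W2 = rng.normal(scale=0.5, size=(n_hidden, n_motors))

def forward(x):
    h = np.tanh(x @ W1)      # hidden layer activations
    return np.tanh(h @ W2)   # motor commands squashed into [-1, 1]

sensors = rng.uniform(-1.0, 1.0, size=n_sensors)
print(forward(sensors))
```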
This is true, but I'm not at all sure how it supports gpuccio's case. It seems to go the other way to me (and is a point I've made myself before now - inanimate, presumably "unconscious" algorithms are perfectly capable of complex perceptual tasks. Interestingly, many of them do it using internal evolutionary algorithms, i.e. learning algorithms, as it is likely our own brains do as well.) But don't let me start another derail! What we want is for gpuccio to provide the objective algorithm that detects FCSI, but he seems to be arguing that there isn't one - you need a conscious being to do it. Correct me if I'm wrong, gpuccio. Elizabeth Liddle
F/N: Of Ribosomes, process sequence charts, spinning reels and the objective reality of FSCO/I, here. In poker they would say: read 'em and weep. EP, thank you ever so much. Timely!

(And, oh yes, once we see string data structures, we can build up more complex arrays by use of pointers etc. This is how a PC -- memory is a stack of 8-bit strings as a rule -- works, and for instance, a CAD package file presents the "wiring diagram" structure of a system by doing just that. Each bit in such a system is a structurally organised Y/N question and its answer. The text in this post is a case in point too, each letter or alphanumeric glyph being seven Y/N answers, with another one used as a parity check.

If you want to see what that would imply for a three-dimensional functionally specific and complex object, think about reducing the Abu Cardinal exploded view, Fig 6 in the linked, to a more abstract nodes-arcs view, where the part number and orientation of each part are coded [think about how assembly line machines would have to be arranged to effect this on the ground!], and there is a data structure that codes the nodes and arcs. The assembly process for the Cardinal will be similar to the process shown in Fig 4, which is an assembly diagram for the protein chain assembly process, done by an industrial robotics practitioner. This is yet another level of nodes-arcs diagram, and the text in its nodes would be further answers to Y/N questions at 7 bits per character, constituting prescriptive coded information. These all run past 500 - 1,000 bits so fast that it is ridiculous.

I wonder, can we still get something like the old Heathkits with their wonderful assembly manuals? Maybe that will help people to begin to understand what we are dealing with; those manuals were usually dozens of pages long, and richly illustrated. I still remember the joy of assembling my very own ET 3400A [I modded the design by bedding the crystal in bluetac, cushioning it], a classic educational SBC which sits in one of our cupboards here 25 years later; a pity the volcanic fumes damaged the hardware while in storage.) KF kairosfocus
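The 7-bits-per-character counting convention kairosfocus uses for text is easy to reproduce; a trivial sketch follows (the sample sentence is arbitrary, chosen only for illustration).

```python
# Count a text string at 7 bits per character (the ASCII convention used
# above) and compare with the 500- and 1,000-bit thresholds.
def ascii_bits(text, bits_per_char=7):
    return len(text) * bits_per_char

sample = "this is a functionally specific English sentence, typed here purely for illustration"
bits = ascii_bits(sample)
print(len(sample), "characters ->", bits, "bits")
print("past 500-bit threshold? ", bits > 500)
print("past 1,000-bit threshold?", bits > 1000)
```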
@kf, Can you supply just one list of the yes/no questions you refer to here? I understand bits as a binary state (e.g. yes|no), but I do not understand binary states or bits to be the units of functionality, or of the complexity of functionality. Here again, some actual delivery on the applied concepts you talk about would go far, where your abstract references and generalities don't get you anywhere at all. How do you go about specifying a system? Take a brutally simple system, no need for more-than-universal-atoms, just a system which you think has a complexity of a handful of bits. That wouldn't take long (if you can do such a thing at all), and it would be EXTREMELY educational on this topic. eigenstate
@gpuccio#12, I'm out of time for tonight in walking through your posts #10 and 12, but will proceed as I have time tomorrow and beyond. In re-reading your #12, though, I wanted to pull this out and comment on it before I go for now: I said:
If my goal is to secure access to my account and my password is limited to 32 characters, a random string I generate for the full 32 characters is the most efficient design possible.
To which you replied:
Yes, and any random string of 32 characters will do. Function, but not complex function.
So, clearly, there is some quality of "complex function" you are describing which is NOT the combination (or sum) of functionality and complexity. I searched UD a bit to see if I could locate some treatment on this from you, but could not. In any case, if you grant that my 32 char string is functional for our purposes, and you have, I was offering a random sequence of characters as the content of that string to give it maximum complexity. That is, maximum complexity per information metrics (I, H). I chose this specifically to "peg the needle" in terms of being "functional" and "maximally complex". But clearly, "bits" as an information theory metric is not what you are going for, even though you (curiously) say the answer is rendered in "bits" as your units.

If you want to measure "functional complexity" you are either a) confused in responding to me as you have, as a random string has maximal complexity as the substance for the function, or b) you are confused in thinking that "bits" is a quantitative measure of functionality. Given what you've said above, now (repeatedly), it cannot be a), so you must be objecting on the basis of b). And this explains why you are neither able to quantify functionality, nor interested in doing so. It's not a matter of bits for you.

A good way to assess this problem, as I keep pressing for, is to actually try to APPLY the concepts in a quantitative, rule-based way, and test it. Consider your reaction accelerator function. If we take that as our test, "bits" can only make sense as a quantitative measure of the phase space the function is enclosed by. But that phase space is NOT captured by casually "estimating" how many "yes/no" decisions (i.e. bits) are embodied in the function. Bits are not the currency of function. We might use bits as a means of simulating the mechanics of a particular function via finite automata, and then count how many bits we would need to implement a software program that models these mechanics. In that case, though, you are only measuring bits of a software DERIVATIVE, not the functionality itself, measuring the size of a minimalist program that emulates that function.

As you are aware (I believe) there is no way to deterministically establish that a program is the shortest possible program that will produce a given output in a given description language (cf. Kolmogorov). We can only arrive at our "best effort", which isn't a "true" measure of the required bits for some function, but a "best guess", and even then, it's not actually measuring function, but a software PROXY for some function.

Without that in mind, you have a case where, for example, KF supposes that the roundness of a geoglyph (an OCP per the terms above) surely comprises more than 300 bits of functional complexity, which is just an absurd framework for assessing that phenomenon. What are those "yes/no" questions that we have 300+ of in the case of the geoglyph? Oh, right, they aren't yes/no questions? That's why I asked! What do those "bits" represent, then? As far as I can tell, nothing. I'd be happy to have KF explain where and how he allocates those bits, but the point here is that thinking about "bits" as a measure of function in the SAME WAY ONE THINKS ABOUT INFO THEORY "bits" is a fail, unless you want to stipulate that functionality (er, functional complexity) is synonymous with information entropy. But that can't be, as you've shown, else a 32 character random string for a password would be "maximally functionally complex".
So it's something else, something that's not been defined, or even engaged as far as I can tell, on this forum. If you hired me as your technical consultant, I'd first ask for a big raise as hazard pay on this topic, then I'd pursue something like the above: functional complexity as the number of bits required for a computer program that serves as a proxy for the actual phenomenon. That's still a way to get nowhere, probably, and likely to get laughed at, as we do, but it would AT LEAST be a heuristic that could provide some semantics for the metric, some grounding for what "bits" might measure in a serious approach to the concept.

The reason that's so hard is that anything you are likely to name as a function is going to be millions of bits of functional complexity, even for mundane, simple, known-to-be-perfectly-mechanical-non-designed functions. The degrees of freedom that are implicated in just your AA sequence that drives accelerated reactions are going to be ENORMOUS. That's not a boon for ID, that's a problem. It reflects the information-is-physical/physical-is-information aspects of physics, which puts ANY functional specification WAY over the UPB, even for the most basic, banal function you could point to. eigenstate
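eigenstate's point that a "shortest program" can only ever be bounded, never established, is the standard Kolmogorov-complexity caveat. A crude way to see it is that a general-purpose compressor gives an upper bound on a string's complexity, never the true minimum; a small sketch, with zlib standing in for any compressor:

```python
# Compression gives only an UPPER BOUND on Kolmogorov complexity; the true
# shortest program is uncomputable in general.
import os
import zlib

random_bytes = os.urandom(1024)     # effectively incompressible
repetitive   = b"AB" * 512          # 1024 bytes, highly compressible

for label, data in [("random    ", random_bytes), ("repetitive", repetitive)]:
    bound = len(zlib.compress(data, 9))
    print(f"{label}: {len(data)} bytes raw, <= {bound} bytes compressed")
```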
eigenstate: "It can only be accomplished by the “magic” of conscious beings." What about facial recognition algos vs human recognition? facial rec algos fail miserably (swamping false positives) vs human. a human child can easily target the match within seconds with very few false positives. eventually we will get there with facial rec algos, so I wouldn't curb stomp fcsi based on the current reliance on human consciousness to identify it just yet, have some code faith junkdnaforlife
F/N 2: It has apparently not soaked in to ES that the FSCI threshold is not a hard barrier; it is a matter of the implications of the ever increasing scope of blind search. Beyond a certain point a challenge becomes insuperable, and it is most definitely not "question-begging" or the like to point out that the only known and routinely observed way to get past that hump is intelligence. In short, we have here a massively supported observation, and a needle-in-a-haystack analysis that points to why it is so.

To put it another way around, it is always possible to have "lucky noise" making a pile of rocks falling down a hillside come up trumps reading Welcome to Wales or the like. But, given the isolation of such a functional and contextually relevant config in the space of possibilities for rockfalls, if you see rocks spelling out "Welcome to Wales" on a hillside on the Welsh border, you are fully warranted to infer to intelligent design. For reasons that are not too hard to figure out, if one is willing to accept the POSSIBILITY of intelligence at the given place and time.

(And we suddenly see the real rhetorical significance of all of that snide dismissal of "the supernatural," even when we see that from the very first days of the design movement in 1984, with Thaxton et al, it was highlighted that the empirical evidence for biology warrants inference to intelligence, not to identifying who the intelligence is, and/or whether it is within or beyond the cosmos. There is a side to the design inference that does point beyond our cosmos, the cosmological, but we can observe a very studied, quiet tip-toeing by Hoyle's graveyard on that. Sir Fred's duppy, leaning on the fence and shaking his head, says: BOO! EEEEEEEEEEEEEEK! . . . ) KF kairosfocus
F/N: Lotteries are designed to be winnable within the likely population of purchasers; i.e. without winners no-one would think they have a chance (even in the power-ball case, where the idea is that the "growing un-won prize" will pull in more and more of the gullible). In short, anyone who makes a lottery that is the equivalent of a needle-in-a-haystack search is incompetent, and we had better believe that the designers of such are anything but incompetent; this is a case of the dictum in finance that if you are naive or thoughtless about ANYTHING, there will be someone out there more than willing to take your money.

The problem GP identified with a search-resources gap is real. And the proper comparison is not lotteries -- designed to be of winnable scope of search -- but needles in haystacks, or, for code-bearing strings as we see with DNA or as is implied by protein AA chains, monkeys at keyboards. (This is a case of pointing out a real disanalogy, and a more correct analogy or two! Analogies that, as it happens, include one that was actually developed in the context of scientific analysis where this sort of probability threshold challenge first emerged: statistical mechanics, as Borel reminds us.)

The key issue is that once we have a string of 500 or more yes/no questions to specify the config of particular components to get something to work [relevant and obvious for string data structures such as DNA and AA chains, implied for node-and-arc breakdowns of complex organised objects like ATP synthase or the flagellum or kinesin walking trucks on the microtubule highways of the cell], we have swamped the atomic resources of our solar system -- our effective universe for chemical-level interaction. Under those needle-in-haystack conditions, a small scope of blind search -- 1 in 10^48 or less -- is going to be so small that the only reasonably expected "find" will be straw, the bulk of the space of possibilities. That is, non-functional configs.

Again, the only empirically warranted means of getting to FSCI is intelligent design. We are -- on a priori materialism -- being invited to ignore that fact, and discard basic inductive inference, then resort to speculating on how it "must" have been in the past against all odds. But the odds are trying to tell us something. Namely, if you see complex digital code that effects algorithms and data structures for complex functional processes, or complex assemblies that make motors and other machines, then just what is it that, in your experience, best explains it? Then too, on the scope of the observed cosmos, what sort of causal process is a credible explanation of what you are seeing, and why?

I am also beginning to think that we are dealing with a generation that is too hands-off of technologies to have it in the bones just how complex and specific even something as mundane as a bolt and nut or a gear is, much less something like a computer or cell phone motherboard, or a motor. So, we see a willingness to allow simplistic simulations to mislead us -- I think here especially about that "voila, poof" case of a clock self-assembling out of gears that just happened to mesh right via a genetic algorithm. I don't think that the creators of such understood just how hard it is to centre a shaft and axle, or how gears have to be keyed or otherwise locked to the shaft, or just how complex gearing is, and what it takes to get gearing that meshes right across a gear chain. Or the effects of thermal expansivity, or a lot of other things.
There is a reason why electrical, electronic and mechanical designers and programmers are paid what they are paid. Just because, thanks to the magic of mass manufacture, we can get their products cheaply, may be breeding that familiarity that leads to contempt. And, don't let me get into code interweaving, which is known to be the case in DNA. I never even bothered to try that, giving most fervent thanks instead that by my time we had cheap 2716 EPROMs that held all of 2 k bytes, and that we could use 74LS245 8-bit bidirectional bus interfaces and 2 k byte RAMs to build emulators!

(Do you know how much control can fit into 1/2 - 1 k byte, once you are doing machine language programming? But to do as much in 128 bytes, i.e. 1 k BIT, is a very different story! BTW, this is one reason why I think that the Cambridge initiative to put the Raspberry Pi out there as a US$ 25 - 35 PC on a credit card sized board is so important: it re-opens the world of hands-on electronics tinkering at a new level. That's why I am feeling my way to a new generation of intro tech courses that build on PC hosts and target systems that interface to the real world. The mechatronics age is in front of us, but too often we are blind to it; not to mention its implications.)

So, I think the best advice I can give is: go get an el cheapo spinning reel or the like, and take it apart, seeing how it works, then put it together again. Think about what a worm gear or helically cut gearing is going to require, and what has to go into specifying the integrated components. And in case you wonder about the wobble of such a reel, then know what you are paying for when you go for reels that cost ten times as much. KF kairosfocus
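To put rough numbers on the needle-in-a-haystack comparison kairosfocus makes in the two comments above, here is a sketch. The resource figures (10^57 atoms, 10^17 s, 10^14 events per atom per second) are illustrative round numbers in the spirit of the series, not measured values, and the Chi_500 line simply restates the metric from the original post, Chi_500 = Ip*S - 500, with S treated here as a 0/1 specificity flag.

```python
# Rough comparison of blind-search resources with a 500-bit config space.
# All resource figures below are assumed round numbers, not measurements.
from math import log10

atoms          = 1e57    # atoms in the solar system, order of magnitude
seconds        = 1e17    # rough age of the solar system in seconds
events_per_sec = 1e14    # generous fast-chemistry event rate per atom

max_events = atoms * seconds * events_per_sec    # upper bound on "tries"
space_500  = 2.0 ** 500                          # ~3.3e150 configurations

print(f"max events      : ~10^{log10(max_events):.0f}")
print(f"500-bit space   : ~10^{log10(space_500):.0f}")
print(f"sample fraction : ~10^{log10(max_events / space_500):.0f}")

# The log-reduced metric from the original post, Chi_500 = Ip*S - 500,
# with S treated here as a 0/1 flag for functional specificity.
def chi_500(Ip, S=1):
    return Ip * S - 500

# A single 336-fit family (the beta-lactamase example earlier in the thread)
# stays below the threshold; the argument above is that the cited families
# and whole protein SETS run past it.
print("Chi_500 at 336 fits :", chi_500(336))
print("Chi_500 at 1000 bits:", chi_500(1000))
```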
That’s it. That’s exactly my point. “Function”, as you say, “defies objective formalization”. It’s perfectly true. Function is related to the conscious experience of purpose, and all conscious experience in essence “defies objective formalization”. That’s exactly the point I have discussed many times with Elizabeth, and that she vehemently denies.
Well, for the record, I vehemently deny it, too. I'm first just trying to understand your position, and the ramifications of that position. So your saying "That's exactly my point" reflects a level of understanding on my part about your position, which is great. But I see no reason at all why "function" would be impossible in principle to formalize and systematize in quantitative terms. If nothing else, such a position on my part (and thus I suggest on your part) would produce the response "function: ur doin it rong", perhaps with a lolcat peering at a flowchart as an attending image. I'm not sure how else to characterize "defies objective formalization" on your part except to understand it as some reflection of a romantic superstition. It doesn't matter how I understand its internals, though, so much as it matters that I understand why you refuse to provide any formalization, or even quantitative rules for what you refer to as a "metric". I get that now, thanks. I haven't read that exchange with Elizabeth, but maybe I'll profit from going back to find and read that.
And so? What shall we do? We build our science suppressing the concept of function, because it “defies objective formalization”?
Of course not. But neither should we be satisfied with defeatist superstitions and magical thinking. If you can describe something rigorously, formally, you can "demystify" it, and systematize it, and remove it from the realm of credulous intuitions about our own consciousness.

Information theory -- the vanilla kind -- is a good example. We came quite close to the present day, historically, without a numerical model for channel-based information. It wasn't until Thomson developed a model for statistical mechanics (and this was in the late 19th century) that we had maths for information theory (Shannon discovered the same model in a different context several decades later). Before that, we thought we might "recognize" information, us magical humans, but we were ignorant in that respect; we did not understand the mechanics of entropy, disclosure and statistical analysis of symbol sets. The answer wasn't to resign ourselves to "recognizing information when we consciously recognize it", or to agree by consensus that some symbol set had "a lot" of information, and some other set "seems to have less". It was a hard problem, but it's one we've made progress on, and have systematized and formalized in ways that do NOT rely on human recognition. We can quantify Shannon's H with a computer program now, easily.

As a negative example, by the way, see Werner Gitt's hapless attempts to define the "five levels of information", including "apobetics". As goofy as that guy's ideas are, though, they are at least laudable as an attempt at systematizing information in a way that captures intent, meaning and purpose, if a failed one.

FSCI, insofar as it retreats from the challenge of systematizing and formalizing its concepts in quantitative terms, then, has a very bleak future. Its use is just as an informal rhetorical tool, applicable for casual debates in settings like this one, so far as I can see. For my part, I don't suppose 'function' is magic, or intractable in principle. It's a hard, complex problem, and one we aren't ready to tackle directly, and must build the knowledge infrastructure for first, before we can expect major headway to be made. Similar to the way the infrastructure had to be developed over decades that enabled us to measure, watch, and manipulate neurons and synapses and related machinery in the human brain as a predicate for making headway on the hard problems of cognition.

I know you didn't develop FSCI for me, or any critic, and are not in the least disturbed by the critics' shrug, but as I understand it in light of what you've said here, FSCI is hard pressed to command more than a shrug -- meh. It's not an interesting tool that can take us anywhere on the key questions, scientifically.
We build our science suppressing the concept of “meaning”, because it “defies objective formalization”?
Science has an epistemology to protect and preserve, else it has no knowledge to offer at all. So, it's not a matter of suppression as *distaste* for the subject, but the demand that anything integrated into the epistemology doesn't NUKE that epistemology, as this would. A divine foot in the door, so to speak, threatening all the models that integrate it or acknowledge it. "Meaning" is suppressed conceptually in precisely the same sense "information metrics" were suppressed conceptually prior to Boltzmann, Thomson, and Shannon and Chaitin, et al. It wasn't coherent enough to be integrated into scientific models prior to that, so it's non grata for the very best of reasons. If "meaning" or "function" is similarly inchoate, and it is at this point in time, then if we care about the integrity of our knowledge and models, it is similarly something we talk about, but shun and eschew as an ingredient in our science. It's still voodoo. It may not always be thus, and given the march of science, I expect it to be "demystified" at some point in the future in such a way that it can be actually implemented as a set of operating metrics for our models.
We build our science suppressing the concept of “objective formalization”, because it “defies objective formalization”? How do you define “formalization” without the concepts of meaning, of cause and effect, and many others, that require a conscious being to be understood? Absolute nonsense!
Well, developing a formalized model of consciousness would be a very elegant means of doing that. I won't say that's the only means to do it, but I think the two are tightly related as physical phenomena; to understand 'meaning' and 'function' in the anthropomorphic sense is to understand consciousness to a significant depth, and vice versa. That's the great thing about eschewing those superstitions, though: the problems are hard, daunting, but there is no temptation to cop out and satisfy oneself with lazy retreats toward appeals to a cosmic designer or a "supernatural mind", etc. It's all natural phenomena, exquisitely challenging natural phenomena, to model and reverse engineer.
eigenstate
@gpuccio#10
This kind of reasoning is absolute nonsense, and it is really sad for me that intelligent persons like you and Elizabeth go on defending nonsense. However, sad or not sad, I go on. The role of the conscious observer is very simple: there is a protein that has an objective function, but that function can only be described by a conscious observer. Why? Because only conscious observers recognize and define function. An algorithm can perfectly recognize some specific function, after a conscious observer has programmed, directly or indirectly, the properties of that particular function into the algorithm. But not before.
OK, this is good because it teases out a further aspect of the problem here. Let's just stipulate for the moment that only conscious observers recognize 'function'. That's highly problematic in itself in my view, as if you can understand what enables recognition you don't necessarily need a conscious being to do it, unless you also want to stipulate that a non-biological machine which has trained neural nets (or some other mechanism) to recognize 'function' counts as conscious as well. But provisionally, let's understand that's the case: conscious humans are needed to recognize function initially. You allow (at least!) that we may define some of those functions in a rigorous, formal way such that we can implement algorithms that perform those recognitions, *after* they have been pioneered by conscious humans. Very well.

If that's true (as we are stipulating), then necessarily there is no "meta-recognition" for functions, which is to say there is no way to reduce function recognition to an algorithm (or maybe it's better to call it 'automata' there, since in practice such recognitions would not be algorithmic in the strict sense, but rather machine-learned, through neural nets and the like). If there is no algorithm for meta-recognition of functions, in the way humans can meta-recognize, there necessarily CANNOT BE A WAY to apply an objective test for function. It can only be accomplished by the "magic" of conscious beings. That's problematic on its face, scientifically (how much science do you know that operates on "can't define it quantitatively but I know it when I see it, trust me!"?), but the real kicker is that if all the above is true, "designedness" is not a discrete physical property that can be derived in an objective, systematic way.

I am taking care to use "objective, systematic", in order to distinguish from just "objective" being taken to mean "not dependent on [subjective] mind or will". "Objective, systematic" is used to point to a true metric, a numerically-based, measurable and quantifiable set of attributes that inhere in an OCP itself. That's damning for ID, if it's true. The whole enterprise is based on the conjecture that "designedness" obtains in objectively determinable ways, no? And particularly as it pertains to OCPs? If yes, then your conditions make that impossible. ID can garner objective consensus on what objects it has observed empirically to be designed, but it can't weigh in on OCPs in any objective, systematic way. It can only hope that humans agree to "recognize" OCPs as intelligently designed or not, to prevent further controversy. But so long as there is not unanimous recognition, ID is hopelessly impotent on these terms. It's no more than an "Is too, is not" back-and-forth fiasco, then.

And that's a great and valuable feature of science, which ID foregoes in this case. "Let's let the numbers decide", and "the scores from the competing predictions should settle this one for us" reduce "human recognition" to simply evaluating the scores and results, and this is awesomeness. If FSCI is fundamentally hitched to "I know it when I see it, based on what other things I've seen designed", ID has very humble ambitions, and prospects, indeed. But maybe you do suppose that recognition can be systematized, and functionality quantified? If so, then it's wide open in terms of what ID might hope to accomplish. eigenstate
The point is, it is not important how we define dFSCI: the important thing is that it works. There are conceptual reasons why we define dFSCI as we do, but in the end they would be of no value if dFSCI did not work. It does work, and we can verify that on all the objects for which we have a reasonable certainty that they were designed or not. So, again, it is an empirical procedure. And it works.
How dFSCI is developed and determined is crucial. Being empirically consistent in terms of just "tagging" objects as designed does NOT, in itself, confer an epistemic basis for rendering judgment on objects of controversial provenance (like your example of biological information). If you can develop a metric which measures something *intrinsic* in the object, that is epistemologically valuable in assessing objects of controversial provenance (OCPs, here forward) in a way that empirical tagging with "designed" of various objects we can see being designed is not. That's because if your FSCI is intrinsic to the object, OCPs are quite possibly amenable to detection or measurement of this metric, and therefore controversial provenances might be adjudicated, and intelligently designed objects might be argued for reasonably where this intrinsic metric is determined. (Note that this means that for a given object, the provenance of the configuration and genesis of the object is unknown, and your FSCI -- whatever this other metric would be called, because it's not what you are describing -- would be determined just from examination and testing of the subject, without access to a design/non-design view of how it got that way.)

But as it is (correct me if I'm wrong), it's just an external assessment -- something we might keep in a list of "designed objects". Epistemically, when you are presented with an OCP, FSCI can't help you, as the list has no information about its provenance. The "metric" (and 'metric' here is really a misnomer, as this becomes more clear) does not obtain from the object itself, but only from observing its provenance.
I am happy that the reaction acceleration is not a problem for you. So, we need conscious recognition to find something that is an objective physical process. Why? It is very simple. All science is based on conscious recognition of objective physical processes. Who do you think recognized gravity, and found the laws that explained what he had recognized? An algorithmic process? Have you ever heard of Newton?
No, it's not. When I write software (and while I've done this with genetic algorithms in this field, I'm not talking about that in this case, but "hand-designed" heuristics we code deliberately) that analyzes and detects network intrusion patterns, I define the criteria, and a huge set of machines monitors large streams of traffic in doing its analysis. That's not to say that humans aren't involved in the process -- I and my engineers wrote the code that makes this automation happen -- but the surveying, classification, identification and testing all happen by machine. It's just very complicated rule sets being applied, and Bayesian and other statistical matchings applied to the results, to produce new knowledge, new empirically derived and tested information. And while it's true that humans engineer the process, science is ever working to find ways to suppress subjectivity and bias in its models, which makes instrumentation and automation a powerful asset -- machines and algorithms being the extension of human inquiries, doing lots of legwork, and doing it as 'dumb machines', meaning they operate as they are designed to operate and do not color their operations with their religious beliefs or worldviews.

Even so, we depend on recognizing processes, or more precisely observing processes that we connect with others that match in our experience. This is a limiting factor, though, for the very reason you struggle with applying FSCI -- it's not amenable to formalization. That's OK -- the chain has to end somewhere, else infinite regress -- but it's a "bug" not a "feature" to say "we depend on just recognizing". That's what we do because we must and have no other way. Where we can develop and apply formal, objective rules and algorithms as part of a discrete model, a model that doesn't depend on humans "recognizing" except at the top level where we observe the performance of the model itself, we can "scale", as we would say in the software world. The model is robust, then, and dependable, because it DOESN'T rely on the weakness of human recognition, and all the error and imprecision that goes with that.

"Scale" is important there, because the problem is not just error and imprecision. If you know Shannon or Kolmogorov information metrics, you will understand that these metrics aren't just hugely valuable because they aren't subject to human recognition errors. They are valuable because, without human recognition, the metrics are completely general, and the metric scales across domains elegantly. All you need is symbol streams and statistical ensembles to evaluate, and it all works. (d)FSCI isn't just lacking because it's error-prone and human-bound (it thwarts robust, discrete models), but because it cannot be generalized and scaled. One can't even measure a contrived "textbook example", let alone point at any desired object of interest for algorithmic inspection. More in a bit. This is interesting and substantive stuff to critique, thanks. eigenstate
@gpuccio,
Now, that’s obviously not fair. You are evading my comment. You are not discussing the lottery example at all, and shifting to the deck of cards. But my comments on the deck of cards were not those you quoted. So, please, answer my comment on the lottery example, or just admit that you were wrong in making that argument. That would be the only fair behaviour possible.
A lottery is no different from a deck of cards - it's the same example! I don't know how many numbers one must pick for "Powerball", the big lottery in my area, as I don't play the lottery (Blackjack is even too steep in terms of odds against, in my view!), but it's several numbers, like 6 separate numbers between 1 and 99. And all a shuffled deck is is a sequence of draws from 52 distinct values -- 104 instances if you use a double deck (drawn without duplicates, which I believe distinguishes it from Powerball). If you want to talk about a lottery with a phase space for the tickets of 10^166, rather than a deck of 104 cards, I'm fine with that. We are just dealing with segmented ensembles from a large phase space. Cards or tickets, the underlying maths and concepts don't change. In any case, I'm happy to stick with lottery tickets, if you prefer.
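In numbers, the comparison eigenstate is drawing looks like this (the 6-of-99 lottery format is his rough guess above, not the actual Powerball rules; the 104-card double deck gives the ~10^166 phase space being traded in this exchange):

```python
# A lottery ticket and a shuffled deck are both single samples from a phase
# space; only the size of the space differs. 6-of-99 is a stand-in format.
from math import comb, factorial, log10, log2

lottery_tickets = comb(99, 6)        # possible 6-of-99 tickets, ~1.1e9
deck_orderings  = factorial(104)     # orderings of a 104-card double deck

print(f"lottery phase space : ~10^{log10(lottery_tickets):.0f}")
print(f"104-card deck       : ~10^{log10(deck_orderings):.0f}"
      f"  (~{log2(deck_orderings):.0f} bits)")
```

In bits, the double-deck space works out to roughly 551 bits, which is why it sails past the 500-bit threshold discussed elsewhere in this thread, while the 6-of-99 lottery space is only about 30 bits.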
I would say this is your main argument. Probably the only argument. And it is completely wrong. First of all, you (and Elizabeth) are still not understanding my definition of dFSCI and my procedure. You still equivocate its nature, its purpose, and its power. dFSCI is an empirical concept. The reasoning goes this way, in order:
I think that's right, and said as much in that post -- that's the key point of objection to (d)FSCI. Thanks for recounting it again, let's see how this goes:
a) We look at things that we know are designed (directly, because we have evidence of the design process). And we look for some formal property that can help us infer design when we do not have that direct evidence.
Right. Got that, and saw (and see) that that is problematic in its own right. It's not a problem to grant what we observe directly to be designed by intelligence; it's that this is 99.9% of the substance of the question. And it's decided empirically, not algorithmically, and that's important, because it means you have observations of design, but have not captured or understood what, if anything, remains in the design(ed) that unambiguously signals intelligent design.
b) We define dFSCI as such a property.
I don't see this as a "property" at all. It's an observed instance of design. "Property" is precisely what you DO NOT HAVE. You have things that are known to be designed, but you know that because you actually observed the design. You HAVE NOT identified any property intrinsic to the design(ed) that obtains independent of simply seeing the design process at work. If you had an actual property, you could look for it, and mechanically determine, without observing any contributing designer or design process, whether it was designed, predictably. I think this is the core of your confusion, conflating a property of the designed thing with your knowledge (external to the actual designed thing) of the design process that produced it. These are not the same thing, and are profoundly different aspects of the phenomena in question. A designed thing may not bear any features of design in the designed thing, AS PART OF THE DESIGN, for example. Even when there is no such goal in the design, you have not got a property of the object defined, here, but have instead put it into a list of things you know are designed. You can not tell us, obectively, what makes it designed APART from your observations of design.
c) We verify that, on all sets of objects of which we know whether they were designed or not (human artifacts, or certainly non-designed objects), the evaluation of dFSCI gives no false positives, although it gives a lot of false negatives.
But so far, you don't have anything that is intrinsic to the object itself. You only have "was produced by a designer". That is NOT an equivalent pair of statements: a) This was produced by a designer, and b) this thing has the property of designedness. You appear to have confused a) and b) thoroughly. This is a HUGE problem for your argument if that's the case.
d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information).
That doesn't work at all. As you have it here (and I understand you may need to clarify or expand it more), it is utterly useless and impotent for that purpose. Once you don't know beforehand the provenance of the object or phenomenon you are looking at, (d)FSCI is of absolutely no value to you, or me, or anyone. It's probably worth less than nothing, because it apparently inclines the bearer to a false sense of knowledge, based on the confusion discussed above. Gotta get on an airplane, will continue anon, thanks. eigenstate
Onlookers: Kindly, look at the sketch of a D'Arsonval galvanometer movement in the original post. Think about how the components must be specifically selected and configured, if this device is to have a proportional response to the current in it, being a specialised very low power motor restrained by a very special spiral spring. Think about a bag of the parts for such a meter, and shake it up. How long do you think you would have to shake before you would get a working instrument? Now, convert the 10^57 atoms of our solar system into bags with parts like that. Shake them up for 10^17 s: will you have reason to believe any one of them will form a working galvanometer?

Now, consider the self-replicating facility required for a cell to work, in light of the requisites of a von Neumann kinematic self-replicator, which is what the cell implements. Notice, this is far more complex than a galvanometer. Do you see why, conveniently, origin of life is locked out of the discussions by the darwinists? But, until you have a metabolising, self-replicating entity with coded, stored prescriptive info to build a fresh unit, you do not have life-function. The evidence of actual cells is that this credibly requires north of 100 k bits of info, 100 times as many bits as will exhaust the capacity of the observed cosmos to search. Every bit beyond 1000 DOUBLES the number of possibilities. To get to novel body plans or novel organs like the avian lung or wing, we will routinely be dealing with millions to hundreds of millions of bits of further information, based on genome sizes. Think about how easy it is to be fatally ill or deformed if the parts are relatively slightly wrong. In short, the issue of isolated islands of function T is an empirical reality, not a begged question. And the notion that, since people look at function, recognise it and reason about it, consciousness is involved and so we can dismiss it, is frankly a priori ideology-serving rhetoric in the teeth of abundant experience. But, a man convinced against his will will be of the same opinion still.

What we need to do is to look on and ask whether these objectors compose their posts by hiring monkeys to peck away at random. Or fix cars by grabbing and shaking around parts at random, or the like. It is only when the sheer bankruptcy and absurdity of the positions one has to take to reject the implications of FSCI are seen and recognised that those who cling to such positions will see that they are only exposing themselves. As to the notion that FSCI is somehow already question-begging, let me put it this way: we do know of things where function can be reached by chance-based random walks; indeed, we have repeatedly pointed out the case of chance-based text of up to 24 ASCII characters. There is absolutely no hard roadblock to FSCI by chance. But, sometimes, when something is sufficiently isolated in the field of possibilities, such a search strategy will be so unlikely to succeed that it is operationally impossible. Darwinian-type changes can and do account for cases of adapting functional body plans, but the challenge is to explain the origin of those plans. That is what we just are not seeing, on evidence from real observations. The FSCI challenge and the needle-in-the-haystack or infinite monkeys analysis tell us why. But, if we are sufficiently determined not to accept that, we can always make up objections and dismissals. But, to make an empirical demonstration to the contrary is a different matter.
Remember, too, the only -- and routinely -- observed source of FSCI. (Let the objectors provide a genuine counter-example, if they can. The latest try was geoglyphs, and just before that, magical computer-simulation watches that did not have to face the real-world problems of getting correct parts made that work in three dimensions with real materials, with all the headaches implied. As in, a gear that works -- functions -- as a part of a clock (or a fishing reel -- just a few sand grains are enough to show why) is NOT a simple object.) KF kairosfocus
I concur again. I note that there seems to be a problem with accepting an empirically, massively confirmed fact: when we have a large enough list of Y/N questions to specify a system, each of these doubles the number of possibilities. So, when we get to about 500, we have about 3 * 10^150 possibilities, where the atomic resources of our solar system -- some 10^57 atoms, about 98% of them in the sun -- cannot have gone through more than about 10^102 states to date. By far and away, most of the sets of Y/N answers will not do the job in view that is relevant to our interest. So, if we are at most able to sample a very small fraction, the problem is to arrive at shores of function where whatever hill-climbing, improving algorithm can kick in. But if you refuse to accept that specific requirements of function are very restrictive, you lock out seeing this. So we see the real begged question, relative to a vast database of experience. I wonder, have these folks ever had to, say, wire together a motherboard, soldering up the connexions, and get the thing to work? Have they ever had to design, then build and get to work, a complex electronic or mechanical system? Did they ever see what a little salt and sand in the wrong place can do to a fishing reel? Did they ever pull one apart and have to put it back together again, or do the same with a bicycle, or a car or the like? Did they ever build a reasonably complex bit of furniture? Did they ever draw an accurately representative facial portrait? Or carve an accurate portrait as a statue? My distinct impression is that we are talking to inexperience of the reality of functionally specific complex organisation and what it takes to get there. KF kairosfocus
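(A quick arithmetic check of the figures in the two comments above, as a hedged Python sketch; the 10^102 state count is the commenters' figure, not ours.)

from math import log10

configs = 2 ** 500                     # 500 Y/N questions -> 2^500 possibilities
print(f"2^500 is about 10^{log10(configs):.1f}")   # ~10^150.5, i.e. about 3.3 * 10^150

states_quoted = 10 ** 102              # Planck-time state count quoted above for the solar system
ratio = configs / states_quoted
print(f"the space exceeds the quoted states by a factor of about {ratio:.1e}")
# about 3.3e+48 -- the "1 in 10^48" sample figure that appears later in the thread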
kf, you write:
m: In 2005, Dembski provided a fairly complex formula, that we can quote and simplify: χ = –log2[10^120 · φ_S(T) · P(T|H)]. χ is "chi" and φ is "phi"
Can you explain how you estimate P(T|H) for a given observed pattern? Elizabeth Liddle
First of all, you (and Elizabeth) are still not understanding my definition of dFSCI and my procedure. You still equivocate its nature, its purpose, and its power.
I don't think we "equivocate" gpuccio, unless that word means something else in Italian. But I certainly have not understood it if what you say below is what it means:
dFSCI is an empirical concept. The reasoning goes this way, in order: a) We look at things that we know are designed (directly, because we have evidence of the design process). And we look for some formal property that can help us infer design when we do not have that direct evidence. b) We define dFSCI as such a property. c) We verify that, on all sets of objects of which we know whether they were designed or not (human artifacts, or certainly non-designed objects), the evaluation of dFSCI gives no false positives, although it gives a lot of false negatives.
But how do we execute this "evaluation of dFSCI"?
d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information). The point is, it is not important how we define dFSCI: the important thing is that it works. There are conceptual reasons why we define dFSCI as we do, but in the end they would be of no value if dFSCI did not work. It does work, and we can verify that on all the objects for which we have a reasonable certainty that they were designed or not. So, again, it is an empirical procedure. And it works.
But you still haven't told us how to evaluate it! Or is there a link I have missed? If so, could you link to the actual formula? Or, if it is not an actual numerical value, then can you explain what it actually is? Elizabeth Liddle
GP: I concur. I add, part of this is implicit logical positivism, which does not know that it is self-referentially incoherent and bankrupt. Multiply by any number of vicious and unrecognised infinite regresses and self-referential loops. Reductio ad absurdum to the max, KF kairosfocus
eigenstate: You say: There is no such thing as a “purely random system”. “System” implies structure, constraint, rule, and process. But that’s not just being pedantic on casual speaking on your part, it’s the core problem here. The AA sequence is not thought to be emergent in a random way. You are not only being pedantic here, you are being illogical! It is one thing to say that "there is no such thing as a purely random system". It is quite another to state that "The AA sequence is not thought to be emergent in a random way". The second is not a consequence of the first, and yet you seem to connect them! Indeed, you even say that "it’s the core problem here". Wow! So, I will treat the two things separately, because separate they are.

Of course there are random systems. I have debated the thing in detail with Elizabeth. A random system is a physical system whose behaviour we can best describe by a probability distribution. It's very simple. The tossing of a die is a random system. You have no way to realistically describe each single result through necessity laws (although each single result is certainly deterministic), but still you can describe well enough the general behaviour of the system through a probability distribution. OK?

Then you say that the sequence of AAs in a functional protein "is not thought to be emergent in a random way". Well, thanks for the information about neo-darwinism, although I would say that it is not really correct: in neo-darwinism, the sequence is "thought to be emergent in a random way", but gradually and with the help of NS. That's why, in modelling and analyzing the proposed algorithm of neo-darwinism (as I have done in the second thread I linked, this one): https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/comment-page-2/#comment-413684 (posts 34 and following) I have separately analyzed the random sources of variation, and the necessity mechanisms of NS, in some detail, I would say. So, it's not that I am forgetting NS: I just need a modelling of the RV component first, and dFSCI is necessary to analyze that model. So, are you denying that in the neo-darwinian algorithm all new information is thought to emerge in a random way? It can be gradual, it can afterwards be fixed or expanded by NS, but the only sources of genomic variation in the algorithm are random, and can be described and modelled only through some probability distribution. If you don't agree with that, please explain clearly why. There’s a fundamental difference between one-time “tornado in a junkyard” sampling of a large symbol set from a huge phase space, and the progressive sampling of that same large symbol set as the result of a cumulative iteration that incorporates positive and negative feedback loops in its iteration. I am well aware of that. I will consider any "cumulative iteration that incorporates positive and negative feedback loops in its iteration" in my final model. Can you propose some such explicit model for the origin of protein domains? So the probability of the sequence is NOT a matter of 1 shot out of n where n is vast. Yes, it is, where only RV is acting, and where no model of necessity has been documented, shown, or simply proposed.
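(A minimal sketch of the "describe it by a probability distribution" point above, assuming a fair die; purely illustrative, not part of the comment.)

import random
from collections import Counter

# We do not model the die's deterministic mechanics; we describe its behaviour
# with a probability distribution (here assumed uniform over the six faces) and
# check that simulated relative frequencies approach 1/6.
rolls = Counter(random.randint(1, 6) for _ in range(60_000))
for face in range(1, 7):
    print(face, round(rolls[face] / 60_000, 3))   # each value hovers around 0.167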
If, in my card deck example, we keep after each shuffle, the highest two cards we find out of the 104 (per poker rules, say), and set them aside as “fixed” and continue to shuffle the remaining cards, and repeat, we very quickly arrive at very powerful and rare (versus the 104 card phase space) deck after just a few iterations. That’s brutally quick as an “iterative cycle”, but it should convey the point, and the problem with “tornado in a junkyard” type probability assignments. Now, this is really "changing the cards"! If I remember well, your deck of cards example was aimed at showing that the dFSCI metric was stupid, and could not evaluate the probability of a functional result arising in a random way. Now, why are you shifting to algorithms and iterative cycles? That was not the original point at all. Do you remember? Your original point was that your random sequence was "information, organization, complex, specific, and carried out a job", and therefore it was indistinguishable from a functional sequence, and therefore dFSCI, or similar concepts, was a fraud. That was your original concept. Iteration cycles, or algorithms, were in no way part of that. To that I have answered in detail. Again, either show where I am wrong in that specific answer, or just admit that your example of the deck of cards was wrong. I quote your original conclusion: "So, as far as I can tell, a random shuffling of two decks of cards qualifies as a “designed” configuration, per your own criteria. The only complaint I can anticipate here is that you don’t approve of the “jobs” assigned to this configuration. If that’s the case, then I will happily rest my case on this point, because as soon as you are reduced to arguing about the telic intent of the phenomenon as a pre-condition for your metric, your metric becomes useless, a meaningless distraction. You’ve already decided what your metric hoped to address in order for you to even establish the parameters for your metric." As you can see, iteration cycles and algorithms have nothing to do with what you said. I showed clearly: a) That the definition of a function is objective and measurable, although made by an observer, and can therefore be used to measure properties relative to that specific function; b) That there is no problem of "arguing about the telic intent of the phenomenon as a pre-condition for my metric": any explicitly defined function can give a specific measure of complexity, so there is no subjective restriction on what function can be defined; and a "post hoc" definition is perfectly legitimate, because it describes an objective property, verifiable in the lab; and the recognition of a function in no way assumes that the function is designed. Therefore, I believe that I have answered all your objections on this particular point. This is, again, where the question-begging obtains. If you are going to assert that it is only “functionally specified” if it’s the product of intelligent choices or a will toward some conscious goal, then (d)FSCI *is* a ruse as a metric, not a metric toward investigating design, but a means of attaching post-hoc numbers to a pre-determined design verdict. As I have clarified, I have never said such a thing. The definition of the function in itself in no way affirms design. It is only the empirical association of complex functions (dFSCI) with design that is the basis for the empirical use of dFSCI. Which just demands a formalism around “functionally specific”? That seems to be the key to what you are saying.
No special formalism is necessary. If we agree that a function is objectively there, we just measure its complexity. What formalism are you speaking of? You agreed that the enzymatic function I described was an objective fact, verifiable and measurable in the lab. I just need to measure the probability of that function emerging in a random way. What formalism do I need to do that? Please, just explain why it should be wrong to measure the informational complexity of a function that is objectively there. Can you point me to some symbolic calculus that will provide some objective measurement of a candidate phenomenon’s “functional specificity”? Why should I? I have defined the function for an enzyme, and you agree about that. You have proposed some definitions of functions for your random sequence. I have accepted them. But I have simply shown that their functional complexity is almost 0, because all random sequences can implement the functions you proposed. The concept is really simple (except for darwinists): a defined function divides the search space into two subsets: the sequences that implement that function as defined, and those that do not. That is inevitable, and very simple. The function of having a specific enzymatic activity divides the set of AA sequences of a certain length into two subsets: those that have that activity, and those that do not have it. The ratio of the numerosity of the first set to the numerosity of the search space gives the probability of finding the function by chance, and its negative log2 is the dFSCI, in bits, for that function. The same is true for your functional definitions. If you define the function as "being useful for encrypting data", then any random sequence will have the same utility. dFSCI is practically absent, because the functional space is almost equal to the search space (although I am not an encryption expert, I am assuming here that ordered sequences are less useful for that). It's as simple as that: there is no subjectivity here. Any defined function is valid, but each defined function will have a specific dFSCI. And defining a function in no way implies design. If you cannot, and I think you cannot, else you’d have provided that in lieu of the requirement of a conscious observer who “recognizes” functional specificity, then I think my case is made that you are simply begging the question of design in all of this, and (d)FSCI is irrelevant to the question, and only a means for discussing what you’ve already determined to be designed by other (intuitive) means. Your case is not made at all, as shown in detail in my arguments. And again, you are wrong: the initial definition of function does not imply design. There is no question-begging, and there is no circularity. This renders dFSCI completely impotent on the question of design, then! That requirement — that a “conscious observer must recognize and define a function in the digital sequence” — means you’re already past the point where dFSCI is possibly useful for investigation. Never mind that the requirement is a non-starter from a methodological standpoint – “recognize” and “defined” and “function” are not objectively defined here (consider what you’d have to do to define “function” in formal terms that could be algorithmically evaluated!), even if that were not a problem, it’s too late. Always the same errors, repeated again. dFSCI, per what you are saying here, cannot be anything more than a semi-technical framework for discussing already-determined design decisions. No. There is no "already-determined design decision" at all in the procedure. As I have shown.
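(A hedged sketch of the ratio-to-bits step just described, with toy numbers of our own invention rather than real protein data.)

from math import log2

# dFSCI in bits = -log2( functional subset size / search space size ), per the
# convention used in this thread (compare the 4.3-bit single-mutation example later on).
search_space = 20 ** 120     # all AA sequences of length 120 (a length used in the thread)
functional = 10 ** 30        # hypothetical count of sequences showing the defined activity

dFSCI_bits = -log2(functional / search_space)
print(round(dFSCI_bits, 1))  # about 419 bits for these assumed numbers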
And even then, you have a “Sophie’s Choice” so to speak in terms of how you define “function”. Either you make it general and consistent, in which case it doesn’t rule out natural, impersonal design processes (i.e. mechanisms materialist theories support), or you define ‘functional’ in a subjective and self-serving way, gerrymandering the term in such a way as to admit those patterns that you suppose (for other reasons) are intelligently designed, and to exclude those (for other reasons) which you suppose are not. Would you care to justify what you are saying? I have defined the function of some specific enzyme as being able to accelerate some specific biochemical reaction (of course we have to specify which enzyme, which reaction, and the minimal acceleration that must be observable in standard lab conditions). What could be more "objective" than that? Could you please explain why that would be "a subjective and self-serving way"? Or "gerrymandering the term in such a way as to admit those patterns that you suppose (for other reasons) are intelligently designed"? And where am I "excluding those (for other reasons) which I suppose are not"? It seems that biologists are doing those strange things all the time, because that's what they do when they compile the "function" section in their protein databases. Is all biology made by a group of gerrymandering IDists? Is that your idea? Moreover, I have excluded absolutely nothing. Do you want to define the function of that enzyme differently? Be my guest. If you give an objective and measurable function, we will measure dFSCI for that definition. Where is the problem? You have given functional definitions for your random sequence of cards, and I have measured dFSCI for those definitions, finding it absent. Where is the problem? What am I excluding? I think you are close to getting my point. A random sequence is highly functional, just as a random sequence. True. But all random sequences implement the same function. The functional space is huge. It’s as information rich as a sequence can be, by definition of “random” and “information”, which means, for any function which requires information density — encryption security, say — any random string of significant length is highly functional, optimally functional. Yes. In the same way. All of them are optimally functional. So, the dFSCI is almost zero: the ratio of all the random strings that you can use that way to all possible sequences is close to one. Why is it so difficult for you to understand that point? If my goal is to secure access to my account and my password is limited to 32 characters, a random string I generate for the full 32 characters is the most efficient design possible. Yes, and any random string of 32 characters will do. Function, but not complex function. Sometimes the design goal IS random or stochastic input. Not just for unguessability but for creativity. I feed randomized data sets into my genetic algorithms and neural networks because that is the best intelligent design for the system — that is what yields the optimal creativity and diversity in navigating a search landscape. Anything I would provide as “hand made coaching” is sub-optimal as an input for such a system; if I’m going to “hand craft” inputs, I’m better off matching that with hand-crafted processes that share some knowledge of the non-random aspects of that input. Perfectly true. I agree with those reflections about design. But what have they to do with our discussion?
When you say “That is not a functional specification. Or, if it is, is a very wide one.” I think that signals the core problem. It’s only a “wide” specification as a matter of special pleading. It’s not “wide” in an algorithmic, objective way. If you think it is, I’d be interested to see the algorithm that supports that conclusion. It's very simple. Your functional definition was "wide" only because any random sequence would satisfy it. That can certainly be verified algorithmically. So, being "wide" is in no way a crime: the simple consequence is that the value of dFSCI becomes very low. Which is just to say you are, in my view, smuggling external (and spurious) design criteria into your view of “function” here. This explains why you do not offer an algorithm for determining function — not measuring it but IDENTIFYING it. If you were to try to do so, to add some rigor to the concept, I believe you would have to confront the arbitrary measures you deploy and require for (d)FSCI. If I’m wrong, providing that algorithm would be a big breakthrough for ID, and science in general. As already said, I am not smuggling anything. Any definition, or recognition, has the same potential value. The only thing required is that the defined function is objectively observable and measurable. What am I "smuggling"? What is "spurious"? And what measure is "arbitrary"? The way to measure the defined function must be explicitly given. Then, the measurement is objectively made for that function and that method of measurement. The only number that should be conventionally accepted as appropriate for the system we are considering is the threshold of dFSCI that allows us to infer design. There is an objective reason for that. The threshold must take into account the real probabilistic resources of the system we are studying. That's why I have proposed a threshold of 150 bits for biological systems on our planet, while Dembski has proposed his UPB of 500 bits for the general design inference in the whole universe. The concept is the same. The systems considered are different. The thresholds proposed (which can be discussed at any moment, and changed at any moment, as is true of any empirical threshold for scientific reasoning) have been proposed because they are reasonably appropriate for the probabilistic resources of the systems considered. gpuccio
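(A hedged sketch of how the threshold step just described might look in code; the function name and the sample values are ours, purely for illustration.)

# Design is inferred only when the measured functional complexity exceeds the
# threshold chosen for the system's probabilistic resources (150 bits proposed
# above for biology on Earth, 500 bits for Dembski's universal bound).
def infer_design(dfsci_bits: float, threshold_bits: float = 150.0) -> bool:
    return dfsci_bits > threshold_bits

print(infer_design(4.3))             # False: e.g. a single amino-acid substitution
print(infer_design(419.0))           # True under the 150-bit biological threshold
print(infer_design(419.0, 500.0))    # False under the 500-bit universal bound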
Re: ES, insisting on using the other thread:
@kf, I HAVE substantiated the judgments of your comments as nonsense, handwaving, and question-begging. Several of the longish posts that took up your items 1-5 went into depth in support of the charge that you are question-begging, for example. I did not offer a curt dismissal like this: “More rubbish” That was the sum of your analysis in reply to one of my longish posts. So clearly you don’t have any problems with curt, substance-free and dismissive responses in the first place, and I’ve invested a good amount of time and effort in presenting points and a rationale that support my assessment. “More rubbish” isn’t very persuasive, and comes off as lazy, as a response, so I’ve been putting some work in there to support the accusations.
Evidently ES has not realised that at this stage I do not give 50c for what would be persuasive to him or his ilk. Nothing, as the problem is not the merits of fact and logic. As it stands, he has plainly refused to attend to a summary of the basic relevant info theory, from Sect A of the always linked, and appendix 1, not to mention the discussion in the IOSE and in this ID foundations series -- all of which predate his objections and which in aggregate are hundreds of pages. So it looks like the issue of non-responsiveness is really on the other foot, doesn't it.

For record, let it be known that information theory is a well established discipline and information is a well known, quite familiar item, measured in bits or the like. And, bits that do specific jobs that depend on fairly specific configurations that can be independently and simply described can be termed functionally specific bits, like those behind the text of this post. One of the problems here is that formation of concepts and definitions by key examples and unifying general summaries is not well understood or accepted by those addicted to operational definitions as a one-size-fits-all demand -- they don't know that logical positivism went belly up over 50 years ago. Of course, if we applied the regress of demanding an operational definition of an operational definition, and then onward of every key term that emerges, we would soon enough see that operational definitions are not the be-all and end-all of definition or understanding or reasoning. And one can bellow about question-begging till the cows come home; a description of something that is observational reality trumps a regress of operational definitions any day. As in, I am POINTING TO and DESCRIBING that which is observational, and the cases are very important. Definition by key cases and family resemblance, backed up by reasonable description and modelling. And, when one is able to use paradigm cases to show how we assign, say, S = 1, or I to a given value in the Chi_500 expression, that should be enough for a reasonable person. As in, has ES even looked at the way Durston's fits values -- and that means functional bits -- were developed empirically, and how that can then fit in with the Chi_500 expression, or how the case of the geoglyphs in recent days shows how we can use the same expression in a nodes-and-arcs context? The goal of definition, in any case, is clarity, not to exclude what one does not want to face.

Next, there seems to be a problem with the concept of a phase space cut down to a state or configuration space. I suppose this has been going on in physics and related fields for about 150 years, since Gibbs? Cutting to the chase scene, the idea is that when we have something like a string of 500 bits length, there are a great many patterns that are possible, from 000 . . . 0 to 111 . . . 1, i.e. about 3*10^150 of them; that's a case of W. And as has been pointed out and linked, the Planck-time quantum state resources of our solar system max out at about 10^102 states of its 10^57 or so atoms, so the solar system can sample about 1 in 10^48 of this zone; many of these states being dynamically connected. In short, we cannot exhaust the possibilities. And, as sampling theory tells us, a blind sample will only be likely to pick up the bulk of W, not any isolated and definable zone, T. Do such zones T exist?
Obviously, when we write text in coherent and correct English responding to the theme of this thread, we are in a zone that we have just defined independently of listing cases E1, E2, E3, etc. from T in W. But the very rules that allow us to write in English and in so doing respond to the context here immediately and sharply constrain the acceptable sets of 500 characters. In short, we have here a case in point in reality, so a zone T is a reality, and since T is isolated per the implied constraints, getting there by a chance-dominated blind process will be maximally unlikely on the gamut of our solar system. Exercises have actually been done to randomly generate text, and the result is that up to about 24 ASCII characters is feasible so far. That is a long way from the 72 or so that 500 bits covers. All of this has been pointed out to ES and his ilk, over and over; they are just not paying attention. When one is talking empirical realities, one is not begging questions. And, I note that consistently, when concrete examples have been used to pin down generic explanations, they have been ducked. (I have given simple explanatory summaries, and these were pounced on as question-begging. Sorry, I am summarising realities, and have actually given cases in point, with links. And if you want the underlying first-level analysis, I have pointed to that, at dozens of pages' length too. Indeed, ES was invited to respond to that but has ducked to date. Since that underlying analysis is fairly standard and is the backbone of a major industry or two, it should be clear that it is not on trial; he is.)

Next, we could give specific cases like a car engine or a D'Arsonval moving coil instrument -- a specialised electric motor commonly used in instrumentation. It is absolutely a commonplace result of engineering that to get something like that to work, you have to have the right parts in the right configs, which are quite, quite specific. You will never succeed in making such an instrument by passing the equivalent of a tornado through a junkyard. In short, chance combinations are maximally unlikely to find zones T in W. And sneering dismissals of Sir Fred Hoyle as being fallacious simply show up the want of thought that has gone into the sort of live-donkey-kicking-a-dead-lion involved. Now of course, I have a reason to have picked a motor, as the ATP synthase and the bacterial flagellum as well as kinesin are all nanotech motors in the living cell. They are based on proteins, which are of course amino acid strings, with highly specific sequences coming from deeply isolated fold domains in the space of AA chains. The AA chains in turn are coded in DNA and are algorithmically expressed using several -- dozens really -- of molecular machines in the cell. A cell that is a metabolising automaton that takes in energy and components, and transforms them into the required functions fulfilled by the machines in the cell. We happen to have highlighted cases of motors and storage elements. The ribosome is a nano-factory that makes the proteins. And, all of this has a von Neumann self-replicator as an additional facility that allows it to self-replicate. All of this is positively riddled with functionally specific, wiring-diagram organisation, that has to have clusters of the right parts in the right places to work. As has been shown over and over again, and as can be observed. I won't bother to go on about Scott Minnich and the empirical demonstration of the irreducible complexity of the flagellum.
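(A quick check of the character/bit arithmetic in the paragraph above, as a hedged Python sketch; the 24-character figure is the record quoted there, not something we computed.)

from math import log10

BITS_PER_ASCII_CHAR = 7   # 128-symbol ASCII alphabet
for chars in (24, 72):
    bits = chars * BITS_PER_ASCII_CHAR
    print(f"{chars} chars ~ {bits} bits ~ 10^{bits * log10(2):.0f} configurations")
# 24 chars ~ 168 bits ~ 10^51 configurations
# 72 chars ~ 504 bits ~ 10^152 configurations, far beyond the ~10^102 states cited earlier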
All I will say is that the machines involved and their protein components are all based on plainly functionally specific parts assembled in step-by-step, code-based ways, i.e. we are at GP's dFSCI. That is, there is a mountain of empirical evidence about the reality of FSCI in the cell, and in life forms built up from cells. One can brush it aside if one wants, but that does not make it go away. We are looking at cases E from zones T in vast spaces W. And, those spaces are well beyond the search capacity of blind happenstance and mechanical forces on the scope of our solar system or the observed cosmos. If ES or the like want to dispute that, let them actually, simply show us cases where similarly complex entities spontaneously form in the real world, by blind chance and mechanical necessity. Enough of verbal gymnastics. At the multicellular level, let us try out the formation of the avian lung, from the bellows lung, by observed stepwise advantageous changes. Likewise, let's see how the eye came about stepwise, addressing the way incremental changes happened by mutations etc., and highlighting how each step was advantageous in a real environmental niche and led to a population dominance, then succession, etc. SHOW it, don't give us just-so stories with a few samples. They obviously have not done so. So, they do not have an empirically warranted theory of body plan origin beyond the FSCI threshold on chance plus necessity, never mind what they can impose by pushing a priori materialism. But, we can show that design routinely gets us to FSCI. Indeed, we can easily show that a sample from a field that is small in scope will be maximally unlikely to pick up isolated zones. That's what the problem of searching for a needle in a haystack is all about, and that just happens to be the foundation stone of the statistical form of the second law of thermodynamics. I trust, onlookers, that I have given you enough to see why at this stage I am not taking the fulminations of an ES particularly seriously, absent EMPIRICAL demonstration. Just as, in thermodynamics, I demand that you SHOW me a perpetual motion machine before I will believe it. GEM of TKI kairosfocus
eigenstate: I disagree with you (and with Elizabeth). I am sure I will not convince you, but I have the duty to answer you anyway. First of all, however, you must be consistent with what you say, and admit, if you want, when you have said a wrong thing. So, I am afraid I have to start with one thing where you are obviously wrong, and you have not admitted it. The lottery example. I wrote: "Let’s go to your examples of the lottery and the deck of cards. The example of the lottery is simply stupid (please, don’t take offense). The reason is the following: in a lottery, a certain number of tickets is printed, let’s say 10000, and one of them is extracted. The “probability” of a ticket winning the lottery is 1: it is a necessity relationship. But a protein of, say, 120 AAs has a search space of 20^120 sequences. To think that all the “tickets” have been printed would be the same as saying that those 20^120 sequences have really been generated, and one of them is selected (wins the lottery). But, as that number is by far greater than the number of atoms in the universe (and of many other things), that means that we have a scenario where 10000 tickets are printed, each with a random number between 1 and 20^120, and one random number between 1 and 20^120 is extracted. How probable is it, then, that someone “wins the lottery”?" It seems clear, doesn't it? The lottery example is wrong and stupid. To this, exactly to this, you answer: No one I’ve ever read on this supposes that all the possible permutations have been generated, nor that they need be generated for the theory to hold. Note the phase space for the double deck of 104 cards – there are 10^166 possible sequences there, more combinations than your amino acid sequences. Now, that's obviously not fair. You are evading my comment. You are not discussing the lottery example at all, and shifting to the deck of cards. But my comments on the deck of cards were not the ones you quoted. So, please, answer my comment on the lottery example, or just admit that you were wrong in making that argument. That would be the only fair behaviour possible. Now, let's go on, in order.
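(A hedged back-of-the-envelope for the lottery analogy just restated; the 10000-ticket and 20^120 figures are gpuccio's, the code is ours.)

from math import log10

tickets = 10_000              # tickets printed, each with an independent random number
space = 20 ** 120             # possible numbers, i.e. the 120-AA sequence space
p_someone_wins = tickets / space     # close enough for such a sparse draw
print(f"P(someone wins) ~ 10^{log10(p_someone_wins):.0f}")   # ~10^-152: effectively zero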
It’s the “conscious observer to recognize it and define it” part that is the big problem, here. The reaction acceleration is not a problem — I have no issues with identifying such a reaction as an objectively observable physical process. But the metric fails to be an objective metric if it depends on “conscious recognition”. If you think about why “conscious recognition” is required here by you, it should be evident that such “non-algorithmic steps” are needed because it defies objective formalization. Or to put a more fine point on it, it enables us to put our own subjective spin on it, and not just as a qualitative assessment around the edges, but as a central predicate for the numbers you may apply. That’s why I say this is question-begging in its essence; unless one BEGINS with prior recognition (a “conscious observer” recognizing function), the metric doesn’t get off the ground. If you BEGIN with such conscious recognition, the game’s over, and FSCI, or whatever acronym you want to apply to this idea, won’t tell you anything you haven’t already previously concluded about the designedness (or not) of any given phenomenon.
I would say this is your main argument. Probably the only argument. And it is completely wrong. First of all, you (and Elizabeth) are still not understanding my definition of dFSCI and my procedure. You still equivocate its nature, its purpose, and its power. dFSCI is an empirical concept. The reasoning goes this way, in order: a) We look at things that we know are designed (directly, because we have evidence of the design process). And we look for some formal property that can help us infer design when we do not have that direct evidence. b) We define dFSCI as such a property. c) We verify that, on all sets of objects of which we know whether they were designed or not (human artifacts, or certainly non-designed objects), the evaluation of dFSCI gives no false positives, although it gives a lot of false negatives. d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information). The point is, it is not important how we define dFSCI: the important thing is that it works. There are conceptual reasons why we define dFSCI as we do, but in the end they would be of no value if dFSCI did not work. It does work, and we can verify that on all the objects for which we have a reasonable certainty that they were designed or not. So, again, it is an empirical procedure. And it works. You say: The reaction acceleration is not a problem — I have no issues with identifying such a reaction as an objectively observable physical process. But the metric fails to be an objective metric if it depends on “conscious recognition”. I am happy that the reaction acceleration is not a problem for you. So, we need conscious recognition to find something that is an objective physical process. Why? It is very simple. All science is based on conscious recognition of objective physical processes. Who do you think recognized gravity, and found the laws that explained what he had recognized? An algorithmic process? Have you ever heard of Newton? Who recognized relativity? Have you problems with the theory because a conscious being was the originator of it? Would you refuse to do experiments about gravity because the definition of gravity required a conscious observer to be given? This kind of reasoning is absolute nonsense, and it is really sad for me that intelligent persons like you and Elizabeth go on defending nonsense. However, sad or not sad, I go on. The role of the conscious observer is very simple: there is a protein that has an objective function, but that function can only be described by a conscious observer. Why? Because only conscious observers recognize and define function. An algorithm can perfectly well recognize some specific function, after a conscious observer has programmed, directly or indirectly, the properties of that particular function into the algorithm. But not before. You yourself say: "If you think about why “conscious recognition” is required here by you, it should be evident that such “non-algorithmic steps” are needed because it defies objective formalization." That's it. That's exactly my point. "Function", as you say, "defies objective formalization". It's perfectly true. Function is related to the conscious experience of purpose, and all conscious experiences in essence "defy objective formalization". That's exactly the point I have discussed many times with Elizabeth, and that she vehemently denies. And so? What shall we do? Do we build our science suppressing the concept of function, because it "defies objective formalization"?
Do we build our science suppressing the concept of "meaning", because it "defies objective formalization"? Do we build our science suppressing the concept of "objective formalization", because it "defies objective formalization"? How do you define "formalization" without the concepts of meaning, of cause and effect, and many others, that require a conscious being to be understood? Absolute nonsense! If the function is objectively definable, as you admit, and objectively measurable, as you admit, that's it. The fact that it has been recognized and defined by a conscious observer has no importance, because it works, and the property that depends on that function, its inherent complexity, is there for us to measure, and that measure is an empirical marker of design. Would you say that I cannot diagnose a leukemia by looking at a bone marrow smear and seeing that it is made of blasts, just because I am a conscious being, and I am not giving an algorithmic formalization of what a blast is? If my diagnoses are always right, that just means that I know how to recognize a blast. Or to put a more fine point on it, it enables us to put our own subjective spin on it, and not just as a qualitative assessment around the edges, but as a central predicate for the numbers you may apply. That's not true. Once we define a function, the definition is objective, and the measurement of dFSCI is objective. Of what "subjective spin" are you speaking? The choice of the function to define? But, as I have already said, we can define any function we want: the measurement of dFSCI will be for that function, and the methodological use of that measurement will be only in relation to that function, as I have shown in commenting on your deck of cards example. That’s why I say this is question-begging in its essence; unless one BEGINS with prior recognition (“conscious observer” recognizing function), the metric doesn’t get off the ground. Again nonsense. The metric is a metric applied to a function. Why should it "get off the ground" if we don't say in advance to which function we are applying it? I think you are really violating the essential fundamentals of reasoning here. If you BEGIN with such conscious recognition, the game’s over, and FSCI, or whatever acronym you want to apply to this idea, won’t tell you anything you haven’t already previously concluded about the designedness (or not) of any given phenomenon. Again, completely wrong. You are equating "recognizing a function" with "affirming that that function is designed". That's completely wrong, and if you don't understand that, you will never understand dFSCI or ID. Let's try this way. A random mutation in a protein (certainly a random event) can confer a new function. It is rare, it is usually true only in very special circumstances, but it is true. Let's take the classical example of S hemoglobin and malaria resistance. The idea is: although S hemoglobin causes a disease, it gives some protection from malaria. We can certainly define that as a function, recognizing it, even if we are conscious observers! So, we have a function that has been generated by a random system (random single-point mutations), and then partially selected through NS. OK? As you can see, I am recognizing a function, defining it, and still in no way am I assuming that it is designed. The point is: is that function complex? And the answer is: no, its complexity is just 4.3 bits. So, while I have recognized and defined a function, I have no reason to infer design from it.
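(A hedged sketch of where a figure like 4.3 bits plausibly comes from, on the reading that a single amino-acid substitution specifies one residue out of the 20 possible at that position; the interpretation is ours.)

from math import log2

# one required residue out of 20 at a single position
complexity_bits = -log2(1 / 20)
print(round(complexity_bits, 1))   # 4.3 bits -- far below any proposed design threshold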
Therefore, all your reasoning about "question-begging" is nonsense: it is wrong, it makes wrong interpretations of what I have very clearly stated, and it means nothing. Well, I will go on in the next post. gpuccio
And just to get things back on track: I too am interested in seeing a clear operational definition for either gpuccio's or kairosfocus's metric, i.e. one that does not rely for one of its terms on some subjective evaluation of designed-lookingness. There may well have been one posted, but I haven't seen it. All the ones I have seen have white spaces in them, as it were. We are all programmers here, I think (in some sense or other). So we know that if you are going to code something you need an actual parameter or variable to put into your functions, not a string :) I think we can all agree that "Looks designed" is a string. Elizabeth Liddle
Oops, messed up the tags, though. Original is here. Elizabeth Liddle
Eigenstate wrote:
@gpuccio, That’s the basic point. You are wrong here. Defining explicitly a function that can be objectively measured does generate a functional subset in the set of possible outcomes. As the function objectively exists, you cannot say that we have invented it “post hoc”. I will refer to the function of an enzyme that amazingly accelerates a biochemical reaction, that otherwise could never happen or would be extremely slow. That function is objective. We need a conscious observer to recognize it and define it (because the concept itself of function is a conscious concept). So, I am not saying that there is not a subjective aspect in the function. There is, always. What I am saying is that the function, once recognized and defined consciously, can be objectively observed and measured by any conscious observer, for instance in a lab. It’s the “conscious observer to recognize it and define it” part that is the big problem, here. The reaction acceleration is not a problem — I have no issues with identifying such a reaction as an objectively observable physical process. But the metric fails to be an objective metric if it depends on “conscious recognition”. If you think about why “conscious recognition” is required here by you, it should be evident that such “non-algorithmic steps” are needed because it defies objective formalization. Or to put a more fine point on it, it enables us to put our own subjective spin on it, and not just as a qualitative assessment around the edges, but as a central predicate for the numbers you may apply. That’s why I say this is question-begging in its essence; unless one BEGINS with prior recognition (a “conscious observer” recognizing function), the metric doesn’t get off the ground. If you BEGIN with such conscious recognition, the game’s over, and FSCI, or whatever acronym you want to apply to this idea, won’t tell you anything you haven’t already previously concluded about the designedness (or not) of any given phenomenon. So, I can compute the probability of such a sequence, with such a property, emerging in a purely random system. There is no such thing as a “purely random system”. “System” implies structure, constraint, rule, and process. But that’s not just being pedantic on casual speaking on your part, it’s the core problem here. The AA sequence is not thought to be emergent in a random way. There’s a fundamental difference between one-time “tornado in a junkyard” sampling of a large symbol set from a huge phase space, and the progressive sampling of that same large symbol set as the result of a cumulative iteration that incorporates positive and negative feedback loops in its iteration. So the probability of the sequence is NOT a matter of 1 shot out of n where n is vast. If, in my card deck example, we keep, after each shuffle, the highest two cards we find out of the 104 (per poker rules, say), and set them aside as “fixed” and continue to shuffle the remaining cards, and repeat, we very quickly arrive at a very powerful and rare (versus the 104-card phase space) deck after just a few iterations. That’s brutally quick as an “iterative cycle”, but it should convey the point, and the problem with “tornado in a junkyard” type probability assignments. Let’s go to your examples of the lottery and the deck of cards. The example of the lottery is simply stupid (please, don’t take offense). The reason is the following: in a lottery, a certain number of tickets is printed, let’s say 10000, and one of them is extracted.
The “probability” of a ticket winning the lottery is 1: it is a necessity relationship. But a protein of, say, 120 AAs has a search space of 20^120 sequences. To think that all the “tickets” have been printed would be the same as saying that those 20^120 sequences have really been generated, and one of them is selected (wins the lottery). But, as that number is by far greater than the number of atoms in the universe (and of many other things), that means that we have a scenario where 10000 tickets are printed, each with a random number between 1 and 20^120, and one random number between 1 and 20^120 is extracted. How probable is it, then, that someone “wins the lottery”? No one I’ve ever read on this supposes that all the possible permutations have been generated, nor that they need be generated for the theory to hold. Note the phase space for the double deck of 104 cards – there are 10^166 possible sequences there, more combinations than your amino acid sequences. The question is not a math question, wondering how likely 1 chance in 20^120 is, that’s evident in the expression of the question. The question is the “recipe” for coming to an AA sequence that achieves something we deem “functional”. If you have a cumulative filter at work – environmental conditions which narrow the practical combinations in favorable ways, stereochemical affinities that “unflatten” the phase space so that some permutations are orders of magnitude more likely to occur, including permutations that contribute to the functional configuration we are looking at – then the “1 in 20^120” concern just doesn’t apply. It’s not an actual dynamic in the physical environment if that’s the case. Or, cumulative iterative processes with feedback loops completely change probability calculations. That is why scientists laugh at the absurd suggestion that these processes are like expecting a tornado in a junkyard to produce a 747. Your “10000 in 20^120” depends on this same kind of simplistic view of the physical dynamic. So, I will simply state that your sequence has no dFSCI, because it is not functionally specified, and that therefore we cannot infer design for it. You say: Is it SPECIFIC? Yes, it is a single, discrete configuration out of a phase space of 104! = 10^166 available configurations. This configuration is as constricted as the choices get. Well, specific does not mean functionally specified. Each sequence is specific. If used as a pre-specification, each sequence is a good specification. But that has nothing to do with functional specification, which can be used “post hoc”. This is, again, where the question-begging obtains. If you are going to assert that it is only “functionally specified” if it’s the product of intelligent choices or a will toward some conscious goal, then (d)FSCI *is* a ruse as a metric, not a metric toward investigating design, but a means of attaching post-hoc numbers to a pre-determined design verdict. Which just demands a formalism around “functionally specific”? That seems to be the key to what you are saying. Can you point me to some symbolic calculus that will provide some objective measurement of a candidate phenomenon’s “functional specificity”?
If you cannot, and I think you cannot, else you’d have provided that in lieu of the requirement of a conscious observer who “recognizes” functional specificity, then I think my case is made that you are simply begging the question of design in all of this, and (d)FSCI is irrelevant to the question, and only a means for discussing what you’ve already determined to be designed by other (intuitive) means. That is not a functional specification. Or, if it is, is a very wide one. I will be more clear. According to my definition of dFSCI, the first step is that a conscious observer must recognize and define a function in the digital sequence, and specify a way to objectively measure it. Any function will do, because dFSCI will be measured for that function. IOWs, dFSCI is the complexity necessary to implement the function, not the complexity of the object. That is a very important point. This renders dFSCI completely impotent on the question of design, then! That requirement — that a “conscious observer must recognize and define a function in the digital sequence” — means you’re already past the point where dFSCI is possibly useful for investigation. Never mind that the requirement is a non-starter from a methodological standpoint – “recognize” and “defined” and “function” are not objectively defined here (consider what you’d have to do to define “function” in formal terms that could be algorithmically evaluated!), even if that were not a problem, it’s too late. dFSCI, per what you are saying here, cannot be anything more than a semi-technical framework for discussing already-determined design decisions. And even then, you have a “Sophie’s Choice” so to speak in terms of how you define “function”. Either you make it general and consistent, in which case it doesn’t rule out natural, impersonal design processes (i.e. mechanisms materialist theories support), or you define ‘functional’ in a subjective and self-serving way, gerrymandering the term in such a way as to admit those patterns that you suppose (for other reasons) are intelligently designed, and to exclude those (for other reasons) which you suppose are not. So, trying to interpret your thought, I could define the following functions for your sequence: a) Any sequence of that length that can be statistically analyzed b) (I don’t know, you say: I don’t understand your second point well) c) Any sequence of that length that can be good for encrypting data. While I wait for a clarification about b) (or for other possible definitions of function for your sequence), I will notice that both a) and c) have practically no dFSCI, because any sequence would satisfy a) (the functional space is the same as the search space, and the probability is 1), and all random sequences, for all I know of encrypting data, would satisfy c) at least as well as your sequence (the functional space is almost as big as the search space, the probability is almost one). I hope that is clear. I think you are close to getting my point. A random sequence is highly functional, just as a random sequence. It’s as information rich as a sequence can be, by definition of “random” and “information”, which means, for any function which requires information density — encryption security, say — any random string of significant length is highly functional, optimally functional. If my goal is to secure access to my account and my password is limited to 32 characters, a random string I generate for the full 32 characters is the most efficient design possible.
Sometimes the design goal IS random or stochastic input. Not just for unguessability but for creativity. I feed randomized data sets into my genetic algorithms and neural networks because that is the best intelligent design for the system — that is what yields the optimal creativity and diversity in navigating a search landscape. Anything I would provide as “hand made coaching” is sub-optimal as an input for such a system; if I’m going to “hand craft” inputs, I’m better off matching that with hand-crafted processes that share some knowledge of the non-random aspects of that input. When you say “That is not a functional specification. Or, if it is, is a very wide one.” I think that signals the core problem. It’s only a “wide” specification as a matter of special pleading. It’s not “wide” in an algorithmic, objective way. If you think it is, I’d be interested to see the algorithm that supports that conclusion. Which is just to say you are, in my view, smuggling external (and spurious) design criteria into your view of “function” here. This explains why you do not offer an algorithm for determining function — not measuring it but IDENTIFYING it. If you were to try to do so, to add some rigor to the concept, I believe you would have to confront the arbitrary measures you deploy and require for (d)FSCI. If I’m wrong, providing that algorithm would be a big breakthrough for ID, and science in general.
And I think he nails it, gpuccio. I await your response (and/or kf's) with interest :D Elizabeth Liddle
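For readers following the arithmetic in the exchange above, here is a minimal sketch of the ratio-based measure both sides are referring to: functional information as -log2 of the fraction of the search space that satisfies the defined function. The 32-character, 64-symbol password example and the sample counts are illustrative assumptions, not anyone's canonical definition.

```python
import math

def functional_bits(functional_count, search_space_count):
    """-log2 of the fraction of the search space that satisfies the
    defined function, assuming all sequences are equally likely."""
    return -math.log2(functional_count / search_space_count)

search_space = 64 ** 32   # all 32-character strings over a 64-symbol alphabet

# "Any sequence of that length" qualifies -> probability 1, zero bits.
print(functional_bits(search_space, search_space))        # 0.0

# "Almost any sequence works as an encryption key" -> nearly zero bits.
print(functional_bits(search_space // 2, search_space))   # 1.0

# A function met by exactly one sequence of that length -> 192 bits.
print(functional_bits(1, search_space))                   # 192.0
```

The sketch only makes explicit the point already in the exchange: how many bits a "function" carries depends entirely on how narrowly the function is defined.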
KF:
Pardon, but why do you find it so hard to see that the claimed “source” of novel bio-info, “natural selection,” is an obvious misnomer?
I don't think that selection is the "source" of the information, I think selection is what transfers information from the fitness function to the genome. As Dembski puts it in LIFE'S CONSERVATION LAW: Why Darwinian Evolution Cannot Create Biological Information:
His [Kenneth Miller's] claim that the information comes from the selective process is then correct but, in context, misleading. Miller suggests that [Schneider’s simulation] ev, and evolution in general, outputs more information than it inputs. In fact, selective processes input as much information from the start as they output at the end. In Schneider’s ev, for instance, the selective process inputted prior information in the form of a precisely specified error-counting function that served as a fitness measure.[reference 45] Thus, instead of producing information in the sense of generating it from scratch, evolutionary processes produce it in the much weaker sense of merely shuffling around preexisting information.
While this is only true for some measures of information (gripe: people in these discussions often seem to assume there's one true definition of information; they are utterly wrong), I think it's the most useful way to view what's going on here. (BTW, while I basically agree with Dembski in the section quoted above, I disagree with his assumption that "active information" in the fitness function must itself come from intelligent sources.) Note that, while you and Dembski are both working toward the same conclusion, your arguments for that conclusion clearly conflict with each other: KF: Selection cannot add information to the gene pool. Dembski: Selection can add information to the gene pool, but only the information it gets from the fitness function. KF:
And, pardon, that is why your incremental variation on a file example also fails. The selector is looking at variations that come from something else, and it is that something else that faces the config space challenge.
No, it clearly doesn't face the config space challenge. After an hour (assuming a 64-character alphabet), you'll have an average of 56.25 characters (=337.5 bits) of fully prespecified information. After two hours, 112.5 characters (=675 bits). After an 8-hour shift, 450 characters (=2,700 bits). After a full 40-hour work week, 2,250 characters (=13,500 bits). That's a huge config space, but a reasonable period of time (long enough to go out of your mind with boredom, but still...). KF:
Again, variation within islands of function does not explain arrival at such islands. Where, once we face complex, integrated function, the credible default is that we are looking at islands of function based on complex integration of matched parts.
I think this may hint at the root of our disagreement here: you seem to be thinking that selection only takes place within well-defined "islands of function". In the first place, this is a result of an assumption about the shape of the fitness function, not (as you have stated it) an intrinsic limitation of selection. If this assumption is wrong, and the fitness function isn't just flat outside the islands, then this certainly will affect the probability of reaching an island. Second, even if the fitness function is completely flat outside of these islands of function, selection may still affect the probabilities if the islands tend to clump together -- in archipelagos of function, if I may extend the metaphor. I would argue that the existence of gene families (where the family members have different functions) at least suggests that this is the case. Islands in archipelagos are much more likely to be found than isolated islands because: 1: The probability of stumbling across some island increases in proportion to the number of islands in the archipelago (and note that an archipelago may contain far more islands than we know about). 2: Once one of the islands is found, selection for it means that it tends to act as a starting point for mutational excursions, increasing the chance that more islands in the group will be found. Gauger and Axe's The Evolutionary Accessibility of New Enzyme Functions: A Case Study from the Biotin Pathway is the obvious reference here; I don't want to get distracted by discussing its merits and limitations (not my field of expertise), but just note that their estimate of the time required to evolve a new family member is far, far, far less than it would've been (using similar assumptions) to evolve a new enzyme from scratch. Again, the point is that you cannot legitimately dismiss the effect of selection. Worse, in order to properly take selection into account, you need to know a lot more than we actually do about the large-scale shape of the fitness function. KF:
You will notice, for instance, that you have implicitly assumed that tiny initial steps will provide relevant function. But in fact, you need to account for a metabolic, self-replicating entity that uses coded representations to carry out both processes, as the OP documents, and as the successor post further discusses here. Without that metabolic-self-replicating entity, you do not have minimal relevant function.
There are a number of different "problems" to be solved, with different constraints; I think it's best to be as clear as possible about what we're discussing at any given point. Here are some that come to mind (in roughly decreasing order of "difficulty"):
1: The origin of the first self-replicating organism.
2: The origin of completely new genes (i.e. finding the first island in an archipelago).
3: The origin of gene variants with new functions (i.e. new islands in a "discovered" archipelago).
4: Optimizing an existing function of a gene.
You seem to agree that RM+NS can achieve #4. I don't claim I can prove that RM+NS can achieve #2 and #3, but I don't see a solid argument that they can't (and if you want to make that argument, you must take NS into account). RM+NS clearly cannot achieve #1 because they only take place after replication gets going; but I don't see a solid argument that other natural (unintelligent) processes cannot achieve this. ________________ GD, kindly see the OP. If you want more on origin of life cf here, and more on origin of body plans, cf here. The islands of function issue is discussed here; kindly note that incremental changes within the sea of non-function cannot have a differential reproductive success, as 0 - 0 = 0, so we are back to blind chance variation in a vast config space. (That is, once we splash into non-function, there is no fitness gradient to guide variation; Darwinian-style evo may explain niches based on adaptation of a body plan, but it does not explain origin of body plans, starting with the first ones, moving on to the Cambrian revolution, and in a world where the fossil record is dominated by "sudden" appearance, stasis and disappearance.) KF, Jan 22. Gordon Davisson
KF, Excellent. I am famous at last :) Eugene S
ES: I see you caught me before I caught myself! I have now put up the post here at UD, thanks very much. KF PS: Pardon some oddities of format, Blogger and WordPress do not work together very well. I actually ended up doing something so rough and ready as inserting breaks to get some of the worst parts! kairosfocus
The above links to the Russian text. The English version is here. Cheers. Thanks Eugene S
My pleasure. Eugene S
ES: Excellent! Folks, here [oops, here] is Dr ES's pro-con summary on ID, in English and Russian. (He was kind enough to do an English version.) GEM of TKI kairosfocus
KF, I have done what you requested. Please see my blog (the most recent post). Thanks for expressing your interest in my thoughts and as I said in the comments on my blog, I think this OP of yours is really great. Eugene S
GD: Pardon, but why do you find it so hard to see that the claimed "source" of novel bio-info, "natural selection," is an obvious misnomer? The source of the info has to be the variation; the culler-out quite plainly subtracts from what has to already be there, the variation:
Chance Variation --> Varieties; differential success --> subtraction of "less fit" varieties; surviving varieties --> establishment of new sub populations in niches
And so, the underlying issue is that you have to search a space to first get to complex integrated function -- which points to islands of function -- and then you have to face the implications of large config spaces and limited resources, such that 500 - 1,000 bits swamps the capacity of our solar system or observed cosmos. And, pardon, that is why your incremental variation on a file example also fails. The selector is looking at variations that come from something else, and it is that something else that faces the config space challenge. Again, variation within islands of function does not explain arrival at such islands. Where, once we face complex, integrated function, the credible default is that we are looking at islands of function based on complex integration of matched parts. You will notice, for instance, that you have implicitly assumed that tiny initial steps will provide relevant function. But in fact, you need to account for a metabolic, self-replicating entity that uses coded representations to carry out both processes, as the OP documents, and as the successor post further discusses here. Without that metabolic-self-replicating entity, you do not have minimal relevant function. And the evidence of living organisms, the only observational anchor point we have, points to 100,000+ bits as the minimum for that. This takes us well beyond the observed cosmos search threshold. Not to mention that we empirically and analytically know just one competent cause of such symbolically coded complex algorithmic functionality: intelligent design. So, we are well warranted to infer that the origin of such entities is designed. And, when we see the further jump in specifically functional complexity to account for major body plans, 10 - 100+ millions of bits, we are looking at the same thing as the best empirically warranted explanation for the novel body plans that must unfold from the zygote or equivalent. Within body plans, yes, incremental variations (usually quite limited) do create varieties that fit niches; we see that with dogs and red deer or cichlids, or people, etc. But we must not fall into a category confusion between adaptations of a body plan and the origin of same, starting with the first one, OOL. Nor should we allow our attention to be misdirected, nor should we permit censorship under the ideological domination of a priori materialism to distort our ability to see what is going on. GEM of TKI kairosfocus
I'm not saying "don’t look at the wizard behind the curtain, please"; I'm saying "look closely at what the wizard is doing -- he's adding information by selecting among random variants". While selection is in some senses a purely eliminative process, that doesn't mean it's eliminative in all senses. Let me give you an even simpler scenario to (hopefully) make this point clear. Consider a computer program I call the world's worst word processor (WWWP):
* it starts with an empty file
* once a second, it adds a random character (letter, digit, etc) to the end of the file
* at any time, the user may press the Delete key, which removes the last letter from the file
* the user cannot do anything else
Since the user cannot add anything to the file, only delete, it seems obvious that the user cannot add information to the file. But in fact the user can add whatever information they want: the user decides what they want to write, and anytime the program adds a character that doesn't match what they want, they just press Delete. When the program happens to add the "right" character, they don't press Delete and the character remains in the file. Obviously, it takes a while to write anything, but not on the scale of the history of the universe. Say you have a 64-character alphabet; that means that on average it'll progress a little slower than one ("right") character per minute. A 100-character message would take almost 2 hours. But that 100-character message contains 600 bits of information, completely specified in advance by the user; that's well past all the usual probability bounds. The point here is that you cannot legitimately dismiss the effect of selection. As someone recently said, it is utterly revealing, to see how consistently hard it is for this to be seen and acknowledged. (BTW, I don't want to distract from this point, but I can't resist making the paradoxical nature of WWWP even more explicit by asking exactly when the amount of information in the file increases: When the random generator adds a character? When the generator happens to add a "right" character? When the user deletes a "wrong" character? When the user doesn't delete a "right" character (even though this is not an action but an inaction, and it doesn't change the file's contents in any way)? The answer will depend on exactly how you define and measure information, but for any sane definition at least one of these steps must add information, even though it may seem intuitively obvious that none of them do.) Gordon Davisson
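A minimal simulation of the WWWP scenario as described above — one random character per tick, a user who deletes everything that does not match the next character of a message chosen in advance — assuming a 64-symbol alphabet; the target phrase here is just an arbitrary stand-in for whatever the user decides to write.

```python
import random
import string

ALPHABET = (string.ascii_letters + string.digits + " .")[:64]  # 64 symbols -> 6 bits per kept character

def wwwp(target, rng=random):
    """World's worst word processor: a random character is appended each tick;
    the user deletes it unless it matches the next character of the target."""
    ticks = 0
    written = ""
    while written != target:
        ticks += 1
        ch = rng.choice(ALPHABET)
        if ch == target[len(written)]:   # a "right" character: the user lets it stand
            written += ch
        # otherwise the user presses Delete, so nothing accumulates
    return ticks

random.seed(1)
ticks = wwwp("TO BE OR NOT TO BE")
print(ticks, "ticks ~", round(ticks / 60), "minutes at one character per second")
# On average about 64 ticks per kept character, i.e. a little slower than one per minute.
```

Each kept character is worth log2(64) = 6 bits of prespecified information, which is where the 600 bits for a 100-character message in the comment above comes from.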
GD: Let us begin with a clip from Wiki on GA's, per the principle of testimony against known interest:
In a genetic algorithm, a population of strings (called chromosomes or the genotype of the genome), which encode candidate solutions (called individuals, creatures, or phenotypes) to an optimization problem, evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. [--> the strings map to degrees of performance within a zone of function] The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated [--> per algorithmic procedures] , multiple individuals are stochastically selected from the current population (based on their fitness), and modified (recombined and possibly randomly mutated) to form a new population. The new population is then used in the next iteration of the algorithm. Commonly, the algorithm terminates when either a maximum number of generations has been produced, or a satisfactory fitness level has been reached for the population.
GA's, in short, work based on strictly limited variations WITHIN an island of function -- how else will they be able to see that the "genome" string of a particular case corresponds to a certain degree of function in a zone of functionality? Certainly, the GA and fitness function and string plus the various tuned parameters and subroutines to manipulate it do not write themselves somehow out of lucky noise then proceed to effect a wonderful uphill process to peaks of functionality. As in, don't look at the wizard behind the curtain, please. Sorry. That is, GA's are based on intelligently designed software that acts in a zone of function to find PEAKS of function, or on the dual to that, minima of cost, where the cost is some undesirable aspect of performance. In short, GA's begin within target zones in a much wider config space for bit strings of length equal to that of the overall program, so they are able to make incremental uphill progress based on built in intelligently designed procedures and an equally intelligently designed fitness function, so called -- a map of how to move uphill. In that environment, one may indeed examine a narrow ring of randomly sampled neighbouring points, and generally speaking, trust the uphill trend, culling out what points downhill. Notice, again, the selection process ELIMINATES variation per algorithmic constraint, it does not add it; that is, again, selection is NOT the source of novel information or variety. It is utterly revealing, to see how consistently hard it is for this to be seen and acknowledged. GEM of TKI PS: More details here. kairosfocus
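For concreteness, here is a minimal weasel-style GA along the lines of the Wikipedia description quoted above: random variation plus selection by a programmer-supplied fitness function (here, simply counting characters that match a fixed target). It is a toy illustration of the mechanics the commenters are arguing about, not a model of biology; the population size, mutation rate and selection fraction are arbitrary assumptions.

```python
import random

TARGET   = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(s):
    """Programmer-supplied fitness measure: number of characters matching the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s, rate=0.05):
    """Copy a string with a small per-character chance of random substitution."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

def run_ga(pop_size=100, generations=1000):
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)      # selection: rank by fitness...
        if pop[0] == TARGET:
            return gen, pop[0]
        parents = pop[: pop_size // 5]           # ...and keep only the top 20%
        pop = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return generations, max(pop, key=fitness)

random.seed(0)
print(run_ga())   # typically converges on the target within a few hundred generations
```

The fitness function here is exactly the element in dispute: it is the programmer-supplied "map of how to move uphill" that tells the selection step which variants to keep.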
If we look at NS, this boils down to differential reproductive success in environments leading to elimination of the relatively unfit. That is, NS is a culling-out process, a subtract-er of information, not the claimed source of information. That leaves only CV, i.e. blind chance, manifested in various ways.
If this were true, genetic algorithms could not possibly work. The standard ID explanation for the success of GAs is that information must be "smuggled in" via the fitness function. But the only way that information could move from the fitness function to the genome is by selection. If your reasoning were right, this could not happen; it does happen, therefore there's something wrong with your reasoning. (BTW, later in the essay, you said that GAs "start within a target Zone T, by design, and proceed to adapt incrementally based on built in designed algorithms." First, they generally don't start in the target zone (the entire point of a GA is to solve some problem; if you have to start with an existing solution, the GA is rather pointless). Second, at least in a pure GA, the "built in designed algorithms" are designed to simulate random variation and [artificial] selection.) Gordon Davisson
The above word chemistry exercise shows the limited range of possibilities one can achieve with only simple successive modifications. And, as Kairos points out, this is with very short strings. There's another way to look at this. It involves foresight. An organism that appears accidentally has no particular reason to continue to reproduce for an indefinite number of generations. Similarly, a designer with limited foresight might choose to create a life form that could replicate for multiple generations, providing utility for the lifespan of the designer. The vast separation between proteins ensures that successive slight variations will not succeed in climbing the so-called Mt. Improbable. A designer who desires the design to endure for ages could choose these vast separations to enforce stasis. To me, this latter pattern fits the life we see better than the model of accidental emergence. The evidence is in the design for endurance. dgw
Kairos, Thanks for your good thoughts. Two things about the Dawkins approach to "weasel" differ from the random typing of letters and searching to see what matches Shakespeare. Dawkins does an incremental search for closeness to his end excerpt. He knows what the specific goal is at the end. On the other hand, random typing of letters is followed by a final search for a phrase from Shakespeare. Also, intermediate results in Dawkins' search are not necessarily valid words. They would be rejected by error correction. Your point about how short the strings are in the word chemistry example I pose is a good one. The one-letter distance between word changes significantly constrains the words that can be formed. It's also interesting to observe that complex words do not morph into other complex words without going through simple intermediates. One can construct ever more complex sets of rules to build new word sets (and permit greater Hamming distances). However, at some point, the complexity of the rules will exceed the complexity of the words. One can attempt the same exercise with sentences or paragraphs. It turns out it is fairly straightforward to negate a sentence by word insertion, but changing its meaning becomes very difficult with only simple moves. Paragraphs are particularly problematic because of the built-in redundancy. It's possible to insert a sentence with general background, but changing the meaning of the paragraph incrementally requires careful surgery, and as one might expect, deletions are easier than insertions. Adding information in context is harder work than removing content while preserving a grammatically correct sentence or phrase. dgw
F/N: I have added, in the OP, a diagram (courtesy Wikipedia, public domain) of a D'Arsonval galvanometer movement, and a caption. I have also given additional links on the tree of life and OOL. kairosfocus
PPS: A Jamaican peasant proverb is a suitable appendix: fire de pon mus mus tail, but him think seh a cool breeze deh deh. (Translated: the complacent mouse has a fire burning at its tail, but in its folly and delusion, it imagines that it is a cool breeze blowing on it.) kairosfocus
PS: Solomon: The laughter of fools is as the crackling of thorns under a pot. (And, think about why thorn branches crackle under the pot, and what it means for their future . . . ) kairosfocus
DGW, First, of course, Dawkins' pre-loaded target phrase in Weasel came from Shakespeare! In TBW, as previously linked, this is what Dawkins said:
I don't know who it was first pointed out that, given enough time, a monkey bashing away at random on a typewriter could produce all the works of Shakespeare. The operative phrase is, of course, given enough time. [NB: cf. Wikipedia on the Infinite Monkeys theorem here, to see how unfortunately misleading this example is.] Let us limit the task facing our monkey somewhat. Suppose that he has to produce, not the complete works of Shakespeare but just the short sentence 'Methinks it is like a weasel', and we shall make it relatively easy by giving him a typewriter with a restricted keyboard, one with just the 26 (capital) letters, and a space bar. How long will he take to write this one little sentence? . . . . It . . . begins by choosing a random sequence of 28 letters ... it duplicates it repeatedly, but with a certain chance of random error – 'mutation' – in the copying. The computer examines the mutant nonsense phrases, the 'progeny' of the original phrase, and chooses the one which, however slightly, most resembles the target phrase, METHINKS IT IS LIKE A WEASEL . . . . What matters is the difference between the time taken by cumulative selection, and the time which the same computer, working flat out at the same rate, would take to reach the target phrase if it were forced to use the other procedure of single-step selection: about a million million million million million years. This is more than a million million million times as long as the universe has so far existed . . . . Although the monkey/Shakespeare model is useful for explaining the distinction between single-step selection and cumulative selection, it is misleading in important ways. One of these is that, in each generation of selective 'breeding', the mutant 'progeny' phrases were judged according to the criterion of resemblance to a distant ideal target, the phrase METHINKS IT IS LIKE A WEASEL. Life isn't like that. Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. [notice the underlying attitudes and dismissiveness] In real life, the criterion for selection is always short-term, either simple survival or, more generally, reproductive success. [[TBW, Ch 3, as cited by Wikipedia, various emphases, highlights and parentheses added.]
In short, Weasel is much as I summarised earlier, according to Dawkins, its author. Similarly, your chaining rules constitute intelligent constraint, implicitly moving towards a target. Where in fact the D/RNA chain uses a generic coupler along the chain, so that there is no serious sequence constraint -- that is how it can store information. Also, in the tRNA, the AA-carrier coupler is a CCA sequence that ties to the COOH end of the AA, another generic coupler. Indeed it is the loading enzyme that specifies what AA gets tied to what tRNA with what anticodon sequence, and this has been manipulated experimentally in recent days, to create novel protein chains under intelligent control. Similarly, the protein chain is based on a similar generic coupler, and so sequences of AA's are informationally constrained at the assembly point in the Ribosome, using codon-anticodon key-lock fitting and advance to the next triplet; they are not pre-programmed based on mechanical forces. It is after the AA sequence has been formed through step by step instructions that it is folded (often with help of chaperones) and may agglomerate and be subject to activation. It is that folded, agglomerated, activated form that is used in the cell, based on key-lock fitting of parts. Next, observe how short the strings you are discussing are. That defines a scope of config space that is such that the Hamming-type distance between functional sequences in the config space is low. The problem with life systems is that the components are exceedingly complex -- 300 AA is a typical "average" protein length -- and so we are looking at deeply isolated islands of function in the realistically scaled spaces. In addition, starting with the pre-life situation, the issue of L/R hand mixes -- thermodynamically equivalent, geometric mirror images [where geometry is important], handedness of life molecules [L proteins, R for nucleotides], and the existence of many more active cross-interfering species, in a context where the protein chains of life are energetically unfavourable [notice the ubiquity of the energy battery molecule, ATP, and the complex specificity of the ATP Synthetase "factory" molecule] -- all point to insuperable challenges to get to cell based life. At least, without pretty knowledgeable and skilled intelligent direction. Within such life, the focus of the OP has been on the challenge of finding functional clusters in the space of possibilities. And, for that, GP has given us a fairly sobering assessment of what we can realistically expect to see in life forms. And that is before we look at the issue of the evident use of digital [yes, discrete state MEANS digital] information to control the chemistry, as can be seen for protein synthesis. GEM of TKI kairosfocus
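A quick back-of-envelope check on the scale being described, assuming the standard 20-letter amino-acid alphabet and the 300-AA "average" protein length mentioned above:

```python
import math

AA_ALPHABET = 20     # standard amino acids
LENGTH      = 300    # "average" protein length cited above

bits = LENGTH * math.log2(AA_ALPHABET)
print(f"sequence space ~ 2^{bits:.0f} ~ 10^{bits * math.log10(2):.0f} configurations")
# sequence space ~ 2^1297 ~ 10^390 configurations,
# far beyond both the 500-bit and 1,000-bit thresholds used in this thread.
```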
Continuing this bit of whimsy. Indigo, violet, and purple appear to be too complex to be constructed using the simple rules of adhesion, substitution, and insertion we have devised for our "word chemistry". Do we need a more complicated set of rules? Are there other colors that can be reached? pink = red to fed to fen to fan to pan to pin to pink (bypassing ran) black = red to fed to fen to fan to ran. ran+k = rank, rank to rink to link. b+link = blink. blink to blank to black. brown = red to rod to row. b+row = brow, brow+n = brown. tan = red to fed to fen to fan to tan gray = red to bed to bad to bay to ray. g+ray = gray white = red to rid to bid to bin to win. win+e = wine, wine to whine by insertion, whine to white by substitution. dgw
In the above comment, the pathway from "red" to "orange" passed an anchor word "ran". The path from "red" to "ran" followed the sequence--red to fed to fen to fan to ran. Another sequence exists: red to bed to bad to ban to ran. Continuing from above, is it possible to change colors from yellow to green? Again it turns out to be easier to start with red: red becomes reed by insertion. g+reed = greed. Greed becomes green by substitution. Next is blue: Blue's path goes from red through ran. Then, ran becomes pan by substitution, s+pan = span, span becomes spam becomes slam becomes slum becomes glum becomes glue becomes blue, all by substitution. dgw
This is all purely whimsical, but the analogy between language and genetic code provides for some interesting thought experiments. Can "word drift" enable an organism to change color? Let's suppose an organism finds the word "red" in its genetic code and therefore produces a "red" body color. (This is of course fanciful.) If there were two sites with the word "red", then one could be used to encode the body color, and the other might be free to "drift". However, we constrain drifting to be within the bounds of the dictionary so that error detecting codes will not excise the changed text. (One could also look at how error correcting codes could change the errored word into a correct word, but a different one.) If "red" can drift through a sequence of valid words until it reaches "orange", then once again it could be used to encode body color. So, is there a path of valid words between red and orange that achieves the above objective? Try this: red becomes fed becomes fen becomes fan becomes ran by substitution. ran+g = rang, rang+e = range, o+range = orange. This path is reversible. Can orange become yellow in the same way? Here's a path. Is there a shorter one? It requires a new rule: Rule 3: Insertion. A letter can be inserted into a word to make a new word. bred + a inserted between e and d = bread. It turns out it is easier to go from red to yellow than from orange to yellow. The path from orange to yellow leads through "ran". Red becomes fed becomes fen becomes fan becomes ran becomes rat becomes rot becomes lot by substitution; a+lot = alot. By insertion, alot + l = allot. Allot becomes allow by substitution, f+allow = fallow, fallow becomes fellow becomes yellow, by substitution. It's interesting that in conceiving of a path, it's easier to start from the destination than from the source to find a reasonable pathway. dgw
As an aside, note that Shakespeare did not write his plays by selecting one letter at a time. He selected words from the common vocabulary (with a few neologisms) and assembled his text consistent with the rules of the grammar and meter. Of course Shakespeare knew when to break the rules for maximum effect, a computer code would follow the rules exactly. dgw
The above "word chemistry" concept may need some refinement. Can a set of rules be devised to generate a sufficient set of valid English words to use in generating Shakespeare. Let's use the now infamous "Methinks it is like a weasel." Rule 1. Adhesion. Individual letters stick together to form words. "i" + "s" = "is." If "is" is floating in the soup, then "h" could attach to form "his", "t" could attach to form "this", etc. Words can adhere to form larger compound words: heart + felt = heartfelt Rule 2. Substitution. Individual letters can replace letters in a word. From the above, "his", can become "has", or "hip" if an "a" or a "p" is encountered. OK. Let's try (out of order). "a" Trivial Check i + t = "it" Check i + s = "is" Check Path to "like": i+s = is, h+is = his, substitute "d" for "s" to become hid, hid+e = hide, substitute "k" for d to form "hike", substitute "l" for "h" to obtain "like". How probable is this path, I wonder? (Note: Word-frequency tables using this method of generating words won't match word-frequency tables of Shakespeare.) Check. Path to "weasel": a+t = at, b+at=bat, bat+s = bats, bats becomes bass and then base by substitution, base becomes ease by substitution, ease+l = easel, w+easel = weasel. Check. (Paths are not unique. What is probability of each step? Is it easier to form a word from the bottom up or by changing existing words?) Path to "methinks": Me+thinks = Methinks. m+e = me, i+s=is, h+is = his, t+his = this. This becomes thin by substitution, thin+k = think, think+s = thinks. Check. I wonder if there any "irreducibly complex" words that cannot be formed with a simple set of rules? dgw
Thanks kairos. My recollection of Dawkins's weasel is a bit different. In the case of Shakespeare, the "monkeys" type sequences of letters, and there is a "Shakespeare" filter at the end. It's that which survives. Dawkins, if I remember correctly, searched for random letters and when the correct letter was found in the correct position, it was retained. His was a search for a weasel, not a segment of Elizabethan drama. An interesting aside to this discussion is whether or not there is a better way to build up sequences of Shakespeare (or other works of great literature). One could imagine a soup of random letters. When "i" and "s" approach each other, they stick together to form an English word. Words precipitate to the bottom of the soup and can be filtered out. Then, Shakespeare might be constructed from random selections of English words rather than random collections of letters. If the rules of this "chemistry" are natural language rules (corrected for the time period), then perhaps this process might be more successful at generating Shakespeare. This thought experiment is ignoring the bottom-up vs. top-down view of the world. Monkeys or random number generators might produce Shakespeare, but they would have no way of recognizing it. dgw
Joe: Thanks for sharing your thoughts. I do note, I am precisely not doing probability calcs, but identifying when sample sizes are so small relative to spaces that we have no right to expect to observe unusual, atypical outcomes. This, BTW, is exactly how the very common practice of hypothesis testing works: by seeing whether one is in a far skirt vs. the bulk of a distribution. GEM of TKI kairosfocus
I am with "Programming Of Life" author Dr Johnson on this one- you cannot climb Mt Improbable until you can show it exists. He said that you cannot calculate the probabilty of something you cannot show is even feasible. "They" will always laugh at posts like this because "they" say "here we are so the probabilty is moot", obviously not understanding we are trying to determine HOW we came to be here, or "you cannot calculate the probability because you do not know the formula, ie you are doing it wrong", yet they don't know the formula and and "they" don't have any idea what is right. The point being is probability calculations actually give "them" the benefit of the doubt and "they" cannot even understand that. Joe
DGW: Interesting comment. Actually, ASCII codes are plainly in use, as we see upper and lower case letters, numerals, etc. So, at up to about 25 letters, or 128^25 ~ 4.8 * 10^52 possibilities, we have a successfully searchable config space. This is in the ballpark of Borel's earlier estimates. The threshold in view starts -- for our solar system -- at 3 * 10^150, about 72 ASCII characters. For the cosmos we observe, 1,000 bits weighs in at about 1.07 * 10^301, or about 143 ASCII characters. The monkeys are about 1 in 10^100 of the sort of spaces that need to be searched. And REAL monkeys just made several pages of mostly S's. There is a "monkeys are writing Shakespeare" exercise out there, but on closer inspection the reason we are getting Shakespeare out is because someone first put Shakespeare in. If you doubt me, observe:
Instead of having real monkeys typing on keyboards, I have virtual, computerized monkeys that output random gibberish. This is supposed to mimic a monkey randomly mashing the keys on a keyboard. The computer program I wrote compares that monkey’s gibberish to every work of Shakespeare to see if it actually matches a small portion of what Shakespeare wrote. If it does match, the portion of gibberish that matched Shakespeare is marked with green in the images below to show it was found by a monkey. The table below shows the exact number of characters and percentage the monkeys have found in Shakespeare. The parts of Shakespeare that have not been found are colored white. This process is repeated over and over until the monkeys have created every work of Shakespeare through random gibberish.
For that, the minimum keyboard needed to get the works is 27 keys (26 letters plus a space) plus the numerals -- counting upper and lower case together when a key is pressed. Bottom line: SISO, Shakespeare in, Shakespeare out. Just as Mr Dawkins did with the notorious Weasel. GEM of TKI kairosfocus
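The arithmetic behind the figures in the comment above, assuming 7 bits per ASCII character:

```python
import math

BITS_PER_ASCII = 7

for bits in (500, 1000):
    print(f"{bits} bits -> {2 ** bits:.2e} configs,"
          f" about {math.ceil(bits / BITS_PER_ASCII)} ASCII characters")
# 500 bits -> 3.27e+150 configs, about 72 ASCII characters
# 1000 bits -> 1.07e+301 configs, about 143 ASCII characters

# The ~25-character scale matched so far, over a 128-symbol set:
print(f"{128 ** 25:.2e}")   # 4.79e+52
```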
So, how well are the monkeys doing? Wikipedia reports on the Shakespeare project. Simulated monkeys have been able to generate 19-24 characters of text that match Shakespeare's writing. They report that in one case it required: "2,737,850 million billion billion billion monkey-years" to reach 24 matching characters. If each letter of the alphabet is encoded as a 5 bit binary word, then the 500 bit limit would represent 100 alphabetical characters. dgw
GP: Thanks for your own thoughts. Some sobering numbers indeed, just for manipulating one protein, post OOL. (Though, I am more and more of the inclination that we need to look to the regulatory networks that guide unfolding of the body plan across the lifespan, to control the behaviour of genes, which make bricks. The problem is getting worse, as small changes to prescriptive, regulatory info can easily be devastating in impact; there have got to be a lot of built-in redundancy and safeguards, much of which we don't begin to have a clue about yet. But, just the protein codes are enough to make the key point.) And even estimates on the resources of our cosmos or solar system are leaving off the issue that most of the available atoms are H and He. C is the constraint, at ~ 0.46% by mass of our galaxy, 0.3% of our solar system, and of course only terrestrial planets within the circumstellar habitable zone of suitable stars (roughly, the neighbourhood of class G) count. A life-friendly galaxy is not particularly dominant, and only a band of the right kind of galaxy will have a galactic habitable zone. The cosmic and even solar system estimates above are generous, noting that it takes 10^30 PTQS's for ionic chemical reactions, much less organic ones relevant to a lot of life. But the 500 - 1,000 bit threshold is well below the complexity known for cell based life: 100,000 to billions of bits. The only empirically and analytically warranted causal factor adequate to explain that much info is design. That is the message that 60 years of discoveries on the information systems in the living cell are telling us, but for many, I am afraid, that is very hard to hear indeed. GEM of TKI kairosfocus
Hi KF, thank you for your usual precious contribution. I would like to add here the reason why I often use a personal threshold for probabilistic resources, when I discuss realistic biological scenarios. Universal probability bounds, be they 500 or 1000 bits, take into account the huge resources of the whole universe, or at least of our planetary system, considering any quantum state as a possible computational value. But, when we speak of the neo-darwinian model of RV and NS, we have to do with a more restricted scenario: our planet, more limited times, biological beings, reproduction rates, mutation rates. So, some time ago I tried to propose a more realistic probabilistic resources threshold. It does not pretend to be an accurate computation, and it can be refined at any moment, but here is how it goes, more or less:
a) Let's imagine our planet with a full population of prokaryotes at the beginning of life (let's take OOL for granted, in this scenario). How many? I found a recent estimate for the total number of prokaryotes on earth today: 5*10^30. So, let's take it as good.
b) Now, let's consider the age of our planet compatible with the existence of life. I will take it as 3.8 * 10^9 years.
c) We will assume a mutation rate of 10^-8 mutations per base pair per generation in prokaryotes.
d) We will assume a mean number of generations per year of 26208, considering a mean duplication time in prokaryotes of 20 minutes.
e) We will reason for a mean protein gene of 150 AAs, that is 450 base pairs.
Considering all those factors, a mean protein gene of 450 nucleotides would undergo about 2 * 10^39 mutational events in the whole time of earth's existence, in the whole bacterial population of our planet. It's about 130 bits of probabilistic resources. Therefore, I consider 150 bits to be a very realistic probabilistic threshold for a mean protein gene. IOWs, a protein domain of about that length, and with a dFSCI of 150 bits or more, can safely be considered beyond the reach of a purely random biological system. That corresponds approximately to a sequence of about 35 strictly conserved AAs. I believe I am still being very generous. The empirical threshold shown by Behe in his book is at 2-3 AAs. In practice, we can concede something more. Let's say 5-6. So, we have a theoretical threshold of 35 AAs, and an empirical one of 6. That is more or less the realistic scenario that we should use to evaluate any possible functional variation that can be generated in a purely random biological system. Most basic protein domains tested by Durston are well beyond even the theoretical threshold: out of 35 protein families he analyzed, 28 are well beyond the 150 bits threshold. Indeed, 11 are beyond the 500 bits threshold, and 6 are beyond the 1000 bits threshold, with a maximum value of 2416 fits for Flu PB2. That is what we have to deal with in the proteome. In practice, we can safely assume that all proteins longer than 100 AAs are well beyond my theoretical threshold of 150 bits. And, obviously, as the smallest fits value found by Durston is 46 fits (for ankyrin, a 33 AAs protein), practically all basic protein domains can safely be considered beyond the empirical threshold of 6 AAs (30 bits). QED. gpuccio
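The estimate above can be reproduced directly from the listed figures; a minimal sketch using gpuccio's numbers as given:

```python
import math

PROKARYOTES          = 5e30      # estimated global prokaryote population
YEARS                = 3.8e9     # life-compatible age of the Earth
GENERATIONS_PER_YEAR = 26208     # ~20-minute duplication time
MUTATION_RATE        = 1e-8      # mutations per base pair per generation
GENE_LENGTH_BP       = 450       # mean protein gene (150 AAs)

events = (PROKARYOTES * YEARS * GENERATIONS_PER_YEAR
          * MUTATION_RATE * GENE_LENGTH_BP)
print(f"{events:.1e} mutational events")                             # ~2.2e+39
print(f"~{math.log2(events):.0f} bits of probabilistic resources")   # ~131, i.e. about 130
```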
