Uncommon Descent Serving The Intelligent Design Community

ID Foundations, 11: Borel’s Infinite Monkeys analysis and the significance of the log reduced Chi metric, Chi_500 = I*S – 500


 (Series)

Emile Borel, 1932

Emile Borel (1871 – 1956) was a distinguished French mathematician who — the son of a minister — came from France’s Protestant minority, and was a founder of measure theory in mathematics. He was also a significant contributor to modern probability theory, and so Knobloch observed of his approach that:

>>Borel published more than fifty papers between 1905 and 1950 on the calculus of probability. They were mainly motivated or influenced by Poincaré, Bertrand, Reichenbach, and Keynes. However, he took for the most part an opposed view because of his realistic attitude toward mathematics. He stressed the important and practical value of probability theory. He emphasized the applications to the different sociological, biological, physical, and mathematical sciences. He preferred to elucidate these applications instead of looking for an axiomatization of probability theory. Its essential peculiarities were for him unpredictability, indeterminism, and discontinuity. Nevertheless, he was interested in a clarification of the probability concept. [Emile Borel as a probabilist, in The probabilist revolution Vol 1 (Cambridge Mass., 1987), 215-233. Cited, Mac Tutor History of Mathematics Archive, Borel Biography.]>>

Among other things, he is credited as the worker who introduced a serious mathematical analysis of the so-called Infinite Monkeys theorem (more on this in a moment).

So, it is unsurprising that Abel, in his recent universal plausibility metric paper, observed  that:

Emile Borel’s limit of cosmic probabilistic resources [c. 1913?] was only 10^50 [[23] (pg. 28-30)]. Borel based this probability bound in part on the product of the number of observable stars (10^9) times the number of possible human observations that could be made on those stars (10^20).

This limit has of course expanded somewhat since the breakthroughs in astronomy occasioned by the Mt Wilson 100-inch telescope under Hubble in the 1920s. However, it underscores how centrally important the issue of available resources is to rendering a given — logically and physically strictly possible but utterly improbable — potential chance-based event reasonably observable.

We may therefore now introduce Wikipedia as a hostile witness, testifying against known ideological interest, in its article on the Infinite Monkeys theorem:

One of the forms in which probabilists now know this theorem, with its “dactylographic” [i.e., typewriting] monkeys (French: singes dactylographes; the French word singe covers both the monkeys and the apes), appeared in Émile Borel‘s 1913 article “Mécanique Statistique et Irréversibilité” (Statistical mechanics and irreversibility),[3] and in his book “Le Hasard” in 1914. His “monkeys” are not actual monkeys; rather, they are a metaphor for an imaginary way to produce a large, random sequence of letters. Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

The physicist Arthur Eddington drew on Borel’s image further in The Nature of the Physical World (1928), writing:

If I let my fingers wander idly over the keys of a typewriter it might happen that my screed made an intelligible sentence. If an army of monkeys were strumming on typewriters they might write all the books in the British Museum. The chance of their doing so is decidedly more favourable than the chance of the molecules returning to one half of the vessel.[4]

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Let us emphasise that last part, as it is so easy to overlook in the heat of the ongoing debates over origins and the significance of the idea that we can infer to design on noticing certain empirical signs:

These images invite the reader to consider the incredible improbability of a large but finite number of monkeys working for a large but finite amount of time producing a significant work, and compare this with the even greater improbability of certain physical events. Any physical process that is even less likely than such monkeys’ success is effectively impossible, and it may safely be said that such a process will never happen.

Why is that?

Because of the nature of sampling from a large space of possible configurations. That is, we face a needle-in-the-haystack challenge.

For, there are only so many resources available in a realistic situation, and only so many observations can therefore be actualised in the time available. As a result, if one is confined to a blind probabilistic, random search process, s/he will soon enough run into the issue that:

a: IF there is a narrow and atypical set of possible outcomes T, that

b: may be described by some definite specification Z (one that does not boil down to listing the set T or the like), and

c: which comprises a set of possibilities E1, E2, . . . En, drawn from

d: a much larger set of possible outcomes, W, THEN:

e: IF, further, we do see some Ei from T, THEN also

f: Ei is not plausibly a chance occurrence.

The reason for this is not hard to spot: when a sufficiently small, chance-based, blind sample is taken from a set of possibilities W (a configuration space), the likeliest outcome is that what is typical of the bulk of the possibilities will be captured, not what is atypical. And this is the foundation-stone of the statistical form of the second law of thermodynamics.
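To make that concrete, here is a minimal illustrative sketch (not part of the original argument; the sample size, zone definitions and trial count are assumptions chosen only to make the point visible): blind samples of 500 coin tosses land in the “typical” bulk of the distribution almost every time, and essentially never in a narrow, atypical zone.

```python
import random

TRIALS = 100_000
N = 500  # coin tosses per sample; the configuration space has 2^500 members

bulk = atypical = 0
for _ in range(TRIALS):
    heads = bin(random.getrandbits(N)).count("1")  # heads in one random 500-toss sample
    if heads >= 400:               # a narrow, atypical zone T (80% or more heads)
        atypical += 1
    if abs(heads - N // 2) <= 25:  # the "typical" bulk of the distribution
        bulk += 1

print(f"Samples landing in the bulk (250 +/- 25 heads): {bulk} of {TRIALS}")
print(f"Samples landing in the atypical zone (>= 400 heads): {atypical} of {TRIALS}")
# Expectation: the bulk is hit nearly every time; the atypical zone essentially never.
```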

Hence, Borel’s remark as summarised by Wikipedia:

Borel said that if a million monkeys typed ten hours a day, it was extremely unlikely that their output would exactly equal all the books of the richest libraries of the world; and yet, in comparison, it was even more unlikely that the laws of statistical mechanics would ever be violated, even briefly.

In recent months, here at UD, we have described this in terms of searching for a needle in a vast haystack [corrective u/d follows]:

let us work back from how it takes ~ 10^30 Planck time states for the fastest chemical reactions, and use this as a yardstick, i.e. in 10^17 s, our solar system’s 10^57 atoms would undergo ~ 10^87 “chemical time” states, about as fast as anything involving atoms could happen. That is 1 in 10^63 of 10^150. So, let’s do an illustrative haystack calculation:

Let us take a straw as weighing about a gram and having comparable density to water, so that a haystack weighing 10^63 g [= 10^57 tonnes] would take up about 10^57 cubic metres. The stack, assuming a cubical shape, would be 10^19 m across. Now, 1 light year = 9.46 * 10^15 m, so that is a stack roughly 1,000 light years on the side. If we were to superpose such a notional 1,000 light year haystack on the zone of space centred on the sun, leave in all stars, planets, comets, rocks, etc., and take a random sample equal in size to one straw, then by absolutely overwhelming odds we would get straw, not star or planet etc. That is, such a sample would be overwhelmingly likely to reflect the bulk of the distribution, not special, isolated zones in it.
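A few lines of Python reproduce the arithmetic of the clip above (a sketch only, using the figures as given; nothing here is an independent measurement):

```python
import math

straw_mass_g = 1.0                      # assume ~1 g per straw
straws = 1e63                           # one straw per available "search" state
stack_mass_g = straw_mass_g * straws    # 10^63 g = 10^57 tonnes
stack_volume_m3 = stack_mass_g * 1e-6   # density ~ water: 1 g ~ 1 cm^3 = 1e-6 m^3
side_m = stack_volume_m3 ** (1 / 3)     # side of a cubical stack

light_year_m = 9.46e15
print(f"Stack side: {side_m:.2e} m (~{side_m / light_year_m:,.0f} light years)")
# Expected output: side ~ 1e19 m, i.e. roughly 1,000 light years across.
```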

With this in mind, we may now look at the Dembski Chi metric, and reduce it to a simpler, more practically applicable form:

m: In 2005, Dembski provided a fairly complex formula, which we can quote and simplify:

χ = – log2[10^120 · ϕ_S(T) · P(T|H)], where χ is “chi” and ϕ is “phi”

n:  To simplify and build a more “practical” mathematical model, we note that information theory researchers Shannon and Hartley showed us how to measure information by changing probability into a log measure that allows pieces of information to add up naturally: Ip = – log p, in bits if the base is 2. (That is where the now familiar unit, the bit, comes from.)

o: So, since 10^120 ~ 2^398, we may do some algebra, using log(p*q*r) = log(p) + log(q) + log(r) and log(1/p) = – log(p), and writing D2 for ϕ_S(T) and p for P(T|H):

Chi = – log2(2^398 * D2 * p), in bits

Chi = Ip – (398 + K2), where Ip = – log2(p) and K2 = log2(D2)

p: But since 398 + K2 tends to at most 500 bits on the gamut of our solar system [our practical universe for chemical interactions! (if you want, 1,000 bits would be a limit for the observable cosmos)] and

q: as we can define a dummy variable for specificity, S, where S = 1 or 0 according as the observed configuration, E, is on objective analysis specific to a narrow and independently describable zone of interest, T:

Chi_500 =  Ip*S – 500, in bits beyond a “complex enough” threshold

(If S = 0, Chi = – 500, and, if Ip is less than 500 bits, Chi will be negative even if S is positive. E.g.: a string of 501 coins tossed at random will have S = 0, but if the coins are arranged to spell out a message in English using the ASCII code [notice the independent specification of a narrow zone of possible configurations, T], Chi will — unsurprisingly — be positive.) A worked numerical sketch of this reduction, and of the examples in item t below, follows after item v.

r: So, we have some reason to suggest that if something, E, is based on specific information describable in a way that does not just quote E and requires at least 500 specific bits to store the specific information, then the most reasonable explanation for the cause of E is that it was intelligently designed. (For instance, no-one would dream of asserting seriously that the English text of this post is a matter of chance occurrence giving rise to a lucky configuration, a point that was well-understood by that Bible-thumping redneck fundy — NOT! — Cicero in 50 BC.)

s: The metric may be directly applied to biological cases:

t: Using Durston’s fits values — functionally specific bits — from his Table 1 to quantify I, and accepting functionality on specific sequences as showing specificity, giving S = 1, we may apply the simplified Chi_500 metric of bits beyond the threshold:

RecA: 242 AA, 832 fits, Chi: 332 bits beyond

SecY: 342 AA, 688 fits, Chi: 188 bits beyond

Corona S2: 445 AA, 1285 fits, Chi: 785 bits beyond

u: And this raises the controversial inference that biological examples such as DNA — which in a living cell stores far more than 500 bits — may be designed to carry out particular functions in the cell and the wider organism.

v: Therefore, we have at least one possible general empirical sign of intelligent design, namely: functionally specific, complex organisation and associated information [FSCO/I].
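Here, as promised, is a minimal numerical sketch of the reduction in items n through q and of the examples in item t (illustrative only: the 398- and 500-bit figures and the Durston fits values are taken as given above, the p and D2 example values are purely hypothetical, and the helper function names are mine, not Dembski’s or Durston’s):

```python
import math

def chi_dembski(p, D2):
    """Chi = -log2(2^398 * D2 * p) = Ip - (398 + K2), per items o and p above."""
    Ip = -math.log2(p)   # information in bits for the observed event
    K2 = math.log2(D2)   # log measure of the descriptive-complexity factor phi_S(T)
    return Ip - (398 + K2)

def chi_500(Ip, S):
    """Chi_500 = Ip*S - 500, per item q; S = 1 if independently specified, else 0."""
    return Ip * S - 500

# 10^120 is indeed close to 2^398:
print(round(math.log2(10**120), 1))      # 398.6

# Purely hypothetical p and D2, just to exercise the algebra:
print(chi_dembski(p=2**-600, D2=2**10))  # 600 - (398 + 10) = 192 bits

# The 501-coin illustrations from item q:
print(chi_500(Ip=501, S=0))              # -500: complex but not specified
print(chi_500(Ip=501, S=1))              # +1 bit beyond the threshold

# Item t: Durston fits values, with S = 1 for functionally specific sequences:
for name, fits in [("RecA", 832), ("SecY", 688), ("Corona S2", 1285)]:
    print(f"{name}: {chi_500(Ip=fits, S=1)} bits beyond the threshold")
```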

But, but, but . . . isn’t “natural selection” precisely NOT a chance based process, so doesn’t the ability to reproduce in environments and adapt to new niches then dominate the population make nonsense of such a calculation?

NO.

Why is that?

Because of the actual claimed source of variation (which is often masked by the emphasis on “selection”) and the scope of innovations required to originate functionally effective body plans, as opposed to varying same — starting with the very first one, i.e. Origin of Life, OOL.

But that’s Hoyle’s fallacy!

Advice: when going up against a Nobel-equivalent prize-holder whose field requires expertise in mathematics and thermodynamics, one would be well advised to examine carefully the underpinnings of what is being said, not just the rhetorical flourish about tornadoes in junkyards in Seattle assembling 747 Jumbo Jets.

More specifically, the key concept of Darwinian evolution [we need not detain ourselves too much on debates over mutations as the way variations manifest themselves], is that:

CHANCE VARIATION (CV) + NATURAL “SELECTION” (NS) –> DESCENT WITH (UNLIMITED) MODIFICATION (DWM), i.e. “EVOLUTION.”

CV + NS –> DWM, aka Evolution

If we look at NS, this boils down to differential reproductive success in environments leading to elimination of the relatively unfit.

That is, NS is a culling-out process, a subtract-er of information, not the claimed source of information.

That leaves only CV, i.e. blind chance, manifested in various ways. (And of course, in anticipation of some of the usual side-tracks, we must note that the Darwinian view, as modified through the genetic mutations concept and population genetics to describe how population fractions shift, is the dominant view in the field.)

There are of course some empirical cases in point, but in all these cases, what is observed is fairly minor variations within a given body plan, not the relevant issue: the spontaneous emergence of such a complex, functionally specific and tightly integrated body plan, which must be viable from the zygote on up.

To cover that gap, we have a well-known metaphorical image — an analogy, the Darwinian Tree of Life. This boils down to implying that there is a vast contiguous continent of functionally possible variations of life forms, so that we may see a smooth incremental development across that vast fitness landscape, once we had an original life form capable of self-replication.

What is the evidence for that?

Actually, nil.

The fossil record, the only direct empirical evidence of the remote past, is notoriously that of sudden appearances of novel forms, stasis (with some variability within the form obviously), and disappearance and/or continuation into the modern world.

If by contrast the tree of life framework were the observed reality, we would see a fossil record DOMINATED by transitional forms, not the few strained examples that are so often triumphalistically presented in textbooks and museums.

Similarly, it is notorious that fairly minor variations in the embryological development process are easily fatal. No surprise: if we have a highly complex, deeply interwoven, interactive system, chance disturbances are overwhelmingly going to be disruptive.

Likewise, complex, functionally specific hardware is not designed and developed by small, chance based functional increments to an existing simple form.

Hoyle’s challenge of overwhelming improbability does not begin with the assembly of a Jumbo jet by chance, it begins with the assembly of say an indicating instrument on its cockpit instrument panel.

The D’Arsonval galvanometer movement commonly used in indicating instruments is an adaptation of a motor that runs against a spiral spring (to give deflection proportional to the input current across the magnetic field) and has an attached needle moving across a scale. Such an instrument, historically, was often adapted for measuring all sorts of quantities on a panel.

(Indeed, it would be utterly unlikely for chance shaking of a large box of mixed nuts and bolts to bring together a matching nut and bolt and screw them together tightly, which would be only the first step to assembling the instrument by chance.)

Further to this, it would be bad enough to try to get together the text strings for a Hello World program (let’s leave off the implementing machinery and software that make it work) by chance. To then incrementally create an operating system from it, with each small step along the way being functional, would be a bizarre, operationally impossible super-task.

So, the real challenge is that those who have put forth the tree-of-life, continent-of-function type approach have got to show empirically that their step by step path up the slopes of Mt Improbable is observable, at least in reasonable model cases. And they need to show that, in effect, chance variations on a Hello World will lead, within reasonable plausibility, to a stepwise development that transforms the Hello World into something fundamentally different.

In short, we have excellent reason to infer that — absent empirical demonstration otherwise — complex, specifically functional, integrated organisation arises in clusters that are atypical of the general run of the vastly larger set of physically possible configurations of components. And the strongest pointer that this is plainly so for life forms as well is the detailed, complex, step by step, information-controlled nature of the processes in the cell that use information stored in DNA to make proteins. Let’s call Wiki as a hostile witness again, courtesy of two key diagrams:

I: Overview:

The step-by-step process of protein synthesis, controlled by the digital (= discrete state) information stored in DNA

II: Focusing on the Ribosome in action for protein synthesis:

The Ribosome, assembling a protein step by step based on the instructions in the mRNA “control tape” (the AA chain is then folded and put to work)

Clay animation video [added Dec 4]:

[youtube OEJ0GWAoSYY]

More detailed animation [added Dec 4]:

[vimeo 31830891]

This sort of elaborate, tightly controlled, instruction based step by step process is itself a strong sign that this sort of outcome is unlikely by chance variations.

(And attempts to deny the obvious, that we are looking at digital information at work in algorithmic, step by step processes, are themselves a sign that there is a controlling a priori at work, one that must lock out the very evidence before our eyes in order to succeed. The above is not intended to persuade such objectors; they are plainly not open to evidence, so we can only note how their position reduces to patent absurdity in the face of evidence and move on.)

But, isn’t the insertion of a dummy variable S into the Chi_500 metric little more than question-begging?

Again, NO.

Let us consider a simple form of the per-aspect explanatory filter approach:

The per aspect design inference explanatory filter

 

You will observe two key decision nodes, where the first default is that the aspect of the object, phenomenon or process being studied is rooted in a natural, lawlike regularity that under similar conditions will produce similar outcomes, i.e. there is a reliable law of nature at work, leading to low contingency of outcomes. A dropped, heavy object near earth’s surface will reliably fall with initial acceleration g, 9.8 m/s². That lawlike behaviour with low contingency can be empirically investigated and would eliminate design as a reasonable explanation.

Second, we see some situations where there is a high degree of contingency of possible outcomes under similar initial circumstances. This is the more interesting case, and in our experience it has two candidate mechanisms: chance, or choice. The default for S under these circumstances is 0. That is, the presumption is that chance is an adequate explanation, unless there is a good — empirical and/or analytical — reason to think otherwise. In short, on investigation of the dynamics of volcanoes and our experience with them, rooted in direct observations, the complexity of a Mt Pinatubo is explained partly on natural laws and partly on chance variations; there is no need to infer to choice to explain its structure.

But if the observed configurations of highly contingent elements were from a narrow and atypical zone T not credibly reachable on the search resources available, then we would be objectively warranted to infer to choice. For instance, a chance-based text string of length equal to this post would overwhelmingly be gibberish, so we are entitled to note the functional specificity at work in the post and assign S = 1 here.
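A toy sketch of the per-aspect filter logic just described may help (the function name, inputs and thresholds are mine and purely illustrative; in real cases the contingency and specificity judgements require empirical work that no code snippet can supply):

```python
def explanatory_filter(low_contingency, Ip, S, threshold=500):
    """Classify one aspect of an object, phenomenon or process."""
    if low_contingency:
        return "law (natural regularity)"   # first decision node
    if Ip * S - threshold > 0:              # second node: is Chi positive?
        return "design (choice)"
    return "chance (the default for high contingency)"

print(explanatory_filter(low_contingency=True, Ip=0, S=0))     # dropped heavy object: law
print(explanatory_filter(low_contingency=False, Ip=700, S=0))  # volcano-like aspect: chance
print(explanatory_filter(low_contingency=False, Ip=700, S=1))  # long English text: design
```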

So, the dummy variable S is not a matter of question-begging, never mind the usual dismissive talking points.

I is of course an information measure based on standard approaches, through the sort of probabilistic calculations Hartley and Shannon used, or by a direct observation of the state-structure of a system [e.g. on/off switches naturally encode one bit each].
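As a small sketch of the direct, state-structure route just mentioned (raw storage capacities only; the element counts below are assumptions for illustration, and real codes carry somewhat less than these maxima):

```python
import math

def raw_capacity_bits(states_per_element, num_elements):
    """Raw information-carrying capacity of num_elements discrete storage elements."""
    return num_elements * math.log2(states_per_element)

print(raw_capacity_bits(2, 1))     # one on/off switch: 1.0 bit
print(raw_capacity_bits(4, 250))   # 250 DNA bases (G/C/A/T): 500.0 bits raw capacity
print(raw_capacity_bits(20, 120))  # a 120-residue protein: ~518.6 bits raw capacity
```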

And where an entity is not a direct information-storing object, we may reduce it to a mesh of nodes and arcs, then investigate how much variation can be allowed while still retaining adequate function; i.e. a key and lock can be reduced to a bit measure of implied information, and a sculpture like that at Mt Rushmore can similarly be analysed, given the specificity of portraiture.

The 500 is a threshold, related to the limits of the search resources of our solar system, and if we want more, we can easily move up to the 1,000 bit threshold for our observed cosmos.

On needle-in-a-haystack grounds, or monkeys-strumming-at-keyboards grounds, if we are dealing with functionally specific, complex information beyond these thresholds, the best explanation for seeing such is design.

And, that is abundantly verified by the contents of say the Library of Congress (26 million works) or the Internet, or the product across time of the Computer programming industry.

But, what about Genetic Algorithms etc, don’t they prove that such FSCI can come about by cumulative progress based on trial and error rewarded by success?

Not really.

As a rule, such are about generalised hill-climbing within islands of function characterised by intelligently designed fitness functions with well-behaved trends and controlled variation within equally intelligently designed search algorithms. They start within a target Zone T, by design, and proceed to adapt incrementally based on built in designed algorithms.

If such a GA were to emerge from a Hello World by incremental chance variations that worked as programs in their own right every step of the way, that would be a different story, but for excellent reason we can safely include GAs in the set of cases where FSCI comes about by choice, not chance.

So, we can see what the Chi_500 expression means, and how it is a reasonable and empirically supported tool for measuring complex specified information, especially where the specification is functionally based.

And, we can see the basis for what it is doing, and why one is justified to use it, despite many commonly encountered objections. END

________

F/N, Jan 22: In response to a renewed controversy tangential to another blog thread, I have redirected discussion here. As a point of reference for background information, I append a clip from the thread:

. . . [If you wish to find] basic background on info theory and similar background from serious sources, then go to the linked thread . . . And BTW, Shannon’s original 1948 paper is still a good early stop-off on this. I just did a web search and see it is surprisingly hard to get a good simple free online 101 on info theory for the non mathematically sophisticated; to my astonishment the section A of my always linked note clipped from above is by comparison a fairly useful first intro. I like this intro at the next level here, this is similar, this is nice and short while introducing notation, this is a short book in effect, this is a longer one, and I suggest the Marks lecture on evo informatics here as a useful contextualisation. Qualitative outline here. I note as well Perry Marshall’s related exchange here, to save going over long since adequately answered talking points, such as asserting that DNA in the context of genes is not coded information expressed in a string of 4-state per position G/C/A/T monomers. The one good thing is, I found the Jaynes 1957 paper online, now added to my vault, no cloud without a silver lining.

If you are genuinely puzzled about practical heuristics, I suggest a look at the geoglyphs example already linked. This genetic discussion may help on the basic ideas, but of course the issues Durston et al raised in 2007 are not delved into there.

(I must note that an industry-full of complex praxis is going to be hard to reduce to an in-a-nutshell summary. However, we are quite familiar with information at work, and with how we routinely measure it, as in the familiar: “this Word file is 235 k bytes.” That such a file is exceedingly functionally specific can be seen by the experiment of opening one up in an inspection package that will access the raw symbols of the file. A lot of it will look like repetitive nonsense, but if you clip off such, sometimes just one header character, the file will be corrupted and will not open as a Word file. When we have a great many parts that must be right and in the right pattern for something to work in a given context like this, we are dealing with functionally specific, complex organisation and associated information, FSCO/I for short.)
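That experiment can even be scripted as a minimal sketch, assuming a modern .docx (which is a ZIP container); “example.docx” is a hypothetical file name standing in for any such document on disk:

```python
import zipfile

# 'example.docx' is a hypothetical file name; any .docx on disk will do.
with open("example.docx", "rb") as f:
    data = bytearray(f.read())

data[0] ^= 0xFF  # flip one byte of the ZIP local-file-header signature ("PK...")

with open("corrupted.docx", "wb") as f:
    f.write(data)

try:
    with zipfile.ZipFile("corrupted.docx") as z:
        for name in z.namelist():
            z.read(name)  # reading the members hits the corrupted header
    print("Unexpectedly still readable")
except zipfile.BadZipFile:
    print("One flipped header byte and the file no longer reads as a valid document container.")
```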

The point of the main post above is that once we have this, and are past 500 bits or 1000 bits, it is not credible that such can arise by blind chance and mechanical necessity. But of course, intelligence routinely produces such, like comments in this thread. Objectors can answer all of this quite simply, by producing a case where such chance and necessity — without intelligent action by the back door — produces such FSCO/I. If they could do this, the heart would be cut out of design theory. But, year after year, thread after thread, here and elsewhere, this simple challenge is not being met. Borel, as discussed above, points out the basic reason why.

Comments
eigenstate, Where you are involved most likely the far side, not the dark side. Function and meaning are things that are observed. Complexity can be described by mathematics, but function and meaning not so and it isn't even necessary.Joe
January 23, 2012, 08:15 AM PDT
I have another metaphor for the kind of search. I have a Roomba, a robot vacuum cleaner. It has a severely limited range of behavior: go forward until an object is hit, change direction, go forward again. There a few non-random modes for changing direction, but none of them involve knowing anything about the landscape. Nevertheless, it covers the landscape in a reasonable time.Petrushka
January 23, 2012, 08:12 AM PDT
@Joe, Welcome to the dark side, Joe! That's a question for gpuccio. Function is a conceptual cousin of meaning, and while both may be describable quantitatively in principle, in practice we are currently unable. Your objection is the foundation of the critic's rejection of FCSI and dFSCI -- way to go. Critics don't get the leeway to speak candidly in the way you get to, given your privileged status as a pro-ID member here, but occasionally it works out that you make points that fold back on ID, and so your use of "pathetic" gets applied in ways lowly critics couldn't apply. If we can agree that such a request is a non-starter, then we are really getting somewhere. The trouble is, there are ID advocates here who claim that it can be done and is regularly done and in a straightforward matter, nevermind the conspicuous absence of working examples. If you think that's a fool's errand, you should take it up with them. Their apologetics depend on that claim.eigenstate
January 23, 2012, 08:10 AM PDT
Math for function? Talk about pathetic requests. is there math for meaning?Joe
January 23, 2012, 08:02 AM PDT
"Tactile search" is excellent :) Thanks! For many reasons, not least being the likelihood that the ability to do more than tactile searches (as plants do), but detect things (predators; prey) at at distance (using reflected light, chemical signals, vibrations, aka sight, smell and hearing) drove the evolution of intelligence itself (the possibility of forward modelling, and therefore of intentional behaviour). Evolution is as smart as a plant :) Quite smart, in fact. But not as smart as an animal.Elizabeth Liddle
January 23, 2012, 07:59 AM PDT
Qualitative or quantitative? I was saying that he didn't seem to be proposing a quantitative measure. I think kf is, though. I agree re inputs. That seems to me the glaring flaw in Dembski's formula. It's only as good as the assumptions behind the inputs, and those are precisely what are at issue.Elizabeth Liddle
January 23, 2012, 07:55 AM PDT
The phrase blind search is not and never has been appropriate. Evolution always tests the edges of what already works. If you need a metaphor, it is more of a tactile search, feeling around. It's true that it doesn't see ahead, but it isn't testing the entire space, just what's in the vicinity.Petrushka
January 23, 2012, 07:48 AM PDT
@Elizabeth, I think gpuccio would say he IS advancing a qualitative measure, but on whose measurement must be preceded by human recognition to be bootstrapped. His most recent message to me suggests he really only wants to look at dFSCI, for practical reasons ("digital" implies "already quantified" at some level -- bits). That said, I'm in the midst of taking up the "quantativity" of his "functional complexity" concept, and am currently thinking it's either a) not quantitative in an end-to-end problem (can't be resolved against real inputs in a way to produce discrete outputs), or b) it's quantitative, but he's confusing it with simple information entropy (the bitwise complexity of the string). b) is the leading contender as to his view, right now, best I can judge, but as I said we are hashing that out right now. He may provide some good clarity on this. But that is the key problem, the reason why information theorists who've tried (and many have) to develop quantitative models of functionality, or meta-systematization have failed: the mapping of inputs into the model is incomputable presently. Whether that's because it's incomputable in principle, or it's just insuperably complex given the current state of our knowledge and tools isn't clear, but that gnarly problem makes it no surprise that gpuccio or any other IDer here is somewhat thin on providing concrete examples of quantitative analysis of function. As for kf, he won't do any math for his examples driven from the inputs, so that's that. I asked him about doing the math for function in his recent geoglyphs, but don't hold your breath.eigenstate
January 23, 2012, 07:47 AM PDT
GP: Again well put. I note you are giving a simple first pass measure, that is going to be good enough for cases where things can be taken as flat random, of course in more complex cases adjustments can be made. the approach you have given is effectively what was put on teh table as a start point maybe 6 years ago. The usual rhetorical game is to complain against a flat random assumption, to which the best answer is, first, that is exactly how we will approach say basic statistical thermodynamics or probability absent any reason to conclude bias. That is why a six on a die is odds 1 in 6. If we have reason to think something is jiggering the odds a bit, then we make an adjustment. For instance, real codes typically will not have flat random symbol distributions, but that is taken up in the shift to H as a measure of average information per symbol. This of course means that less information is passed than a flat random distribution "code" would do. Down t5hat road, we go to say Durston et al, and their table of 35 protein families, based on OBSERVED AA frequencies in aligned sequences. Thus, their FITS -- functional bits -- metric has in hand empirical evidence on distributions. You will see that when I gave three biological examples of the Chi-500 log reduced chi metric, I used Durston's published fits values. It is in that context that I then looked at the 500 bit threshold and of course the families were well beyond it. (Remember, for living forms the issue is really going to be the SET of proteins, and of course again that set is well beyond 500 bits.) My own twist is that we can also do a direct estimation of information carrying capacity, where we have multi-state storage elements, e.g. the 4-state elements in DNA that have a raw info capacity of 2 bits per place, just the actual code throws away some of that through the actual distribution. Not enough to make a difference, of course. Similarly, proteins are 20- state by and large, so 4.32 bits/element raw, but again we see a bit of a takeaway in protein families etc. But, not enough to take away from teh basic point. When it comes to things that do not have obvious string type structures, things get a little more complicated as you point out. For these, we can simply follow what is done in CAD, and go for the nodes and arcs framework that captures the relevant topology. We may also need to specify parts and their orientation at nodes. All of that can be reduced to structured data arrays, and we may then have an equivalwent set of strings, that gives us the implied information. We can then see if and how random perturbation to a modest degree affects performance. Unsurprisingly, we soon see that the pattern of functional specificity to relatively narrow zones T in the set of possibilities W, is a general pattern of things based on integrated function of enough components to run past 500 bits. That starts with simple cases like text in English, it goes on to things like computer programs, it goes on to cases like proteins -- how many slight alterations are at once non-functional! For a fishing reel, a bit of sand can make it grind to a halt so quickly you would be astonished. And you would be amazed to see how sand can seemingly vanish and transport itself then rematerialise where you would think it could never get to: I have come to believe in the near-miraculous powers of sand. Especially if you get caught in a wave. [Van Staals are SEALED like a watch, that's how the boys at Montauk swim out with them in wet suits, to rocks.] 
In short the case of isolated islands of function in seas of non function is a commonplace reality. That then leads to the next point, on what happens with blind searches of config spaces. Once we are looking at blind samples of very large spaces W, at or beyond 10^150 possibilities, where the available resources are locking us to 1 in 10^48 or worse, we have no right to expect to hit isolated zones T by blind luck. The only empirically known way to go to T's reliably is by intelligence, whether by programming a system or by whatever means conscious intelligences use to do things like compose and write coherent text or invent complex music or whatever. That is what the likes of ES seem ever so desperate to avoid by any means they deem necessary. GEM of TKIkairosfocus
January 23, 2012, 07:37 AM PDT
eigenstate. I see you have written much, so I will answer post by post. This is for post 12.2 I think you don't read what I write, or just don't want to understand what is really simple. well, I will state again in brief my definition (and procedure) for dFSCI: a) We observe a digital sequence. b) If we recognize a function for that sequence, we define it and the way to measure it. c) We try to measure the search space Usually approximated to the combinatiorial complexity of a random string of that length in that alphabet); then we try to measure, or approximate, the functional space (the number of sequences of that length that implement the function. d) We divide the numerosity of the functional space by the numerosity of the search space. That is the probability of fincding a functional sequence in the search space by a random search. e) dFSCI in bits is -log2 of the result. We are just expressing that probability as bits of functional complexity. f) We take into account any known algorithm that can generate the sequence in a non random way, and if there is some, we modify the computation including the modifications implied by the necessity algorithm to the random search. You say: So, clearly, there is some quality of “complex function” you are describing which is NOT the combination (or sum) of functionality and complexity. I searched UD a bit to see if I could locate some treatment on this from you, but could not. How can you say that? I have repeated a lot of time that a complex function is simpèly a function that is highly unlikely to emerge in a random way. The fubnctional complexity is the complexity of the function, not of a singular string that expresses the function. It is the -log2 of the ration of all the strings that express the function to all the possible strings. Therefore, in you example of a random string: a) The string is complex b) It expresses a function (possible use for cryptation) c) The function is simple (almost all the strings in the search space express it. It is simple. Why don't you understand it? I will not discuss the geopglyph becasue I prefer to discuss only digital sequences (that's the reason for the "d" in dFSCI). It's simpler. Therefore, the problem of how to measure the bits is already solved, in the way I have shown. Consider your reaction accelerator function. If we take that as our test, “bits” can only make sense as a quantitative measure of the phase space the function is enclosed by. But that phase space is NOT captured by casually “estimating” how many “yes/no” decisions (i.e. bits) are embodied in the function. Bits are not the currency of function. We might use bits as a means of simulating the mechanics of a particular function via finite automata, and then count how many bits we would need to implement a software program that models these mechanics. In that case, though, you are only measuring bits of a software DERIVATIVE, not the functionality itself, measuring the size of a mimimalist program that emulates that function. What do you mean here? A protein, or a protein gene, are digital sequences. Their complexity can be measured in bits. The ration of functional sequences to possible sequences can be calculated, and expressed in bit. It is a measure of the functional complexity, because it measures how much the function constrains variation in the sequence. What is your problem? We need no software, and no derivation or emulation of any kind. 
As you are aware (I believe) there is no way to deterministically establish a program is the shortest possible program that will produce a given output in a given description language (cf. Kolmogorov ). We can only arrive our “best effort”, which isn’t a “true” measure of the required bits for some function, but a “best guess”, and even then, it’s not actually measuring function, but a software PROXY for some function. We just need to take into account known algorithms, so there is no problem here. If a compression algorithm is known and empirically testable, we take it into account. Otherwise, we don't. This is a common objection, and has no empirical value. The reason that’s so hard is that anything you are likely to name as a function is going to be millions of bits of functional complexity, even for mundane, simple, known-to-be-perfectly-mechanical-non-designed functions. The degrees of freedom that are implicated in just your AA sequence that drives accelerated reactions is going to be ENORMOUS. For instance, if a functional protein is 120 AAs long, the maximal dFSCI for that function (assuming thatv only one sequence implements it) will be -log2 of 1:20^120, that is 518 bits. That is a big number, but perfectly computable. Very big, certainly. Big enough to make a random origin empirically impossible. But easy to compute. And your statement that: "anything you are likely to name as a function is going to be millions of bits of functional complexity, even for mundane, simple, known-to-be-perfectly-mechanical-non-designed functions" is simply false. Please, give examples of digital sequences with functional complexity (as defined) of "millions of bits", or even much less, that are non designed. I am waiting.gpuccio
January 23, 2012, 06:31 AM PDT
awesome :) I'm not sure that gpuccio is actually putting forward an a quantitive measure at this point. Have you taken a look at kf's? That seems to involve actual equations! However, I don't know how he is quantifying the inputs. Do you? And, kf, can you explain? I'm particularly interested in P(T|H). That seems to be the key term.Elizabeth Liddle
January 23, 2012, 02:15 AM PDT
@junkdnaforlife, Well, it won't be FCSI as used by ID advocates here. That's an acronym pretending to be an applicable numerical concept, a rhetorical device. A human child has a supercomputer that embarrasses the largest supercomputers we've ever built, and surpasses the largest cloud-based clusters we could assemble if we tied all the earth's computers together into one high-speed network computing mesh. 100million + neurons and 100 trillion + synapses, all running in n-to-n parallel topology and in firing in real time (and so densely packed in 3D space that communication times are a tiny fraction of what we would hope to manage with our amalgamated hardware). But our computing power and processing speed advances by the day. We've fallen off Moore's Law some time ago, but the advance is still very rapid. We're still figuring out the wiring and firing mechanics of spiking neural nets and related systems that provide such astonishing performance in recognition by the small child. I have the code faith, although as a guy in my forties, I may not live long enough to see that much growth in the computing power we can marshal, but who knows? Facial recognition doesn't demand the whole of the human brain's resources, and that's something advancing very quickly as our software infrastructure gets better and better in terms of neural network libraries and runtimes. OT: As an aside, I'm working on a first cut of a GPU-powered multi-layer perceptron (my first time developing for a GPU host directly). That's not an idea I came up with (using GPUMlib -- C++ and Cuda), but just initial 'Hello world" tests are amazing in terms of performance vs. a similar perceptron running on CPU (8 cores). The card is a fairly high end if a bit outdated GTX470, but wow, can that thing crunch. You know the parallelism is going to give it a big boost over the 8 cores, but when you see it run, it's quite dramatic. You could power a pretty sophisticated facial recognition program with just that card, I think (maybe it's already been done). My perceptron doesn't do image recognition, but drives motor controllers from input sensors in a [Bullet Physics] 3D environment which is running on the CPUs.eigenstate
January 23, 2012, 01:16 AM PDT
This is true, but I'm not at all sure how it supports gpuccio's case. It seems to go the other way to me (and is a point I've made myself before now - inanimate, presumed "unconscious". algorithms are perfectly capable of complex perceptual tasks. Interestingly many of them do it using internal evolutionary algorithms, i.e. learning algorithms, as it is likely our own brains do as well. But don't let me start another derail! What we want is for gpuccio to provide the objective algorithm that detects FCSI, but he seems to be arguing that there isn't one - you need a conscious being to do it. Correct me if I'm wrong gpuccio.Elizabeth Liddle
January 23, 2012, 01:11 AM PDT
F/N: Of Ribosomes, process sequence charts, spinning reels and the objective reality of FSCO/I, here. In Poker they would say: read and weep. EP, thank you ever so much. Timely! (And, oh yes, once we see string data structures, we can build up more complex arrays by use of pointers etc. This is how a PC -- memory is a stack of 8-bit strings as a rule -- works, and for instance, a CAD package file presents the "wiring diagram" structure of a system by doing just that. Each bit in such a system is a structurally organised Y/N question and its answer. The text in this post is a case in point too, each letter or alphanumeric glyph being seven Y/N answers, with another one used to parity check. If you want to see what that would imply for a three dimensional functionally specific and complex object, think about reducing the Abu Cardinal exploded view, Fig 6 the linked, to a more abstract nodes-arcs view, where the part number and orientation of each part are coded [think about how assembly line machines would have to be arranged to effect this on the ground!], and there is a data structure that codes the nods and arcs. The assembly process for the Cardinal will be similar to the process shown in fig 4, which is an assembly diagram for the protein chain assembly process, done by an industrial robotics practitioner. This is yet another level of nodes-arcs diagram, and the text in its nodes would be further answers to Y/N q's at 7 bits per character, constituting prescriptive coded information. These all run past 500 - 1,000 bits so fast that it is ridiculous. I wonder, can we still get something like the old Heathkits with their wonderful assembly manuals? maybe, that will help people to begin to understand what we are dealing with, those manuals were usually dozens of pages long, and richly illustrated. I still remember the joy of assembling my very own ET 3400 A [I modded the design by bedding the crystal in bluetac, cushioning it], a classic educational SBC which sits in one of our cupboards here 25 years later; a pity the volcanic fumes damaged the hardware while in storage.) KFkairosfocus
January 23, 2012, 01:10 AM PDT
@kf, Can you supply just one list of the yes/no questions you refer to, here? I understand bits as a binary state (e.g yes|no), but do not understand binary states or bits to be the units of functionality, or the complexity of functionality. Here again is an opportunity for some actual delivery on the applied concepts you talk about would go far, where your abstract references and generalities don't get you anywhere at all. How do you go about specifying a system? Take a brutally simple system, no need for more-than-universal-atoms, just a system which you think has a complexity of a handful of bits. That wouldn't take long (if you can do such at all), and would be EXTREMELY educational on this topic.eigenstate
January 23, 2012, 12:27 AM PDT
@gpuccio#12, I'm out of time for tonight in walking through your posts #10 and 12, but will proceed as I have time tomorrow and beyond. In re-reading your #12, though, I wanted to pull this out and comment on it before I go for now: I said:
If my goal is to secure access to my account and my password is limited to 32 characters, a random string I generate for the full 32 characters is the most the most efficient design possible.
To which you replied:
Yes, and any random string of 32 characters will do. Function, but not complex function.
So, clearly, there is some quality of "complex function" you are describing which is NOT the combination (or sum) of functionality and complexity. I searched UD a bit to see if I could locate some treatment on this from you, but could not. In any case, if you grant that my 32 char string is functional for our purposes, and you have, I was offering a random sequence of characters as the content of that string to give it maximum complexity. That is, maximum complexity per information metrics (I, H). I chose this specifically to "peg the needle" in terms of being "functional" and "maximally complex". But clearly, "bits" as an information theory metric is not what you are going for, even though you (curiously) say the answer is rendered in "bits" as your units. If you want to measure "functional complexity" you are either a) confused in responding to me as you have, as a random string has maximal complexity as the substance for the function, or b) you are confused in thinking that "bits" is a quantitative measure of functionality. Given what you've said above, now (repeatedly), it cannot be a), so you must be objecting on the basis of b). And this explains why you are neither able to quantify functionality, nor interested in doing so. It's not a matter of bits for you. A good way to assess this problem, as I keep pressing for, is to actually try to APPLY the concepts in a quantitative, rule based way, and test it. Consider your reaction accelerator function. If we take that as our test, "bits" can only make sense as a quantitative measure of the phase space the function is enclosed by. But that phase space is NOT captured by casually "estimating" how many "yes/no" decisions (i.e. bits) are embodied in the function. Bits are not the currency of function. We might use bits as a means of simulating the mechanics of a particular function via finite automata, and then count how many bits we would need to implement a software program that models these mechanics. In that case, though, you are only measuring bits of a software DERIVATIVE, not the functionality itself, measuring the size of a mimimalist program that emulates that function. As you are aware (I believe) there is no way to deterministically establish a program is the shortest possible program that will produce a given output in a given description language (cf. Kolmogorov ). We can only arrive our "best effort", which isn't a "true" measure of the required bits for some function, but a "best guess", and even then, it's not actually measuring function, but a software PROXY for some function. Without that in mind, you have case where, for example, KF supposes that the roundness of a geoglyph (an OCP per the terms above) surely comprises more than 300 bits of functional complexity, which is just an absurd framework for assessing that phenomenon. What are those "yes/no" questions that we have 300+ of in the case of the geoglyph? Oh, right, they aren't yes/no questions? That's why I asked! What do those "bits" represent, then? As far as I can tell, nothing. I'd be happy to have KF explain where and how he allocates those bits, but the point here is that thinking about "bits" as a measure of function in the SAME WAY ONE THINKS ABOUT INFO THEORY "bits" is a fail, unless you want to stipulate that functionality (er, functional complexity) is synonymous with information entropy. But that can't be, as you've shown, else a 32 character random string for a password would be "maximally functionally complex". 
So it's something else, something that's not been defined, or even engaged as far as I can tell on this forum. If you hired me as your technical consultant, I'd first ask for a big raise as hazard pay on this topic, then I'd pursue something like the above. Functional complexity as the number of bits required for a computer program that serves as a proxy for the actual phenomenon. That's still a way to get nowhere, probably, and likely laughed as we do, but it would AT LEAST be a heuristic that could provide some semantics for the metric, some grounding for what "bits" might measure in a serious approach to the concept. The reason that's so hard is that anything you are likely to name as a function is going to be millions of bits of functional complexity, even for mundane, simple, known-to-be-perfectly-mechanical-non-designed functions. The degrees of freedom that are implicated in just your AA sequence that drives accelerated reactions is going to be ENORMOUS. That's not a boon for ID, that's a problem. It reflects the information-is-physical/physical-is-information aspects of physics, which puts ANY functional specification WAY over the UPB, even for the most basic, banal function you could point to.eigenstate
January 23, 2012, 12:19 AM PDT
eigenstate: "It can only be accomplished by the “magic” of conscious beings." What about facial recognition algos vs human recognition? facial rec algos fail miserably (swamping false positives) vs human. a human child can easily target the match within seconds with very few false positives. eventually we will get there with facial rec algos, so I wouldn't curb stomp fcsi based on the current reliance on human consciousness to identify it just yet, have some code faithjunkdnaforlife
January 22, 2012, 11:49 PM PDT
F/N 2: It has apparently not soaked into ES, that the FSCI threshold is not a hard barrier, it is a matter of the implications of ever increasing scope of blind search. Beyond a certain point a challenge becomes insuperable, and it is most definitely not "question-begging" or the like to point out that the only known and routinely observed way to get past that hump is intelligence. In short, we have here a massively supported observation, and a needle in a haystack analysis that points to why it is so. To put it another way around, it is always possible to have "lucky noise" making a pile of rocks falling down a hillside come up trumps reading Welcome to Wales or the like. But, given the isolation of such a functional and contextually relevant config in the space of possibilities for rockfalls, if you see rocks spelling out "Welcome to Wales" on a hillside on the Welsh border, you are fully warranted to infer to intelligent design. For reasons that are not too hard to figure out, if one is willing to accept the POSSIBILITY of intelligence at the given place and time. (And we suddenly see the real rhetorical significance of all of that snide dismissal of "the supernatural," even when we see that from the very first days of the design movement in 1984, with Thaxton et al, it was highlighted that the empirical evidence for biology warrants inference to intelligence, not to identifying who the intelligence is, and/or whether it is within or beyond the cosmos. There is a side to the design inference that does point beyond our cosmos, cosmological, but we can observe a very studied, quiet tip-toeing by Hole's graveyard on that. Sir Fred's duppy, leaning on the fence and shaking his head, says: BOO! EEEEEEEEEEEEEEK! . . . ) KFkairosfocus
January 22, 2012, 10:45 PM PDT
F/N: Lotteries are designed to be winnable within the likely population of purchasers; i.e. without winners no-one would think they have a chance (even in the power-ball case where the idea is that the "growing un-won prize" will pull in more and more of the gullible). In short, anyone who makes a lottery that is the equivalent of a needle in a haystack search is incompetent, and we had better believe that the designers of such are anything but incompetent, this is a case of the dictum in finance, that if you are naive or thoughtless about ANYTHING, there will be someone out there more than willing to take your money. The problem GP identified with a search resources gap is real. And the proper comparison is not lotteries -- designed to be of winnable scope of search -- but needles in haystacks, or for code-bearing strings as we see with dNA or as is implied by protein AA chains, monkeys at keyboards. (This is a case of pointing out a real disanalogy, and a more correct analogy or two! Analogies that as it happens to be, include one that was actually developed in the context of scientific analysis where this sort of probability threshold challenge first emerged: statistical mechanics, as Borel reminds us.) The key issue is that once we have a string of 500 or more yes/no questions to specify the config of particular components to get something to work [relevant and obvious for string data structures such as DNA and AA chains, implied for node and arc breakdowns of complex organised objects like ATP synthase or the flagellum or kinesin walking trucks on the microtubule highways of the cell], we have swamped the atomic resources of our solar system -- our effective universe for chemical level interaction. Under those needle in haystack conditions, a small scope of blind search -- 1 in 10^48 or less -- is going to be so small that the only reasonable expected "find" will be straw, the bulk of the space of possibilities. That is, non-functional configs. Again, the only empirically warranted means of getting to FSCI is intelligent design. We are -- on a priori materialism -- being invited to ignore that fact, and discard basic inductive inference, then resort to speculating on how it "must" have been in the past against all odds. But the odds are trying to tell us something. Namely, if you see complex digital code that effects algorithms and data structures for complex functional processes, or complex assemblies that make motors and other machines, then just what is it that in your experience best explains it. Then too, on the scope of the observed cosmos, what sort of causal process is a credible explanation of what you are seeing, why? I am also beginning to think that we are dealing with a generation that is too hands off of technologies to have it in the bones as to just how complex and specific even something as mundane as a bolt and nut or a gear are, much less something like a computer or cell phone motherboard, or a motor. So, we see a willingness to allow simplistic simulations to mislead us -- I think here especially about that "voila poof" case of a clock self assembling itself our of gears that just happened to mesh right via a genetic algorithm. I don't think that the creators of such understood just how hard it is to centre a shaft and axle, or how gears have to be keyed or otherwise locked to the shaft, or just how complex gearing is, and what it takes to get gearing that meshes right, across a gear chain. Or the effects of thermal expansivity, or a lot of other things. 
There is a reason why electrical, electronic and mechanical designers and programmers are paid what they are paid. The fact that, thanks to the magic of mass manufacture, we can get their products cheaply may be breeding the familiarity that leads to contempt. And don't get me started on code interweaving, which is known to be the case in DNA. I never even bothered to try that, giving most fervent thanks instead that by my time we had cheap 2716 EPROMs that held all of 2 k bytes, and that we could use 74LS245 8-bit bidirectional bus interfaces and 2 k byte RAMs to build emulators! (Do you know how much control can fit into 1/2 - 1 k byte, once you are doing machine language programming? But to do as much in 128 bytes, i.e. 1 k BIT, is a very different story! BTW, this is one reason why I think that the Cambridge initiative to put the Raspberry Pi out there as a US$ 25 - 35 PC on a credit-card-sized board is so important: it re-opens the world of hands-on electronics tinkering at a new level. That's why I am feeling my way to a new generation of intro tech courses that build on PC hosts and target systems that interface to the real world. The mechatronics age is in front of us, but too often we are blind to it; not to mention its implications.) So, I think the best advice I can give is: go get an el cheapo spinning reel or the like, take it apart, see how it works, then put it together again. Think about what a worm gear or helically cut gearing are going to require, and what has to go into specifying the integrated components. And in case you wonder about the wobble of such a reel, then know what you are paying for when you go for reels that cost ten times as much. KF
kairosfocus
January 22, 2012, 10:32 PM PDT
That's it. That's exactly my point. "Function", as you say, "defies objective formalization". It's perfectly true. Function is related to the conscious experience of purpose, and all conscious experience in essence "defies objective formalization". That's exactly the point I have discussed many times with Elizabeth, and that she vehemently denies.
Well, for the record, I vehemently deny it, too. I'm first just trying to understand your position, and the ramifications of that position. So your saying "That's exactly my point" reflects a level of understanding on my part about your position, which is great. But I see no reason at all why "function" would be impossible in principle to formalize and systematize in quantitative terms. If nothing else, such a position on my part (and thus I suggest on your part) would produce the response "function: ur doin it rong", perhaps with a lolcat peering at a flowchart as an attending image. I'm not sure how else to characterize "defies objective formalization" on your part except to understand it as some reflection of a romantic superstition. It doesn't matter how I understand its internals, though, so much as it matters that I understand why you refuse to provide any formalization, or even quantitative rules for what you refer to as a "metric". I get that now, thanks. I haven't read that exchange with Elizabeth, but maybe I'll profit from going back to find and read it.
And so? What shall we do? We build our science suppressing the concept of function, because it “defies objective formalization”?
Of course not. But neither should we be satisfied with defeatist superstitions and magical thinking. If you can describe something rigorously, formally, you can "demystify" it, and systematize it, and remove it from the realm of credulous intuitions about our own consciousness. Information theory -- the vanilla kind -- is a good example. We came quite close to the present day, historically, without a numerical model for channel-based information. It wasn't until Boltzmann and others developed the mathematics of statistical mechanics, in the late 19th century, that the groundwork existed (Shannon rediscovered essentially the same formal structure in a different context several decades later). Before that, we thought we might "recognize" information, us magical humans, but we were ignorant in that respect; we did not understand the mechanics of entropy, disclosure and statistical analysis of symbol sets. The answer wasn't to resign ourselves to "recognizing information when we consciously recognize it", or to agree by consensus that some symbol set had "a lot" of information, and some other set "seems to have less". It was a hard problem, but it's one we've made progress on, and have systematized and formalized in ways that do NOT rely on human recognition. We can quantify Shannon's H with a computer program now, easily. As a negative example, by the way, see Werner Gitt's hapless attempts to define the "five levels of information", including "apobetics". As goofy as that guy's ideas are, though, they are at least laudable as an attempt at systematizing information in a way that captures intent, meaning and purpose, if a failed one.

FSCI, insofar as it retreats from the challenge of systematizing and formalizing its concepts in quantitative terms, then, has a very bleak future. Its use is just as an informal rhetorical tool, applicable for casual debates in settings like this one, so far as I can see. For my part, I don't suppose 'function' is magic, or intractable in principle. It's a hard, complex problem, and one we aren't ready to tackle directly, and must build the knowledge infrastructure for first, before we can expect major headway to be made. Similar to the way the infrastructure had to be developed over decades that enabled us to measure, watch, and manipulate neurons and synapses and related machinery in the human brain, as a predicate for making headway on the hard problems of cognition. I know you didn't develop FSCI for me, or any critic, and are not the least disturbed by the critics' shrug, but as I understand it in light of what you've said here, FSCI is hard pressed to command more than a shrug -- meh. It's not an interesting tool that can take us anywhere on the key questions, scientifically.
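[Editor's note: a minimal sketch of the point that Shannon's H is mechanically computable, with no human "recognition" in the loop. The example strings are arbitrary, not anything from this thread.]

from collections import Counter
from math import log2

def shannon_entropy(symbols):
    # Average information per symbol, in bits, estimated from observed frequencies.
    counts = Counter(symbols)
    n = len(symbols)
    return -sum((c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("the quick brown fox jumps over the lazy dog"))  # several bits per symbol
print(shannon_entropy("aaaaaaaaaaaaaaaaaaaa"))                         # 0.0: a single repeated symbol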
We build our science suppressing the concept of "meaning", because it "defies objective formalization"?
Science has an epistemology to protect and preserve, else it has no knowledge to offer at all. So, it's not a matter of suppression as *distaste* for the subject, but the demand that anything integrated into the epistemology doesn't NUKE that epistemology, as this would. A divine foot in the door, so to speak, threatening all the models that integrate it or acknowledge it. "Meaning" is suppressed conceptually in precisely the same sense "information metrics" were suppressed conceptually prior to Boltzmann, Thomson, and Shannon and Chaitin, et al. It wasn't coherent enough to be integrated into scientific models, prior to that, so it's non grata for the very best of reasons. If "meaning" or "function" is similarly inchoate (and it is, at this point in time), then, if we care about the integrity of our knowledge and models, it is similarly something we can talk about, but must shun and eschew as an ingredient in our science. It's still voodoo. It may not always be thus, and given the march of science, I expect it to be "demystified" at some point in the future in such a way that it can actually be implemented as a set of operating metrics for our models.
We build our science suppressing the concept of "objective formalization", because it "defies objective formalization"? How do you define "formalization" without the concepts of meaning, of cause and effect, and many others, that require a conscious being to be understood? Absolute nonsense!
Well, developing a formalized model of consciousness would be a very elegant means of doing that. I won't say that's the only means to do it, but I think the two are tightly related as physical phenomena; to understand 'meaning' and 'function' in the anthropomorphic sense is to understand consciousness to a significant depth, and vice versa. That's the great thing about eschewing those superstitions, though: the problems are hard, daunting, but there is no temptation to cop out and satisfy oneself with lazy retreats toward appeals to a cosmic designer or a "supernatural mind", etc. It's all natural phenomena, exquisitely challenging natural phenomena, to model and reverse engineer.
eigenstate
January 22, 2012, 10:07 PM PDT
@gpuccio#10
This kind of reasoning is absolute nonsense, and it is really sad for me that intelligent persons like you and Elizabeth go on defending nonsense. However, sad or not sad, I go on. The role of the conscious observer is very simple: there is a protein that has an objective function, but that function can only be described by a conscious observer. Why? Because only conscious observers recognize and define function. An algorithm can perfectly recognize some specific function, after a conscious observer has programmed, directly or indirectly, the properties of that particular function into the algorithm. But not before.
OK, this is good because it teases out a further aspect of the problem, here. Let's just stipulate for the moment that only conscious observers recognize 'function'. That's highly problematic in itself in my view, as if you can understand what enables recognition you don't necessarily need a conscious being to do it, unless you also want to stipulate that a non-biological machine which has trained neural nets (or some other mechanism) to recognize 'function' counts as a conscious observer as well. But provisionally, let's understand that's the case: conscious humans are needed to recognize function initially. You allow (at least!) that we may define some of those functions in a rigorous, formal way such that we can implement algorithms that perform those recognitions, *after* they have been pioneered by conscious humans. Very well. If that's true (as we are stipulating), then necessarily there is no "meta-recognition" for functions, which is to say there is no way to reduce function recognition to an algorithm (or maybe it's better to call it 'automata' there, since in practice such recognitions would not be algorithmic in the strict sense, but rather machine-learned, through neural nets and the like). If there is no algorithm for meta-recognition of functions, in the way humans can meta-recognize, there necessarily CANNOT BE A WAY to apply an objective test for function. It can only be accomplished by the "magic" of conscious beings. That's problematic on its face, scientifically (how much science do you know that operates on "can't define it quantitatively, but I know it when I see it, trust me!"?), but the real kicker is that if all the above is true, "designedness" is not a discrete physical property that can be derived in an objective, systematic way. I am taking care to use "objective, systematic", in order to distinguish from just "objective" being taken to mean "not dependent on [subjective] mind or will". "Objective, systematic" is used to point to a true metric, a numerically-based, measurable and quantifiable set of attributes that inhere in an OCP itself. That's damning for ID, if it's true. The whole enterprise is based on the conjecture that "designedness" obtains in objectively determinable ways, no? And particularly as it pertains to OCPs? If yes, then your conditions make that impossible. ID can garner objective consensus on what objects it has observed empirically to be designed, but it can't weigh in on OCPs in any objective, systematic way. It can only hope that humans agree to "recognize" OCPs as intelligently designed or not, to prevent further controversy. But so long as there is not unanimous recognition, ID is hopelessly impotent on these terms. It's no more than an "Is too, is not" back-and-forth fiasco, then. And that forgoes a great and valuable feature of science: "Let's let the numbers decide" and "the scores from the competing predictions should settle this one for us" reduce "human recognition" to simply evaluating the scores and results, and this is awesomeness. If FSCI is fundamentally hitched to "I know it when I see it, based on what other things I've seen designed", ID has very humble ambitions, and prospects, indeed. But maybe you do suppose that recognition can be systematized, and functionality quantified? If so, then it's wide open in terms of what ID might hope to accomplish.
eigenstate
January 22, 2012, 09:29 PM PDT
The point is, it is not important how we define dFSCI: the important thing is that it works. There are conceptual reasons why we define dFSCI as we do, but in the end they would be of no value if dFSCI did not work. It does work, and we can verify that on all the objects for which we have a reasonable certainty that they were designed or not. So, again, it is an empirical procedure. And it works.
How dFSCI is developed and determined is crucial. Being empirically consistent in terms of just "tagging" objects as designed does NOT, in itself, confer an epistemic basis for rendering judgment on objects of controversial provenance (like your example of biological information). If you can develop a metric which measures something *intrinsic* in the object, that is epistemologically valuable in assessing objects of controversial provenance (OCPs, here forward) in a way that empirical tagging with "designed" of various objects we can see being designed is not. That's because if your FSCI is intrinsic to the object, OCPs are quite possibly amenable to detection or measurement of this metric, and therefore controversial provenances might be adjudicated, and intelligently designed objects might be argued for reasonably where this intrinsic metric is determined. (Note that this means that for a given object, the provenance of the configuration and genesis of the object is unknown, and your FSCI -- whatever this other metric would be called, because it's not what you are describing -- would be determined just from examination and testing of the subject, without access to a design/non-design view of how it got that way.) But as it is (correct me if I'm wrong), it's just an external assessment -- something we might keep in a list of "designed objects". Epistemically, when you are presented with an OCP, FSCI can't help you, as the list has no information about its provenance. The "metric" (and 'metric' here is really a misnomer, as this becomes more clear) does not obtain from the object itself, but only from observing its provenance.
I am happy that the reaction acceleration is not a problem for you. So, we need conscious recognition to find something that is an objective physical process. Why? It is very simple. All science is based on conscious recognition of objective physical processes. Who do you think recognized gravity, and found the laws that explained what he had recognized? An algorithmic process? Have you ever heard of Newton?
No, it's not. When I write software (and while I've done this with genetic algorithms in this field, I'm not talking about that in this case, but "hand-designed" heuristics we code deliberately) that analyzes and detects network intrusion patterns, I define the criteria, and a huge set of machines monitors large streams of traffic in doing its analysis. That's not to say that humans aren't involved in the process -- I and my engineers wrote the code that makes this automation happen -- but the surveying, classification, identification and testing all happen by machine. It's just very complicated rule sets being applied, and Bayesian and other statistical matchings applied to the results, to produce new knowledge, new empirically derived and tested information. And while it's true that humans engineer the process, science is ever working to find ways to suppress subjectivity and bias in its models, which makes instrumentation and automation a powerful asset -- machines and algorithms being the extension of human inquiries, doing lots of legwork, and doing it as 'dumb machines', meaning they operate as they are designed to operate and do not color their operations with their religious beliefs or worldviews.

Even so, we depend on recognizing processes, or more precisely observing processes that we connect with others that match in our experience. This is a limiting factor, though, for the very reason you struggle with applying FSCI -- it's not amenable to formalization. That's OK -- the chain has to end somewhere, else infinite regress -- but it's a "bug", not a "feature", to say "we depend on just recognizing". That's what we do because we must and have no other way. Where we can develop and apply formal, objective rules and algorithms as part of a discrete model, a model that doesn't depend on humans "recognizing" except at the top level where we observe the performance of the model itself, we can "scale", as we would say in the software world. The model is robust, then, and dependable, because it DOESN'T rely on the weakness of human recognition, and all the error and imprecision that goes with that. "Scale" is important there, because the problem is not just error and imprecision. If you know Shannon or Kolmogorov information metrics, you will understand that these metrics aren't just hugely valuable because they aren't subject to human recognition errors. They are valuable because, without human recognition, the metrics are completely general, and they scale across domains elegantly. All you need is symbol streams and statistical ensembles to evaluate, and it all works. (d)FSCI isn't just lacking because it's error-prone and human-bound (it thwarts robust, discrete models), but because it cannot be generalized and scaled. One can't even measure a contrived "textbook example", let alone point at any desired object of interest for algorithmic inspection. More in a bit. This is interesting and substantive stuff to critique, thanks.
eigenstate
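[Editor's note: to make the "completely general, scales across domains" claim concrete, here is a hedged sketch that applies one domain-agnostic measure, zlib-compressed length, as a crude and admittedly imperfect stand-in for Kolmogorov-style descriptive complexity, to three arbitrary byte streams. This is an illustration of the general idea, not anything the commenter specified.]

import os
import zlib

def compressed_bits(data: bytes) -> int:
    # Length of the zlib-compressed stream, in bits; a rough proxy only,
    # since true Kolmogorov complexity is uncomputable.
    return 8 * len(zlib.compress(data, 9))

english = b"the quick brown fox jumps over the lazy dog " * 20
repeated = b"A" * len(english)
random_bytes = os.urandom(len(english))

for label, stream in (("english", english), ("repeated", repeated), ("random", random_bytes)):
    print(label, len(stream), "raw bytes ->", compressed_bits(stream), "compressed bits")
# Expected ordering: repeated << english << random, with no human judgment in the loop.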
January 22, 2012, 08:56 PM PDT
@gpuccio,
Now, that's obviously not fair. You are evading my comment. You are not discussing the lottery example at all, and shifting to the deck of cards. But my comments on the deck of cards were not those you quoted. So, please, answer my comment on the lottery example, or just admit that you were wrong in making that argument. That would be the only fair behaviour possible.
A lottery is no different from a deck of cards -- it's the same example! I don't know how many numbers one must pick for "Powerball", the big lottery in my area, as I don't play the lottery (Blackjack is even too steep in terms of odds against, in my view!), but it's several numbers, like 6 separate numbers between 1 and 99. All a card deck is is a number between 1 and 52, over 104 instances (without duplicates, which I believe distinguishes it from Powerball). If you want to talk about a lottery with a phase space for the tickets of 10^166 rather than a deck of 104 cards, I'm fine with that. We are just dealing with segmented ensembles from a large phase space. Cards or tickets, the underlying maths and concepts don't change. In any case, I'm happy to stick with lottery tickets, if you prefer.
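[Editor's note: a small sketch of the sizes involved. The "6 of 99" lottery format is the commenter's own guess, used here for illustration only, and the 104 cards are treated as all distinct.]

from math import comb, factorial, log2, log10

deck_orderings = factorial(104)        # orderings of a 104-card double deck, all cards treated as distinct
print(round(log10(deck_orderings)))    # ~166, i.e. roughly 10^166 orderings
print(round(log2(deck_orderings)))     # ~551 bits

tickets = comb(99, 6)                  # the guessed "6 numbers between 1 and 99" lottery, for comparison
print(tickets, "tickets, ~", round(log2(tickets)), "bits")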
I would say this is your main argument. Probably the only argument. And it is completely wrong. First of all, you (and Elizabeth) are still not understanding my definition of dFSCI and my procedure. You still equivocate its nature, its purpose, and its power. dFSCI is an empirical concept. The reasoning goes this way, in order:
I think that's right, and said as much in that post -- that's the key point of objection to (d)FSCI. Thanks for recounting it again, let's see how this goes:
a) We look at things that we know are designed (directly, because we have evidence of the design process). And we look for some formal property that can help us infer design when we do not have that direct evidence.
Right. Got that, and saw (and see) that that is problematic in its own right. It's not a problem to grant what we observe directly to be designed by intelligence; it's that this is 99.9% of the substance of the question. And it's decided empirically, not algorithmically, and that's important, because it means you have observations of design, but have not captured or understood what, if anything, remains in the design(ed) that unambiguously signals intelligent design.
b) We define dFSCI as such a property.
I don't see this as a "property" at all. It's an observed instance of design. "Property" is precisely what you DO NOT HAVE. You have things that are known to be designed, but you know that because you actually observed the design. You HAVE NOT identified any property intrinsic to the design(ed) that obtains independent of simply seeing the design process at work. If you had an actual property, you could look for it, and mechanically determine, without observing any contributing designer or design process, whether it was designed, predictably. I think this is the core of your confusion: conflating a property of the designed thing with your knowledge (external to the actual designed thing) of the design process that produced it. These are not the same thing, and are profoundly different aspects of the phenomena in question. A designed thing may bear no detectable features of design, AS PART OF THE DESIGN, for example. Even when there is no such goal in the design, you have not got a property of the object defined here, but have instead put it into a list of things you know are designed. You cannot tell us, objectively, what makes it designed APART from your observations of design.
c) We verify that on all sets of objects of which we know whether they were designed or not (human artifacts, or certainly non-designed objects) the evaluation of dFSCI gives no false positives, although it gives a lot of false negatives.
But so far, you don't have anything that is intrinsic to the object itself. You only have "was produced by a designer". That is NOT an equivalent pair of statements: a) This was produced by a designer, and b) this thing has the property of designedness. You appear to have confused a) and b) thoroughly. This is a HUGE problem for your argument if that's the case.
d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information).
That doesn't work at all. As you have it here (and I understand you may need to clarify or expand it more), it is utterly useless and impotent for that purpose. Once you don't know beforehand the provenance of the object or phenomenon you are looking at, (d)FSCI is of absolutely no value to you, or me, or anyone. It's probably worth less than nothing, because it apparently inclines the bearer to a false sense of knowledge, based on the confusion discussed above. Gotta get on an airplane, will continue anon, thanks.
eigenstate
January 22, 2012, 08:21 PM PDT
Onlookers: Kindly look at the sketch of a D'Arsonval galvanometer movement in the original post. Think about how the components must be specifically selected and configured if this device is to have a proportional response to the current in it, being a specialised very low power motor restrained by a very special spiral spring. Think about a bag of the parts for such a meter, and shake it up. How long do you think you would have to shake before you would get a working instrument? Now, convert the 10^57 atoms of our solar system into bags with parts like that. Shake them up for 10^17 s; will you have reason to believe any one of them will form a working galvanometer?

Now, consider the self-replicating facility required for a cell to work, in light of the requisites of a von Neumann kinematic self-replicator, which is what the cell implements. Notice, this is far more complex than a galvanometer. Do you see why, conveniently, the origin of life is locked out of the discussions by the darwinists? But, until you have a metabolising, self-replicating entity with coded, stored prescriptive info to build a fresh unit, you do not have life-function. The evidence of actual cells is that this credibly requires north of 100 k bits of info, 100 times as many bits as will exhaust the capacity of the observed cosmos to search. Every bit beyond 1000 DOUBLES the number of possibilities. To get to novel body plans or novel organs like the avian lung or wing, we routinely will be dealing with millions to hundreds of millions of bits of further information, based on genome sizes. Think about how easy it is to be fatally ill or deformed if the parts are even slightly wrong. In short, the issue of isolated islands of function T is an empirical reality, not a begged question. And the notion that, because people look at function, recognise it and reason about it, consciousness is involved and so we can dismiss it, is frankly a priori, ideology-serving rhetoric in the teeth of abundant experience. But, a man convinced against his will will be of the same opinion still.

What we need to do is to look on and ask if these objectors compose their posts by hiring monkeys to peck away at random. Or fix cars by grabbing and shaking around parts at random, or the like. It is only when the sheer bankruptcy and absurdity of the positions one has to take to reject the implications of FSCI are seen and recognised that those who cling to such positions will see that they are only exposing themselves. As to the notion that FSCI is somehow already question-begging, let me put it this way: we do know of things where function can be reached by chance-based random walks; indeed we have repeatedly pointed out the case of chance-based text of up to 24 ASCII characters. There is absolutely no hard roadblock to FSCI by chance. But, sometimes, when something is sufficiently isolated in the field of possibilities, such a search strategy will be so unlikely to succeed that it is operationally impossible. Darwinian-type changes can and do account for cases of adapting functional body plans, but the challenge is to explain the origin of those plans. That is what we just are not seeing, on evidence from real observations. The FSCI challenge and the needle in the haystack or infinite monkeys analysis tell us why. But, if we are sufficiently determined not to accept that, we can always make up objections and dismissals. But, to make an empirical demonstration to the contrary is a different matter.
Remember, too, the only -- and routinely -- observed source of FSCI. (Let the objectors provide a genuine counter-example, if they can. The latest try was geoglyphs, and just before that, magical computer-simulation watches that did not have to face the real-world problems of getting correct parts made that work in three dimensions, with real materials, with all the headaches implied. As in, a gear that works -- functions -- as a part of a clock (or a fishing reel -- just a few sand grains are enough to show why) is NOT a simple object.) KF
kairosfocus
January 22, 2012, 02:33 PM PDT
I concur again. I note that there seems to be a problem with accepting an empirically, massively confirmed fact: when we have a large enough list of Y/N questions to specify a system, each of these doubles the number of possibilities. So, when we get to about 500, we have 3 * 10^150 possibilities, where the atomic resources of our solar system cannot go through more than 10^102 states to date, with its 10^57 atoms, most of which (about 98%) are in the sun. By far and away, most of the sets of Y/N answers will not do the job in view that is relevant to our interest. So, if we are at most able to sample only a very small fraction, the problem is to arrive at shores of function where whatever hill-climbing, improving algorithm can kick in. But if you refuse to accept that specific requirements of function are very restrictive, you lock out seeing this. So we see the real begged question, relative to a vast database of experience. I wonder, have these folks ever had to, say, wire together a motherboard, soldering up the connexions, and get the thing to work? Have they ever had to design, then build and get to work, a complex electronic or mechanical system? Did they ever see what a little salt and sand in the wrong place can do to a fishing reel? Did they ever pull one apart and have to put it back together again, or do the same with a bicycle, or a car or the like? Did they ever build a reasonably complex bit of furniture? Did they ever draw an accurately representative facial portrait? Or carve an accurate portrait as a statue? My distinct impression is that we are talking to inexperience of the reality of functionally specific complex organisation and what it takes to get there. KF
kairosfocus
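[Editor's note: a minimal sketch of the arithmetic in the comment above, taking the figures as stated there (500 Y/N questions; roughly 10^102 Planck-time states for the solar system's 10^57 atoms).]

from math import log10

bits = 500
log10_W = bits * log10(2)       # log10 of the number of 500-bit configurations
print(log10_W)                   # ~150.5, i.e. W is about 3 * 10^150

log10_states = 102               # ~10^102 states, the figure used in the comment
print(log10_W - log10_states)    # ~48.5: the space outruns the available states by a factor of ~3 * 10^48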
January 22, 2012, 02:01 PM PDT
kf, you write:
m: In 2005, Dembski provided a fairly complex formula, that we can quote and simplify: χ = – log2[10^120 · φ_S(T) · P(T|H)]. χ is "chi" and φ is "phi"
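[Editor's note: for orientation, a minimal sketch of the arithmetic of the quoted 2005 expression. φ_S(T) and P(T|H) are inputs that have to be estimated separately (how to estimate P(T|H) is exactly what is asked next); the values below are placeholders only.]

from math import log2

def chi(phi_S_T: float, P_T_given_H: float) -> float:
    # chi = -log2[10^120 * phi_S(T) * P(T|H)], per the quoted formula.
    return -log2(1e120 * phi_S_T * P_T_given_H)

# Placeholder inputs: phi_S(T) = 10^20, P(T|H) = 10^-300.
print(chi(1e20, 1e-300))   # ~531.5; positive only because P(T|H) was assumed to be that small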
Can you explain how you estimate P(T|H) for a given observed pattern?
Elizabeth Liddle
January 22, 2012, 02:00 PM PDT
First of all, you (and Elizabeth) are still not understanding my definition of dFSCI and my procedure. You still equivocate its nature, its purpose, and its power.
I don't think we "equivocate", gpuccio, unless that word means something else in Italian. But I certainly have not understood it if what you say below is what it means:
dFSCI is an empirical concept. The reasoning goes this way, in order: a) We look at things that we know are designed (directly, because we have evidence of the design process). And we look for some formal property that can help us infer design when we do not have that direct evidence. b) We define dFSCI as such a property. c) We verify that on all sets of objects of which we know whether they were designed or not (human artifacts, or certainly non-designed objects) the evaluation of dFSCI gives no false positives, although it gives a lot of false negatives.
But how do we execute this "evaluation of dFSCI"?
d) Having verified the empirical utility of dFSCI, we apply it to objects whose origin is controversial (biological information). The point is, it is not important how we define dFSCI: the important thing is that it works. There are conceptual reasons why we define dFSCI as we do, but in the end they would be of no value if dFSCI did not work. It does work, and we can verify that on all the objects for which we have a reasonable certainty that they were designed or not. So, again, it is an empirical procedure. And it works.
But you still haven't told us how to evaluate it! Or is there a link I have missed? If so, could you link to the actual formula? Or, if it is not an actual numerical value, then can you explain what it actually is?
Elizabeth Liddle
January 22, 2012, 01:49 PM PDT
GP: I concur. I add that part of this is implicit logical positivism, which does not know that it is self-referentially incoherent and bankrupt. Multiply by any number of vicious and unrecognised infinite regresses and self-referential loops. Reductio ad absurdum to the max. KF
kairosfocus
January 22, 2012, 01:47 PM PDT
eigenstate: You say:

There is no such thing as a "purely random system". "System" implies structure, constraint, rule, and process. But that's not just being pedantic on casual speaking on your part, it's the core problem here. The AA sequence is not thought to be emergent in a random way.

You are not only being pedantic here, you are being illogical! It is one thing to say that "there is no such thing as a purely random system". It is quite another to state that "The AA sequence is not thought to be emergent in a random way". The second is not a consequence of the first, and yet you seem to connect them! Indeed, you even say that "it's the core problem here". Wow! So, I will treat the two things separately, because separate they are.

Of course there are random systems. I have debated the thing in detail with Elizabeth. A random system is a physical system whose behaviour we can best describe by a probability distribution. It's very simple. The tossing of a die is a random system. You have no way to realistically describe each single result through necessity laws (although each single result is certainly deterministic), but still you can describe well enough the general behaviour of the system through a probability distribution. OK?

Then you say that the sequence of AAs in a functional protein "is not thought to be emergent in a random way". Well, thanks for the information about neo-darwinism, although I would say that it is not really correct: in neo-darwinism, the sequence is "thought to be emergent in a random way", but gradually and with the help of NS. That's why, in modelling and analyzing the proposed algorithm of neo-darwinism (as I have done in the second thread I linked, this one): https://uncommondescent.com/intelligent-design/evolutionist-youre-misrepresenting-natural-selection/comment-page-2/#comment-413684 (posts 34 and following) I have separately analyzed the random sources of variation, and the necessity mechanisms of NS, in some detail, I would say. So, it's not that I am forgetting NS: I just need a modelling of the RV component first, and dFSCI is necessary to analyze that model. So, are you denying that in the neo-darwinian algorithm, all new information is thought to emerge in a random way? It can be gradual, it can afterwards be fixed or expanded by NS, but the only sources of genomic variation in the algorithm are random, and can be described and modelled only through some probability distribution. If you don't agree with that, please explain clearly why.

There's a fundamental difference between one-time "tornado in a junkyard" sampling of a large symbol set from a huge phase space, and the progressive sampling of that same large symbol set as the result of a cumulative iteration that incorporates positive and negative feedback loops in its iteration.

I am well aware of that. I will consider any "cumulative iteration that incorporates positive and negative feedback loops in its iteration" in my final model. Can you propose some such explicit model for the origin of protein domains?

So the probability of the sequence is NOT a matter of 1 shot out of n where n is vast.

Yes, it is, where only RV is acting, and where no model of necessity has been documented, shown, or simply proposed.
If, in my card deck example, we keep after each shuffle the highest two cards we find out of the 104 (per poker rules, say), and set them aside as "fixed" and continue to shuffle the remaining cards, and repeat, we very quickly arrive at a very powerful and rare (versus the 104-card phase space) deck after just a few iterations. That's brutally quick as an "iterative cycle", but it should convey the point, and the problem with "tornado in a junkyard" type probability assignments.

Now, this is really "changing the cards"! If I remember well, your deck of cards example was aimed at showing that the dFSCI metric was stupid, and could not evaluate the probability of a functional result arising in a random way. Now, why are you shifting to algorithms and iterative cycles? That was not the original point at all. Do you remember? Your original point was that your random sequence was "information, organization, complex, specific, and carried out a job", and therefore it was indistinguishable from a functional sequence, and therefore dFSCI, or similar concepts, was a fraud. That was your original concept. Iteration cycles, or algorithms, were in no way part of that. To that I have answered in detail. Again, either show where I am wrong in that specific answer, or just admit that your example of the deck of cards was wrong. I quote your original conclusion:

"So, as far as I can tell, a random shuffling of two decks of cards qualifies as a "designed" configuration, per your own criteria. The only complaint I can anticipate here is that you don't approve of the "jobs" assigned to this configuration. If that's the case, then I will happily rest my case on this point, because as soon as you are reduced to arguing about the telic intent of the phenomenon as a pre-condition for your metric, your metric becomes useless, a meaningless distraction. You've already decided what your metric hoped to address in order for you to even establish the parameters for your metric."

As you can see, iteration cycles and algorithms have nothing to do with what you said. I showed clearly: a) that the definition of a function is objective and measurable, although made by an observer, and can therefore be used to measure properties relative to that specific function; b) that there is no problem of "arguing about the telic intent of the phenomenon as a pre-condition for my metric": any explicitly defined function can give a specific measure of complexity, so there is no subjective restriction on what function can be defined; a "post hoc" definition is perfectly legitimate, because it describes an objective property, verifiable in the lab; and the recognition of a function in no way assumes that the function is designed. Therefore, I believe that I have answered all your objections on this particular point.

This is, again, where the question-begging obtains. If you are going to assert that it is only "functionally specified" if it's the product of intelligent choices or a will toward some conscious goal, then (d)FSCI *is* a ruse as a metric, not a metric toward investigating design, but a means of attaching post-hoc numbers to a pre-determined design verdict.

As I have clarified, I have never said such a thing. The definition of the function in itself in no way affirms design. It is only the empirical association of complex functions (dFSCI) with design that is the basis for the empirical use of dFSCI.

Which just demands a formalism around "functionally specific"? That seems to be the key to what you are saying.
No special formalism is necessary. If we agree that a function is objectively there, we just measure its complexity. What formalism are you speaking of? You agreed that the enzymatic function I described was an objective fact, verifiable and measurable in the lab. I just need to measure the probability of that function emerging in a random way. What formalism do I need to do that? Please, just explain why it should be wrong to measure the informational complexity of a function that is objectively there.

Can you point me to some symbolic calculus that will provide some objective measurement of a candidate phenomenon's "functional specificity"?

Why should I? I have defined the function for an enzyme, and you agree about that. You have proposed some definitions of functions for your random sequence. I have accepted them. But I have simply shown that their functional complexity is almost 0, because all random sequences can implement the functions you proposed. The concept is really simple (except for darwinists): a defined function divides the search space into two subsets: the sequences that implement that function as defined, and those that do not. That is inevitable, and very simple. The function of having a specific enzymatic activity divides the set of AA sequences of a certain length into two subsets: those that have that activity, and those that do not have it. The ratio of the numerosity of the first set to the numerosity of the search space gives the probability of the function arising randomly; its negative log2 is the dFSCI, in bits, for that function. The same is true for your functional definitions. If you define the function as "being useful for encrypting data", then any random sequence will have the same utility. dFSCI is practically absent, because the functional space is almost equal to the search space (although I am not an encryption expert, I am assuming here that ordered sequences are less useful for that). It's as simple as that: there is no subjectivity here. Any defined function is valid, but each defined function will have a specific dFSCI. And defining a function in no way implies design.

If you cannot, and I think you cannot, else you'd have provided that in lieu of the requirement of a conscious observer who "recognizes" functional specificity, then I think my case is made that you are simply begging the question of design in all of this, and (d)FSCI is irrelevant to the question, and only a means for discussing what you've already determined to be designed by other (intuitive) means.

Your case is not made at all, as shown in detail in my arguments. And again, you are wrong: the initial definition of function does not imply design. There is no question-begging, and there is no circularity.

This renders dFSCI completely impotent on the question of design, then! That requirement — that a "conscious observer must recognize and define a function in the digital sequence" — you've already passed the point where dFSCI is possibly useful for investigation. Never mind that the requirement is a non-starter from a methodological standpoint – "recognize" and "defined" and "function" are not objectively defined here (consider what you'd have to do to define "function" in formal terms that could be algorithmically evaluated!), even if that were not a problem, it's too late.

Always the same errors, repeated again.

dFSCI, per what you are saying here, cannot be anything more than a semi-technical framework for discussing already-determined design decisions.

No. There is no "already-determined design decision" at all in the procedure. As I have shown.
And even then, you have a "Sophie's Choice", so to speak, in terms of how you define "function". Either you make it general and consistent, in which case it doesn't rule out natural, impersonal design processes (i.e. mechanisms materialist theories support), or you define 'functional' in a subjective and self-serving way, gerrymandering the term in such a way as to admit those patterns that you suppose (for other reasons) are intelligently designed, and to exclude those (for other reasons) which you suppose are not.

Would you like to motivate what you are saying? I have defined the function of some specific enzyme as being able to accelerate some specific biochemical reaction (of course we have to specify which enzyme, which reaction, and the minimal acceleration that must be observable in standard lab conditions). What could be more "objective" than that? Could you please explain why that would be "a subjective and self-serving way"? Or "gerrymandering the term in such a way as to admit those patterns that you suppose (for other reasons) are intelligently designed"? And where am I "excluding those (for other reasons) which I suppose are not"? It seems that biologists are doing those strange things all the time, because that's what they do when they compile the "function" section in their protein databases. Is all biology made by a group of gerrymandering IDists? Is that your idea? Moreover, I have excluded absolutely nothing. Do you want to define the function of that enzyme differently? Be my guest. If you give an objective and measurable function, we will measure dFSCI for that definition. Where is the problem? You have given functional definitions for your random sequence of cards, and I have measured dFSCI for those definitions, finding it absent. Where is the problem? What am I excluding?

I think you are close to getting my point. A random sequence is highly functional, just as a random sequence.

True. But all random sequences implement the same function. The functional space is huge.

It's as information rich as a sequence can be, by definition of "random" and "information", which means, for any function which requires information density — encryption security, say — any random string of significant length is highly functional, optimally functional.

Yes. In the same way. All of them are optimally functional. So, the dFSCI is almost zero: the ratio of all random strings that you can use that way to all possible sequences is almost 1. Why is it so difficult for you to understand that point?

If my goal is to secure access to my account and my password is limited to 32 characters, a random string I generate for the full 32 characters is the most efficient design possible.

Yes, and any random string of 32 characters will do. Function, but not complex function.

Sometimes the design goal IS random or stochastic input. Not just for unguessability but for creativity. I feed randomized data sets into my genetic algorithms and neural networks because that is the best intelligent design for the system — that is what yields the optimal creativity and diversity in navigating a search landscape. Anything I would provide as "hand made coaching" is sub-optimal as an input for such a system; if I'm going to "hand craft" inputs, I'm better off matching that with hand-crafted processes that share some knowledge of the non-random aspects of that input.

Perfectly true. I agree with those reflections about design. But what have they to do with our discussion?
When you say "That is not a functional specification. Or, if it is, is a very wide one." I think that signals the core problem. It's only a "wide" specification as a matter of special pleading. It's not "wide" in an algorithmic, objective way. If you think it is, I'd be interested to see the algorithm that supports that conclusion.

It's very simple. Your functional definition was "wide" only because any random sequence would satisfy it. That can certainly be verified algorithmically. So, being "wide" is in no way a crime: the simple consequence is that the value of dFSCI becomes very low.

Which is just to say you are, in my view, smuggling external (and spurious) design criteria into your view of "function" here. This explains why you do not offer an algorithm for determining function — not measuring it but IDENTIFYING it. If you were to try to do so, to add some rigor to the concept, I believe you would have to confront the arbitrary measures you deploy and require for (d)FSCI. If I'm wrong, providing that algorithm would be a big breakthrough for ID, and science in general.

As already said, I am not smuggling anything. Any definition, or recognition, has the same potential value. The only thing required is that the defined function is objectively observable and measurable. What am I "smuggling"? What is "spurious"? And what measure is "arbitrary"? The way to measure the defined function must be explicitly given. Then, the measurement is objectively made for that function and that method of measurement. The only number that should be conventionally accepted as appropriate for the system we are considering is the threshold of dFSCI that allows us to infer design. There is an objective reason for that. The threshold must take into account the real probabilistic resources of the system we are studying. That's why I have proposed a threshold of 150 bits for biological systems on our planet, while Dembski has proposed his UPB of 500 bits for the general design inference in the whole universe. The concept is the same. The systems considered are different. The thresholds proposed (which can be discussed at any moment, and changed at any moment, as is true of any empirical threshold for scientific reasoning) have been proposed because they are reasonably appropriate for the probabilistic resources of the systems considered.
gpuccio
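[Editor's note: a hedged sketch of the ratio-based measure and the 150-bit threshold gpuccio describes above. The set sizes are placeholder assumptions, since estimating how many sequences actually implement a function is precisely the contested step.]

from math import log2

def dFSCI_bits(functional_count: float, search_space: float) -> float:
    # -log2 of the fraction of the search space that implements the defined function.
    return -log2(functional_count / search_space)

THRESHOLD_BITS = 150  # threshold proposed in the comment for biological systems on Earth

# Placeholder example: a 120-AA protein (search space 20^120), assuming 10^30 functional sequences.
bits = dFSCI_bits(1e30, 20.0 ** 120)
print(round(bits), "bits ->", "design inferred" if bits > THRESHOLD_BITS else "no design inference")

# eigenstate's password/encryption case: almost every sequence qualifies, so dFSCI is ~0 bits.
print(round(dFSCI_bits(0.99 * 2.0 ** 256, 2.0 ** 256)), "bits")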
January 22, 2012, 01:44 PM PDT
Re: ES, insisting on using the other thread:
@kf, I HAVE substantiated the judgments of your comments as nonsense, handwaving, and question-begging. Several of the longish posts that took up your items 1-5 went to depth in support of the charge that you are question-begging, for example. I did not offer a curt dismissal like this: "More rubbish". That was the sum of your analysis in reply to one of my longish posts. So clearly you don't have any problems with curt, substance-free and dismissive responses in the first place, and I've invested a good amount of time and effort in presenting points and a rationale that support my assessment. "More rubbish" isn't very persuasive, and comes off as lazy, as a response, so I've been putting some work in there to support the accusations.
Evidently ES has not realised that at this stage I do not give 50c for what would be persuasive to him or his ilk. Nothing, as the problem is not the merits of fact and logic. As it stands, he has plainly refused to attend to a summary of the basic relevant info theory, from Sect A of the always linked, and appendix 1, not to mention the discussion in the IOSE and in this ID foundations series -- all of which predate his objections and which in aggregate are hundreds of pages. So it looks like the issue of non-responsiveness is really on the other foot, doesn't it?

For the record, let it be known that information theory is a well established discipline and information is a well known, quite familiar item, measured in bits or the like. And bits that do specific jobs -- jobs that depend on fairly specific configurations that can be independently and simply described -- can be termed functionally specific bits, like those behind the text of this post. One of the problems here is that formation of concepts and definitions by key examples and unifying general summaries is not well understood or accepted by those addicted to operational definitions as a one-size-fits-all demand -- they don't know that logical positivism went belly up over 50 years ago. Of course, if we applied the regress of demanding an operational definition of an operational definition, and then onward of every key term that emerges, we would soon enough see that operational definitions are not the be-all and end-all of definition or understanding or reasoning. And one can bellow about question-begging till the cows come home; a description of something that is observational reality trumps a regress of operational definitions any day. As in, I am POINTING TO and DESCRIBING that which is observational, and the cases are very important. Definition by key cases and family resemblance, backed up by reasonable description and modelling. And, when one is able to use paradigm cases to show how we assign, say, S = 1, or I to a given value in the Chi_500 expression, that should be enough for a reasonable person. As in, has ES even looked at the way Durston's fits values -- and that means functional bits -- were developed empirically, and how they can then fit in with the Chi_500 expression, or how the case of the geoglyphs in recent days shows how we can use the same expression in a nodes-and-arcs context? The goal of definition, in any case, is clarity, not to exclude what one does not want to face.

Next, there seems to be a problem with the concept of a phase space cut down to a state or configuration space. I suppose this has been going on in physics and related fields for about 150 years, since Gibbs? Cutting to the chase scene, the idea is that when we have something like a string of 500 bits length, there are a great many patterns that are possible, from 000 . . . 0 to 111 . . . 1, i.e. about 3*10^150 of them; that's a case of W. And, as has been pointed out and linked, the Planck-time quantum state resources of our solar system max out at about 10^102 states of its 10^57 or so atoms, so the solar system can sample about 1 in 10^48 of this space, many of these states being dynamically connected. In short, we cannot exhaust the possibilities. And, as sampling theory tells us, a blind sample will only be likely to pick up the bulk of W, not any isolated and definable zone, T. Do such zones, T, exist?
Obviously, when we write text in coherent and correct English responding to the theme of this thread, we are in a zone that we have just defined independently of listing cases E1, E2, E3, etc. from T in W. But the very rules that allow us to write in English, and in so doing respond to the context here, immediately and sharply constrain the acceptable sets of 500 characters. In short, we have here a case in point in reality, so a zone T is a reality, and since T is isolated per the implied constraints, getting there by a chance-dominated blind process will be maximally unlikely on the gamut of our solar system. Exercises have actually been done to randomly generate text, and the result is that up to about 24 ASCII characters is feasible so far. That is a long way from the 72 or so that 500 bits covers. All of this has been pointed out to ES and his ilk, over and over; they are just not paying any attention. When one is talking empirical realities one is not begging questions. And, I note that consistently, when concrete examples have been used to pin down generic explanations, they have been ducked. (I have given simple explanatory summaries, and these were pounced on as question-begging. Sorry, I am summarising realities, and have actually given cases in point, with links. And if you want the underlying first-level analysis, I have pointed to that, at dozens of pages' length too. Indeed, ES was invited to respond to that but has ducked to date. Since that underlying analysis is fairly standard and is the backbone of a major industry or two, it should be clear that it is not on trial; he is.)

Next, we could give specific cases like a car engine or a D'Arsonval moving coil instrument -- a specialised electric motor commonly used in instrumentation. It is absolutely a commonplace result of engineering that to get something like that to work, you have to have the right parts in the right configs, which are quite, quite specific. You will never succeed in making such an instrument by passing the equivalent of a tornado through a junkyard. In short, chance combinations are maximally unlikely to find zones T in W. And sneering dismissals of Sir Fred Hoyle as being fallacious simply show up the want of thought that has gone into the sort of live-donkey-kicking-a-dead-lion involved. Now of course, I have a reason to have picked a motor, as ATP synthase and the bacterial flagellum, as well as kinesin, are all nanotech motors in the living cell. They are based on proteins, which are of course amino acid strings, with highly specific sequences coming from deeply isolated fold domains in the space of AA chains. The AA chains in turn are coded in DNA and are algorithmically expressed using several -- dozens really -- of molecular machines in the cell. A cell that is a metabolising automaton that takes in energy and components, and transforms them into the required functions fulfilled by the machines in the cell. We happen to have highlighted cases of motors and storage elements. The ribosome is a nano-factory that makes the proteins. And, all of this has a von Neumann self-replicator as an additional facility that allows it to self-replicate. All of this is positively riddled with functionally specific, wiring-diagram organisation that has to have clusters of the right parts in the right places to work. As has been shown over and over again, and as can be observed. I won't bother to go on about Scott Minnich and the empirical demonstration of the irreducible complexity of the flagellum.
All I will say is that the machines involved and their protein components are all based on plainly functionally specific parts assembled in step-by-step, code-based ways, i.e. we are at GP's dFSCI. That is, there is a mountain of empirical evidence about the reality of FSCI in the cell, and in life forms built up from cells. One can brush it aside if one wants, but that does not make it go away. We are looking at cases E from zones T in vast spaces W. And those spaces are well beyond the search capacity of blind happenstance and mechanical forces on the scope of our solar system or the observed cosmos. If ES or the like want to dispute that, let them simply show us cases where similarly complex entities spontaneously form in the real world, by blind chance and mechanical necessity. Enough of verbal gymnastics. At the multicellular level, let us try out the formation of the avian lung, from the bellows lung, by observed stepwise advantageous changes. Likewise, let's see how the eye came about stepwise, addressing the way incremental changes happened by mutations etc., and highlighting how each step was advantageous in a real environmental niche and led to population dominance, then succession, etc. SHOW it, don't give us just-so stories with a few samples. They obviously have not done so. So, they do not have an empirically warranted theory of body plan origin beyond the FSCI threshold on chance plus necessity, never mind what they can impose by pushing a priori materialism. But we can show that design routinely gets us to FSCI. Indeed, we can easily show that a sample that is small in scope relative to a field will be maximally unlikely to pick up isolated zones. That's what the problem of searching for a needle in a haystack is all about, and that just happens to be the foundation stone of the statistical form of the second law of thermodynamics. I trust, onlookers, that I have given you enough to see why at this stage I am not taking the fulminations of an ES particularly seriously, absent EMPIRICAL demonstration. Just as, in thermodynamics, I demand that you SHOW me a perpetual motion machine before I will believe it. GEM of TKI
kairosfocus
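[Editor's note: for completeness, a minimal sketch of the log-reduced Chi_500 = I*S - 500 expression mentioned above and in the original post, with a Durston-style fits value plugged in. The 2000-bit figure is a placeholder, not a quoted measurement.]

def chi_500(I_bits: float, S: int) -> float:
    # Chi_500 = I*S - 500: I is functional information in bits (e.g. a Durston-style fits value),
    # S is 1 if the configuration is independently specified/functional, else 0.
    return I_bits * S - 500

print(chi_500(2000, 1))   # 1500 > 0: past the 500-bit threshold, design inferred on this metric
print(chi_500(2000, 0))   # -500: complex but not specified, no inference
print(chi_500(300, 1))    # -200: specified but not complex enough, no inference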
January 22, 2012, 01:29 PM PDT