
An attempt at computing dFSCI for English language


In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in english (OK, ancient english).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I have not a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space /search space ratio is above 500 bits. As I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even though I am not a linguist and not even a mathematician, I could try to define the target space more quantitatively in this case, or at least to find a reasonable higher threshold for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation), is 30^600, that is about 2^2944. IOWs, 2944 bits.
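For readers who want to check the arithmetic, here is a minimal Python sketch; the 30-character alphabet and the 600-character length are simply the assumptions stated above:

```python
import math

ALPHABET_SIZE = 30   # letters, space, elementary punctuation (assumption above)
TEXT_LENGTH = 600    # approximate length of a sonnet, in characters

# Size of the search space: 30^600, expressed in bits.
search_space_bits = TEXT_LENGTH * math.log2(ALPHABET_SIZE)
print(f"search space = 30^600 ~ 2^{search_space_bits:.0f}")   # ~ 2^2944
```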

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000, that is C(200000 + 120 - 1, 120). The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, obviously each of these combinations of 120 words can be ordered in 120! different ways, that is about 2^660 permutations. IOWs, 660 bits.

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200000 English words. Or at least, a good approximation of that number.

It’s a big number.
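A quick Python sketch of this step, reusing the assumptions above (a 200,000-word vocabulary and 120 words per text); math.comb(n + k - 1, k) counts the combinations with repetition:

```python
import math

VOCABULARY = 200_000   # assumption (a): words in English
WORDS = 120            # 600 characters / 5 characters per word

# Combinations with repetition of 120 words from the pool: C(n + k - 1, k).
combos = math.comb(VOCABULARY + WORDS - 1, WORDS)

# Each combination can be ordered in 120! different ways.
orderings = math.factorial(WORDS)

# All 120-word sequences: an upper bound on the target space.
total = combos * orderings

print(f"combinations ~ 2^{math.log2(combos):.0f}")     # ~ 2^1453
print(f"permutations ~ 2^{math.log2(orderings):.0f}")  # ~ 2^660
print(f"sequences    ~ 2^{math.log2(total):.0f}")      # ~ 2^2113
```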

Now, the important concept: in that number are certainly included all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is a higher threshold for our target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous higher threshold.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: the ratio between target space and search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here.)

So, if we take as a measure of our target space a number which is certainly an extremely overestimated higher threshold for the real value, our dFSI is still over 800 bits.

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600 characters sequences which make good sense in english is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate english sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300-character string, the dFSI value is 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
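Putting the pieces together, here is a hedged Python sketch of the whole estimate. The function name and defaults are mine, and the result depends entirely on the assumptions above (30-symbol alphabet, 200,000 words, 5 characters per word):

```python
import math

def dfsi_estimate_bits(text_length, alphabet=30, vocabulary=200_000, avg_word_len=5):
    """Upper-bound-based dFSI estimate, following the reasoning in the post."""
    k = text_length // avg_word_len                      # number of word slots
    search_bits = text_length * math.log2(alphabet)      # size of the search space, in bits
    # Upper bound on the target space: all k-word sequences from the vocabulary.
    target_bits = math.log2(math.comb(vocabulary + k - 1, k) * math.factorial(k))
    return search_bits - target_bits                     # -log2(target / search)

print(f"{dfsi_estimate_bits(600):.0f} bits")   # ~ 831 bits for a 600-character text
print(f"{dfsi_estimate_bits(300):.0f} bits")   # ~ 415 bits (the post's rounding gives 416)
```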

Comments
computerist: "gpuccio, what is your opinion that a greater degree of CSI must be present before an ever increasing amount of CSI/dFCSI can be produced?" I am not sure what you mean. Can you explain better? Thank you.gpuccio
November 11, 2014 at 02:00 PM PDT
Zachriel: "In any case, William Shakespeare had access to a huge amount of preexisting data, such as a mental dictionary, not to mention a huge amount of experience in Elizabethan society. But you want an algorithm to come up with sonnets without any input as to what sells theater tickets." The only thing I want is to infer that original sonnets are generated by conscious beings, and not by algorithms.gpuccio
November 11, 2014 at 01:59 PM PDT
DNA_Job: Now I have a little more time, and I can answer you better. Let's try this way. You say: "How can you not see that in this analogy the bullet hole represents the biochemical activity?" And: "You can make the probability arbitrarily small by making the specification arbitrarily precise." And you use those concepts, and others, to criticize any post-specification as a logical fallacy. Now, to remain concrete, let's apply these concepts to my sonnet example. Let's take a Shakespeare sonnet of about 600 characters, which I find somewhere, and which I don't know in advance, and don't know is Shakespeare's. Let's say that I don't know anything about it, and that for what I know of its origin it could be a random string. Now, the sonnet, being indeed Shakespeare's, existed before my arrive. I am sorry to disappoint some of my readers, but I am not so old. Therefore, any consideration I can make on the sonnet is a post-specification. Now, I observe three things: a) The sequence has a good meaning in English, which I perfectly understand (and which I immediately like, but this is not relevant). b) It is, indeed, an English composition in rhymed verse. c) It is, indeed, a sonnet (specific verse structure). Now, I take each of these things as specification, in turn, and compute the dFSCI accordingly. So, we have three different post-specifications, and three different computations. For the first specification, I obtain s functional information of at least 673 bits (I am accepting Roy's proposal), certainly vastly underestimated. Now, I don't want to delve into the target space of rhymed verse and of sonnets, so let's just imagine the other two results. It will be enough, for my reasoning. We have already ascertained that there can be ways to compute those numbers indirectly, at least a lower thresholds of complexity. I think we can agree that the target space for b) is smaller than for a) , and for c) it is smaller than for b). So, let's say that b) has a lower threshold of complexity of 1000 bits, and c) of 1500. Just to discuss. So, according to a general UPB of 500 bits, and being aware of no algorithm (especially non designed) which can write sonnets any more than English text, I can safely infer design for the object, according to my procedure, with all three different analyses. OK. Now, your concepts. According to your views, none of the three specifications is valid. All of them are post-specifications. Moreover, you say that the sonnet itself with its functionalities is the bullethole. OK. So, when I arrive and say: "This is a passage with good meaning in English" I am painting an arbitrary target around the object. Is that your idea? There is more. When I say: this is an English composition in rhymed verse, according to your concepts, I am again painting an arbitrary target, only this time I am probably trying to "make the probability of the object arbitrarily small by making the specification arbitrarily precise". A big fallacy, indeed. But I am not satisfied. So I pass to c). Again, I am painting an arbitrary target, and again I am trying to "make the probability of the object arbitrarily small by making the specification arbitrarily precise". What a devious thinker I am! Now, we have a problem. Never satisfied, I still want to go on in "making the probability of the object arbitrarily small by making the specification arbitrarily precise". But I cannot use the real bits in the object, because you have already warned me that, if I do that, I am doomed. 
And even I, the treacherous pseudo-scientist, know that there are limits that are best left alone. So, I am rather at an impasse. Without using the specifics of the sequence (what rhymes it contains, how many vowels, and so on), it becomes difficult. OK, I have probably one or two options left. I could define the verse (iambic pentameter?). Maybe something else. But how long can I make the probability arbitrarily small by making the specification arbitrarily precise? The point is, up to now I have only described in my specifications real properties of the sonnet. I have invented nothing. OK, I have used different levels of detail, but each one of them was correct. From now on, I should probably invent things that are not there. I don't really feel that I am the "arbiter" of this situation! OK, maybe I will be satisfied with my triple and correct design inference. After all, you will criticize me anyway! :) Ah, and I must really have been born lucky. Nobody has still offered any false positive to my fallacious procedure completely based on post-specifications.gpuccio
November 11, 2014 at 01:56 PM PDT
Zachriel, No one expects the algorithm to write a sonnet. That's the point. A sonnet is an act of intelligence. No one expects an algorithm to randomly generate a sonnet any more than anyone expects a solar-powered muddy bog to randomly generate much more complicated life.Edward
November 11, 2014 at 12:58 PM PDT
Gpuccio,
“NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).”
Are you really saying that if I have what you call “a local optimum” which has functional specificity of 1600 bits, and there are a few other distant local optimums for the same function (distant, because they are never found by variation form our first local optimum in billions of years), something changes? How many “local optimums” for ATP synthases do you imagine exist? 2^1000? Is that what you are suggesting?
Well your definition of “distant” is wrong. Tough to say which local optima have never been found, when all we have to go on is the ones that survived. 2^1000 seems possible, but the number could be a lot higher.
“How can you not see that in this analogy the bullet hole represents the biochemical activity?”
Strange. In my discourse, like in all the discourses about the fallacy, the bullet hole is the result of a random act, and the target gives meaning to it (see also Dembski).
And this discourse is no different. :) Is it beginning to dawn on you yet?
So, excuse me, but the bullet hole is some sequence we observe, and the biochemical activity is the target. IOWs, the bullet hole of variation has hit the target of the biochemical activity.
No. You seem unable to grasp the difference between the biochemical activity (which predates humans) and the specification, which is the human paintjob.
My point is that any specification which is complex enough is a marker of design. It’s you who do not understand my point.
Well I agree with you that any specification which is complex enough is a marker of intelligence. But you are trying to claim that an object that meets a sufficiently complex specification must be designed. When the specification is written post-hoc, that is just plain silly.
There is no way to make a function for a sequence “arbitrarily precise”, if I keep the functional specification independent from the sequence itself. [I]OWs, when I specify the sonnet as a passage with good meaning in English, I am not saying “any sonnet with the following characters in this order”
I retained your bit about sonnets here, since it helps clarify your intended meaning. You are claiming that, so long as I stay away from specifying the protein sequence, there is no way for me to make the specification for ATP synthase arbitrarily precise. Here goes: ATP synthase having
- Km for Mg.ATP between 0.9e-4 and 1.1e-4
- Ki for ADP between 2.8e-4 and 3.1e-4
- Ks for Mg2+ having the following pH dependence:
  pH 7.2: Ks 1e-4
  pH 7.3: Ks 0.9e-4
  pH 7.4: Ks 0.6e-4
  pH 7.5: Ks 0.4e-4
  pH 7.6: Ks 0.2e-4
These values at 25 C in 0.1M KCl. At 0.11M KCl, the values should be......should I go on? Or I could add some stuff about the rate at which Mg2+ and ADP cause the inactivation of the enzyme. Or temperature-dependence. I haven’t even mentioned the k.cat’s. The simple fact of the matter is that you, personally, have been caught re-writing your specification in order to retain the “specialness” of what you now term the “traditional ATP synthase”.DNA_Jock
November 11, 2014 at 11:11 AM PDT
gpuccio, what is your opinion that a greater degree of CSI must be present before an ever increasing amount of CSI/dFCSI can be produced?computerist
November 11, 2014 at 10:23 AM PDT
gpuccio: And William Shakespeare included a fine consciousness and sensibility, and much more, which was well beyond the information available to his “algorithm”. In any case, William Shakespeare had access to a huge amount of preexisting data, such as a mental dictionary, not to mention a huge amount of experience in Elizabethan society. But you want an algorithm to come up with sonnets without any input as to what sells theater tickets.Zachriel
November 11, 2014 at 10:02 AM PDT
gpuccio As said many times, NS is another matter. As there is no algorithm which can explain a complex sonnet, there is no algorithm which can explain a complex function. But that is another part of the reasoning Amazing that you've never heard of fractals or the Mandelbrot set. There is even evidence that the early multicellular life forms in the Ediacaran grew with a fractal format.
Fractal branching organizations of Ediacaran rangeomorph fronds reveal a lost Proterozoic body plan Cuthill, Morris PNAS September 9, 2014 vol. 111 no. 36 Summary: Rangeomorph fronds characterize the late Ediacaran Period (575–541 Ma), representing some of the earliest large organisms. As such, they offer key insights into the early evolution of multicellular eukaryotes. However, their extraordinary branching morphology differs from all other organisms and has proved highly enigmatic. Here we provide a unified mathematical model of rangeomorph branching, allowing us to reconstruct 3D morphologies of 11 taxa and measure their functional properties. This reveals an adaptive radiation of fractal morphologies which maximized body surface area, consistent with diffusive nutrient uptake (osmotrophy). Rangeomorphs were adaptively optimal for the low-competition, high-nutrient conditions of Ediacaran oceans. With the Cambrian explosion in animal diversity (from 541 Ma), fundamental changes in ecological and geochemical conditions led to their extinction.
Simple iterative processes that produce great complexity (and gobs of CSI / dFSCI / FIASCO). Whoda thunk? :)Adapa
November 11, 2014 at 10:02 AM PDT
Amazing: to the materialist, nothing can do anything and everything.... It can search spaces, decide on the best solution, reverse engineer, problem solve, build things, create CSI. Nothing is truly a miracle worker; it can do everything. All praise nothing!!!Andre
November 11, 2014 at 09:49 AM PDT
DNA_Jock: "NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco)." Are you really saying that if I have what you call "a local optimum" which has functional specificity of 1600 bits, and there are a few other distant local optimums for the same function (distant, because they are never found by variation form our first local optimum in billions of years), something changes? How many "local optimums" for ATP synthases do you imagine exist? 2^1000? Is that what you are suggesting? "How can you not see that in this analogy the bullet hole represents the biochemical activity?" Strange. In my discourse, like in all the discourses about the fallacy, the bullet hole is the result of a random act, and the target gives meaning to it (see also Dembski). So, excuse me, but the bullet hole is some sequence we observe, and the biochemical activity is the target. IOWs, the bullet hole of variation has hit the target of the biochemical activity. The biochemical activity is a function. I can define it in different ways. No problem. The point is that if I need a lot of specific bits to implement that function, that function is complex. But there are other complex functions. And so? As I have said, there are sonnets in many languages. Does that invalidate my design inference for a sonnet in English? If it is so, why nobody can provide a false positive to the target I painted on the sonnet? "Here we see the fallacy in its distilled form. Your first sentence is correct. The bullet holes have been there since before humans existed. The PAINT is the human artefact. And how you paint the circles depends on which bullet holes you have discovered to date. As you have inadvertently demonstrated on this thread. Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work." And all that is completely irrelevant. My point is that any specification which is complex enough is a marker of design. It's you who do not understand my point. "Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise." What do you mean? My specification of the Shakespeare sonnet is a post-specification. If it is a fallacy, how is it that it works so well? There is no way to make a function for a sequence "arbitrarily precise", if I keep the functional specification independent from the sequence itself. OWs, when I specify the sonnet as a passage with good meaning in English, I am not saying "any sonnet with the following characters in this order". I am defining a partition which is independent from the specific character sequence. Even the reference to a language, English, has nothing to do with the specific characters in the sequence. 
Those same characters can be used in many other languages, or in any string without any meaning. The reference to meaning is a direct reference to a conscious experience, and the reference to English is a reference which is independent from the system of a random character generator. Therefore, my specification works. In the same way, the sequence of nucleotides in a protein coding gene, as transformed by RV, is completely independent from function and from the protein space. So, there is no way that I can narrow my definitions so that I can make any results of a random search more likely. As said many times, NS is another matter. As there is no algorithm which can explain a complex sonnet, there is no algorithm which can explain a complex function. But that is another part of the reasoning.gpuccio
November 11, 2014 at 09:34 AM PDT
“The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.”
And what is informative about that history? Just to understand.
Bottom-up studies, such as Keefe and related work. Sadly, you have some strange ideological resistance to these studies, perhaps related to the results they provide.
The same is true for ATP synthase. Nobody can deny the high level of specified information which is necessary for the protein to work in that form.
The issue is with your inclusion of the word “specified” here.
As I have said many times, if many other sequences could be enough for the protein to work, neutral variation would have found many of them. It hasn’t.
NOOOOO. This is terribly wrong, and perhaps at the root of your inability to see the problem. Neutral variation will explore the width of that one local optimum. The extent of neutral variation says NOTHING about whether there are other local peaks, either nearby, or far away (Nina’s bullet holes), or whether there is an even higher peak in the region (see REC’s citations on Rubisco).
How can you not see that a phrase like: “The bullet holes have been in the wall since before any humans existed.” is simply obfuscation? If you are saying that the proteins were there, but they did nothing, and started to work as soon as we looked at them, have the courage to say so.
How can you not see that in this analogy the bullet hole represents the biochemical activity?
No. The proteins were there, and they did exactly what they do now. The bullet holes and the targets were there since before any humans existed.
Here we see the fallacy in its distilled form. Your first sentence is correct. The bullet holes have been there since before humans existed. The PAINT is the human artefact. And how you paint the circles depends on which bullet holes you have discovered to date. As you have inadvertently demonstrated on this thread. Yes ATP was getting synthesized before humans existed, but the specification “ATP synthase” was generated by humans AFTER the biochemical activity was delineated. And re-defined by you in the light of Nina’s work.
the only purpose of this attitude seems to be to discredit a perfectly valid scientific post specification as though it were a logical fallacy, as though any post specification were a fallacy. Which is simply not true.
Well, I have yet to see an IDist come up with a post-specification that wasn’t a fallacy. Let’s just say that you have to be really, really, really cautious if you are applying a post-facto specification to an event that you have already observed, and then trying to calculate how unlikely that specific event was. You can make the probability arbitrarily small by making the specification arbitrarily precise.DNA_Jock
November 11, 2014 at 09:08 AM PDT
drc466: I am not sure that I understand your example. "Imagine for a moment that you have a lottery that consists of 50 numbers from 1-1000." What do you mean exactly? How is the lottery structured? "Your Target Space is 1 (winning number)," What do you mean? What is the object conveying the information? Or about which you are trying to make the design inference? "and your Search Space is approximately 2^500 (obviously, no lotto would do this because no one would ever win – but it could happen)." Do you mean that there are 2^500 tickets? And one is extracted? But you said "50 numbers from 1-1000". Please, explain better. "The winning number conveys information – “What is the winning number to the Super-Stupid Lotto!”," So, let's say that your object is a paper with the winning number? " and a dFSCI calculation is 500bits." In what sense? That is true only if you define a random system as a method to preview (or guess) the winning number. Obtaining any pre-specified number out of 2^500 by a random search is indeed almost magic. So, let's say that you have a random number generator which gives you a number in one attempt, and you say: this number tomorrow will win the lottery. And then it happens. Many would be suspicious... Perhaps I understand what you mean. In a miraculous pre-announcement of the winning number, the unexplained dFSCI is not in the number itself (which is a simple piece of information), but in the system which chooses it as the future winner. The dFSCI is in the system. So, the two hypotheses are: a) You and the system you use have been extremely lucky (but try to convince the judges) b) The system is designed (IOWs, you fixed the lottery so that you could announce in advance the winner). The design here is not in the number, but in the system. It is not the number itself, or its sequence, which brings the information. Is that what you meant?gpuccio
November 11, 2014 at 08:58 AM PDT
gpuccio, I have an objection that I hope you will find on-topic. The objection is that I'm not sure that English phrases (or any written form of communication), independent of the "information" they convey, are a valid test of the dFSCI. My "false positive" example is a lottery drawing. Imagine for a moment that you have a lottery that consists of 50 numbers from 1-1000. Your Target Space is 1 (winning number), and your Search Space is approximately 2^500 (obviously, no lotto would do this because no one would ever win - but it could happen). The winning number conveys information - "What is the winning number to the Super-Stupid Lotto!", and a dFSCI calculation is 500bits. So dFSCI would say "Yes, designed", when the winning number was just randomly produced. Is the flaw in my logic that, since a human had to "pick" the winning number, it is "designed"? I'm curious where this argument breaks down. (My objection would be that the information conveyed (winning lotto #) comes in at less than 500 bits, even though the method of conveying it (50 3-digit numbers) comes in at more. Unfortunately, that would seem to invalidate using the symbology as a valid test, and makes the true calculation difficult(impossible?)).drc466
November 11, 2014 at 08:16 AM PDT
Keith s, The more you argue, the less I think of your comprehension skills. Which is why I always stop arguing with you eventually. One last try:
You get exactly the same answer whether or not you do the calculation, in 100% of the cases
Exactly...wrong. My "sky is blue" example should have been sufficient, but here's a longer explanation: Assume we are trying to detect design in english phrases. We have a computer that is generating a single random phrase, and a person writing a single meaningful sentence. Can we detect which produced the following (e.g. which is designed)? 1) "I" - english word (function); People will agree it could be the computer; dFSCI says unknown; may or may not be designed 2) "SKY IS BLUE" - english phrase (function); looks design-y; people will disagree whether a computer could have kicked it out; dFSCI says unknown; may or may not be designed 3) 600-word Shakespearean sonnet (function); looks design-y; some People will disagree whether a computer could have kicked it out (hard as that may be to believe); dFSCI says MUST BE DESIGNED; must be designed (human wrote it). You're getting hung up because we're discussing easily-recognizable "designed" objects (words, machines, etc.), where "common sense" leads almost everyone to agree on the answer. The whole point of trying to come up with a valid calculation is so that we can use it on functional things that aren't human-made and therefore not easily recognizable - life being one of those. 1) ATP-Synthase/PCD/Flagellum - has function, looks design-y 2) People will disagree whether it was intelligently designed 3) Perform dFSCI calculation 4) Calculation shows that it must be designed 5) People will disagree whether dFSCI is a valid calculation Regardless of point 5, your objections that you get the same answer whether or not you perform the calculation (see points 2 and 3 of my example) is flat wrong, and your objection that the calculation is irrelevant is therefore also wrong.drc466
November 11, 2014 at 08:01 AM PDT
DNA_Jock: "I know you don’t like it. From your attempts to refute it, it appears that you don’t understand it." Your opinion. I could simply counter that you don't understand ID. "The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity." And what is informative about that history? Just to understand. "The bullet holes have been in the wall since before any humans existed." As were the biochemical activities. Or am I missing something? "Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one. Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations. Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed." Absolutely not. What I find is a complex multi chain protein which works in a very brilliant way to generate ATP form a proton gradient. What I find is that this protein, in its rotor part, requires a strong conservation of the sequence of two chains. What I find is that the protein is functional and conserved. My calculations are not destroyed. To infer design, what I need is to find specific information linked to a function. I can redefine the function if necessary, but the concept is that any high level of specific information linked to any explicitly defined function is a mark of design. You seem not to understand that, but it is exactly the reason why I can infer design for the Shakespeare sonnet either I define the function as being s sonnet in English, or more generically as being a passage in English. In both cases, the linked information is extremely high, even if not the same. You seem to forget that our purpose in measuring dFSCI is simply to detect design. I detect design in the sonnet, and I am right. You cannot give a false positive, because my definition of a context which guarantees a correct design inference is right. The same is true for ATP synthase. Nobody can deny the high level of specified information which is necessary for the protein to work in that form. As I have said many times, if many other sequences could be enough for the protein to work, neutral variation would have found many of them. It hasn't. The Alveolata protein is another machine, made with different components. Its complexity is probably comparable to the complexity of the traditional protein, but it is another molecule. That's why it uses other chains, which are different from the chains in the traditional molecule. So, let's say that we have two very different cars, say a small Ford and a Ferrari. They have different carburettors. You cannot mount the Ferrari carburettor in the Ford, and probably they look very different. So you say: "see, they have the same function, but they are very different. That proves that it is very easy to implement the function, any carburettor will do." No, The Ferrari carburettor is different and specific. As different and specific are the chains in traditional ATP synthase. How do we know that they are specific? Because they are extremely conserved. So, all you arguments about painting and post-specification are simply wrong. You obfuscate, certainly in good faith, but you obfuscate just the same. 
How can you not see that a phrase like: "The bullet holes have been in the wall since before any humans existed." is simply obfuscation? If you are saying that the proteins were there, but they did nothing, and started to work as soon as we looked at them, have the courage to say so. It would be a strange application of quantum mechanics to biology, but at least it would be consistent. No. The proteins were there, and they did exactly what they do now. The bullet holes and the targets were there since before any humans existed. Your obfuscation is that you try to confound methodological problems which legitimately arise when we try to scientifically describe both the bulletholes and the targets with a false argument that we are painting the targets from scratch, and the only purpose of this attitude seems to be to discredit a perfectly valid scientific post specification as though it were a logical fallacy, as though any post specification were a fallacy. Which is simply not true. You say that I don't like your argument. It's true. I don't like it, because it is wrong and unscientific.gpuccio
November 11, 2014 at 07:39 AM PDT
Me_Think: "How is Shakespeare a ‘Super Design’ ?" It was just a personal appreciation from the heart for the quality of his poetry!gpuccio
November 11, 2014 at 07:12 AM PDT
DNA jock:
The problem with all formulations of ID to date is that “design” can be generated by processes of trial-and-error, irrespective of whether any intelligent intervention occurred.
Easier said than demonstrated, of course.Joe
November 11, 2014 at 07:06 AM PDT
gpuccio, Your #150:
DNA_Jock[:] You know what I think of your “painting” argument.
I know you don’t like it. From your attempts to refute it, it appears that you don’t understand it.
With ATP synthase, the problem is different. I chose the alpha and beta subunits of ATP synthase as a good easy example of very high dFSCI, which they are, because they are long sequences with very high conservation and a very clear function in the context of a bigger multi-sequence molecule. As you well know, it is not an isolated example of high functional conservation. I have mentioned also histone H3, which is shorter but even more conserved. I have always said clearly that those two sequences are only part of a more complex molecule. The Alveolata ATP synthase is another complex molecule, which uses other sequences. In no way that means that the specific sequence of alpha and beta chains of the common form of ATP synthase is not essential to the functioning of the molecule as it is. If that were not the case, why would those AA positions have been conserved in spite of all possible neutral variation?
The degree of sequence conservation tells us how tight the peak is at the local optimum. It is rather uninformative about the history of the biochemical activity.
I am not redefining anything. I have always reasoned about the molecular assembly of ATP synthase “in common form”. For the working of that molecular assembly, those two chains are essentially conserved and necessary.
Here you show your failure to comprehend the objection. I will try to explain one more time. The bullet holes have been in the wall since before any humans existed. Along comes John Walker: “Look at this bullet hole I found”. Others find a tight grouping of bullet holes around this one. Along comes gpuccio, paints a circle around the bullet holes and calls his circle “the functional specification for ATP synthase”. Does some calculations. Along comes Praveen Nina and others, and points to a bullet hole that is a long, long way away from Walker’s tight grouping, but still falls within gpuccio’s original specification “ATP synthase”. His calculations are destroyed. In light of Nina et al, gpuccio does two things: 1) He re-draws his circle so that it now excludes Alveolata, and renames the Walker circle “the traditional ATP synthase” 2) he draws a brand-spanking-new circle around the Alveolata bullet hole(s) because it is a “very different complex molecule, made of many different protein sequences, and is a complex example of a different engineering solution”. Mischief managed. How can you not see that all of your specifications are post-hoc?
You must clarify your position: are you denying that it is possible to measure functional complexity, in language as in proteins? Or are you just suggesting a better way to do it?
I am not denying that it is possible, in either case. I think it is rather difficult, in both cases. I note in passing that the two cases are rather different, hence my lack of interest in the original topic of this thread.
If you are in the denialist position, I invite you to explain what is wrong in my reasoning about the Shakespeare Sonnet, and then to provide a false positive, or just explain how is it that a wrong reasoning works so well.
I am happy to stipulate that “design” can be detected. The problem with all formulations of ID to date is that “design” can be generated by processes of trial-and-error, irrespective of whether any intelligent intervention occurred. There is a strange irony to the fact that one of your objections to Keefe & Szostak is that they chose ATP binding. You chose “ATP synthase”, rather than “APP synthase” or an infinite number of other biochemical activities, because it exists. They, at least, have an excuse.DNA_Jock
November 11, 2014 at 06:25 AM PDT
GP @ 154
I am well aware that mine is “a severe under estimate”, but it seems good enough to disturb our interlocutors a little! :) And I agree, absolutely, that Shakespeare is “super design”.
How is Shakespeare a 'Super Design' ?Me_Think
November 11, 2014 at 06:00 AM PDT
fifthmonarchyman @ 147
me think said, What do you mean by moving down the Y-Axes? I say, check out comment 81
What you should do is check the entropy.Me_Think
November 11, 2014 at 05:57 AM PDT
Adapa:
The purpose of a dFSCI calculation is not to convince anyone in the scientific community of its design detection worth.
This alleged scientific community doesn't have any methodology that comes close to being as good as CSI and dFSCI. That means their complaints are just whining.Joe
November 11, 2014 at 05:53 AM PDT
DNA jock- REC cannot explain any ATP synthase. Unguided evolution is incapable of producing them.Joe
November 11, 2014 at 05:50 AM PDT
Zachriel:
Evolutionary algorithms require an interface to an environment of some sort.
Evolutionary algorithms are examples of intelligent design evolution. They don't have anything to do with unguided evolution.Joe
November 11, 2014 at 05:49 AM PDT
#140 fifthmonarchyman Very interesting indeed. Thank you for the link to the PDF document that inspired you to work on that project. :)Dionisio
November 11, 2014 at 05:42 AM PDT
KF: I am well aware that mine is "a severe under estimate", but it seems good enough to disturb our interlocutors a little! :) And I agree, absolutely, that Shakespeare is "super design". As are many exceptional proteins whose biochemical efficiency is overwhelming. I think we agree that a design inference does not necessarily imply optimal design. But, when we observe optimal design, it's simple fairness to recognize it. Our heartfelt gratitude, then, to Shakespeare and to all the great designers in this world.gpuccio
November 11, 2014 at 05:13 AM PDT
F/N 2: GP thanks, I snatch another quick pause. The confinement to English text alone already builds in a whole apparatus of rules, conventions, and structures that are FSCO/I rich, so the estimations you do will be quite conservative, a severe under estimate. I tend to think physically and so I think in terms of a string register with the possibility of something like Zener noise filling it, and that defines the point that any of the 128 ASCII codes can appear; whether or not that is flat random, it is not constrained by the physics at work. Thus, this leads to the situation where the real space of possibilities for a register of length n seven-bit characters is 128^n. Just 72 ASCII characters would exhaust the resources of the sol system, and 143 those of the observed cosmos, to generate anything more than a very sparse, vanishingly sparse, sample that we only have reason to expect will snapshot the bulk, not special zones such as text in Elizabethan English. However the message is still the same: text in such patterns is reflective of such special, separately identifiable characteristics that we have no good reason to expect that blind search, whether scattershot or random walk, will reasonably ever produce such. Design routinely produces such, though Shakespeare is anything but routine. KFkairosfocus
November 11, 2014 at 05:05 AM PDT
Zachriel: "Evolutionary algorithms require an interface to an environment of some sort. Turns out that Shakespeare also incorporated information from his cultural environment. For instance, the William Shakespeare algorithm included an extensive dictionary, grammar rules, stock phrases, scansion, personality types, history, and so on." And William Shakespeare included a fine consciousness and sensibility, and much more, which was well beyond the information available to his "algorithm". I must say that a phrase like "the William Shakespeare algorithm", used by an intelligent person like you (and, I am sure, purposefully), has a strange effect on me. Not really good.gpuccio
November 11, 2014 at 04:57 AM PDT
KF: Thank you! :)gpuccio
November 11, 2014 at 04:52 AM PDT
DNA_Jock: You know what I think of your "painting" argument. With ATP synthase, the problem is different. I chose the alpha and beta subunits of ATP synthase as a good easy example of very high dFSCI, which they are, because they are long sequences with very high conservation and a very clear function in the context of a bigger multi-sequence molecule. As you well know, it is not an isolated example of high functional conservation. I have mentioned also histone H3, which is shorter but even more conserved. I have always said clearly that those two sequences are only part of a more complex molecule. The Alveolata ATP synthase is another complex molecule, which uses other sequences. In no way that means that the specific sequence of alpha and beta chains of the common form of ATP synthase is not essential to the functioning of the molecule as it is. If that were not the case, why would those AA positions have been conserved in spite of all possible neutral variation? I am not redefining anything. I have always reasoned about the molecular assembly of ATP synthase "in common form". For the working of that molecular assembly, those two chains are essentially conserved and necessary. You must clarify your position: are you denying that it is possible to measure functional complexity, in language as in proteins? Or are you just suggesting a better way to do it? If you are in the denialist position, I invite you to explain what is wrong in my reasoning about the Shakespeare Sonnet, and then to provide a false positive, or just explain how is it that a wrong reasoning works so well.gpuccio
November 11, 2014 at 04:50 AM PDT
GP, busy -- doubly so today -- but spotted this; in the Intro-summary IOSE, the Voynich Manuscript is featured. Part of the problem is a confusion that design recognition is a universal decoding process, which obviously is highly dubious on computation theory. Just the drawings as well as the context of being a codex, are enough to show design per manifest FSCO/I. By whom, why, with what possible decoding of the apparent text, are other questions well beyond the core issue of the design inference. Gone, KFkairosfocus
November 11, 2014 at 04:42 AM PDT