Uncommon Descent Serving The Intelligent Design Community

An attempt at computing dFSCI for English language


In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, early modern English).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I do not have a simple way to measure the target space for the English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the functional complexity (the -log2 of the target space/search space ratio) is above 500 bits, as I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even though I am neither a linguist nor a mathematician, I could try to quantify the target space better in this case, or at least to find a reasonable upper bound for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

Assuming an alphabet of 30 characters (letters, space, elementary punctuation), each equally probable, the search space for a random search over 600-character texts is 30^600, that is about 2^2944. IOWs, 2944 bits.
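
(For readers who want to check the arithmetic, here is a minimal sketch in Python, standard library only; the 30-character alphabet is the assumption stated above.)

```python
from math import log2

# Search space for 600 characters drawn from a 30-symbol alphabet:
# 30^600 possible sequences, i.e. 600 * log2(30) bits.
search_bits = 600 * log2(30)
print(round(search_bits))  # -> 2944
```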

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the easy assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000. The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, each combination of n words can be arranged in at most n! different orders, so each of our combinations has up to 120! permutations, and 120! is about 2^660. IOWs, 660 bits. (At most, because a combination with repeated words has fewer distinct orderings; for an upper bound that is fine.)

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200,000 English words, or at least a good approximation of that number. (As a check: the number of such word sequences is simply 200,000^120, which is indeed about 2^2113.)
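
(All three figures can be verified directly; a small sketch, assuming Python 3.8+ for math.comb.)

```python
from math import comb, factorial, log2

pool, words = 200_000, 120

# Combinations with repetition: C(pool + words - 1, words)
combos = comb(pool + words - 1, words)
print(round(log2(combos)))            # -> 1453 bits

# Orderings of 120 words: 120!
print(round(log2(factorial(words))))  # -> 660 bits

# Their product, and the direct count of word sequences, 200000^120
print(round(log2(combos) + log2(factorial(words))))  # -> 2113 bits
print(round(words * log2(pool)))                     # -> 2113 bits
```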

It’s a big number.

Now, the important concept: that number certainly includes all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is an upper bound for our target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous upper bound.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: take -log2 of the ratio between the target space and the search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here.)

So, even if we take as a measure of our target space a number which is certainly an extreme overestimate of the real value, our dFSI is still over 800 bits.
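
(In the same Python sketch, under the same assumptions, the final subtraction is one line.)

```python
from math import log2

target_bits = 120 * log2(200_000)  # generous upper bound on the target space
search_bits = 600 * log2(30)       # the full search space
print(round(search_bits - target_bits))  # -> 831 bits of functional information
```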

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300-character string, the dFSI value is about 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
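
(The same computation, parameterized by string length, shows that growth directly; this is a sketch under the assumptions above: a 30-character alphabet, 200,000 words, 5 characters per word. The 300-character case comes out at ~415.5 bits, i.e. the ~416 quoted above, up to rounding.)

```python
from math import log2

def dfsi_bits(n_chars, alphabet=30, dictionary=200_000, word_len=5):
    """Conservative dFSI estimate: search-space bits minus a generous
    upper bound on target-space bits (the count of word sequences)."""
    search_bits = n_chars * log2(alphabet)
    target_bits = (n_chars // word_len) * log2(dictionary)
    return search_bits - target_bits

print(f"{dfsi_bits(600):.1f}")  # -> 831.0
print(f"{dfsi_bits(300):.1f}")  # -> 415.5
```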

Comments
Bob O'H: "True. But how would you define the search space for an organism. For example, what’s the search space for (say) a strain of the ‘flu virus?" Of course, it's simpler to compute dFSCI for smaller items. Usually I apply it to proteins. See the example of ATP synthase. We could apply the concept to a whole genome, like that of a virus. The search space is not a big problem, because it can be defined as all possible nucleotide sequences of that length (4^n). But any computation of the target space will depend on the function we define for the object, and the target space can be very complex to analyze for big functional objects like a whole viral genome. For a protein, it is easier to define an appropriate function. Usually I prefer to stick to the "local" function, that is the biochemical activity. That is certainly the best solution for enzymes.gpuccio
November 10, 2014
November
11
Nov
10
10
2014
02:30 PM
2
02
30
PM
PDT
gpuccio, Great OP. I find it fascinating. It is very similar to something I've been kicking around for comparing graphical representations of designed phenomena versus data resulting from a combination of random and algorithmic processes. I too would very much like to see the evidence of false positives. Could a critic please link to an algorithm that yields a positive number of bits using this calculation? If not, could said critic provide evidence that such an algorithm is at least possible in theory? Once we have cleared that low hurdle we can begin the discussion of whether any of this is useful or at all relevant to biology, one thing at a time. Peace

fifthmonarchyman — November 10, 2014 at 02:26 PM PDT
Let me ask something. If all English speakers died out and then Chinese scientists discovered English texts, could they calculate their dFSCI? Could they tell the meaningful from the gibberish? Also, what is meant by information in your calculation? The information content in this sentence is not found separately in each word but in their associations. So I could write "red happens glory fishing diamond wrangler" and although each word has meaning, the phrase itself has none. Can that be calculated or determined somehow? Can an objective number be placed on it? 12 units of meaning?

Collin — November 10, 2014 at 02:21 PM PDT
gpuccio @51 -
1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods.
True. But how would you define the search space for an organism? For example, what's the search space for (say) a strain of the 'flu virus?

Bob O'H — November 10, 2014 at 02:19 PM PDT
drc466: "gpuccio, I think there’s an issue with this: 2^2113 / 2^ 2944 = 2^831. IOWs, 831 bits. From a strictly mathematical sense, your ratio is inverted." Thank you! That is a stupid error. The ratio is correct (it's the ratio of the target space to the search space), but the result is wrong: it should be 2^-831. Then, the -log2 becomes 831 bits of functional complexity. Thank you really! That's exactly what I needed. I will immediately correct the OP. If you find any other error, please tell me.gpuccio
November 10, 2014
November
11
Nov
10
10
2014
01:50 PM
1
01
50
PM
PDT
Dionisio: "Got to find how to submit my application." Encrypted, of course! :)gpuccio
November 10, 2014
November
11
Nov
10
10
2014
01:45 PM
1
01
45
PM
PDT
#46 Reality It would be appreciated if "off topic" commentaries are explicitly labeled as OT (for example see post 36). Thus the readers can skip the post when they see the label 'OT' at the beginning of the comment. BTW, are you out of touch with the meaning of your pseudonym? :) Oops! Just realized I forgot to mark posts 45 and 49 as OT. My fault. Do as I say, not as I do. :)

Dionisio — November 10, 2014 at 01:43 PM PDT
Friends: I am honored by the many comments, but still I would like to outline a few points in the OP which could be of interest, if someone wants to consider them (keith is exonerated; I don't want him to waste time on irrelevant things when he has to work so hard reposting here). So, if the computation here is correct, a few interesting things ensue:

1) It is possible to compute the target space, and therefore dFSCI, for specific search spaces by some reasonable, indirect method. Of course, each space should be analyzed with appropriate methods.

2) Nobody seems to object that he knows some simple algorithm which can write a passage of 600 characters which has good meaning in English. Where are all those objections about how difficult it is to exclude necessity, and about how that generates circularity, and about how that is bound to generate many false positives? The balance at present:

a) Algorithms proposed to explain Shakespeare's sonnet (or any other passage of the same length in good English): none.

b) False positives proposed: none.

c) True positives found: a lot. For example, all the posts in the last thread that were longer than 600 characters (there were a few).

3) We have a clear example that functional complexity, at least in the language space, is bound to increase hugely with the increase in length of the string. This is IMO an important result, very intuitive, but now we have a mathematical verification. Moreover, while the above reasoning is about language, I believe that it is possible in principle to demonstrate it also for other functional spaces, like software and proteins.

Any comments? Maybe there is some room left in the intervals between one of keith's reposts and the following. :)

gpuccio — November 10, 2014 at 01:43 PM PDT
Keith s,

1) Despite your admiration, natural selection serves as a subtractive force in a search - it reduces the number of spaces searched. It doesn't directly affect either the target space, or the search space, numerically - it simply reduces the number of tries (think of it as rolling 10 dice, and then removing all the 2's and 3's - you've reduced your ability to hit the target if the target requires 2's and 3's). Natural selection makes it harder for evolution to get a good result, not easier. One reason for the ready acceptance of neutral theory is to improve the odds hurt by NS.

2) "If you recognize it as meaningful English, conclude that it must be designed have function." When you fix this glaring error in your "logic", it is obvious you have completely misstated the issue. The process is detect function/specificity, calculate complexity, determine design - not detect design, calculate complexity, determine design. It is certainly possible to detect function (e.g. computer generates "sky is blue") without design (search was random). Your logic fails.

gpuccio, I think there's an issue with this:
2^2113 / 2^ 2944 = 2^831. IOWs, 831 bits.
From a strictly mathematical sense, your ratio is inverted.

drc466 — November 10, 2014 at 01:41 PM PDT
#20 jerry GP wrote that at least one of them is not a very expensive double agent? I wonder how much the blog pays them? I could use a few bucks now and then... maybe this is one of the 'make easy money online' ads I've seen out there? Perhaps if I practice writing nonsense and asking senseless questions I could pretend to be one of those guys and get hired by this blog as another anti-ID double agent? Got to find how to submit my application. Do they require a CV or résumé too? Probably no photo ID or any other ID required, because they hire anti-ID pretenders. :)

Dionisio — November 10, 2014 at 01:30 PM PDT
Reality: Was that a comment on my computation? If it is, it's very subtle.

gpuccio — November 10, 2014 at 01:26 PM PDT
Mullerpr@33: "Who made rhe sieve that can size particles? Do you know many sieve like things in nature? How does size distribution become information?" Have you ever walked on a beach? That is only possible because of non-designed sieve-like thing.centrestream
November 10, 2014
November
11
Nov
10
10
2014
01:23 PM
1
01
23
PM
PDT
In these and other posts and comments on his blog, Joe explains how to measure CSI. kairosfocus, Barry, and other IDists, gaze upon the brilliant words of your "fellow traveler" and "ilk" (just two of kairosfocus's favorite attack terms when he constantly lumps, slanders, and falsely accuses "evomats", atheists, agnostics, scientists, alleged "enablers", anti-ID blogs that he calls "fever swamps", etc., etc., etc.):

http://intelligentreasoning.blogspot.com/2009/03/measuring-information-specified.html
http://intelligentreasoning.blogspot.com/2014/04/measuring-csi-in-biology-repost.html

There's more here: http://intelligentreasoning.blogspot.com/

Reality — November 10, 2014 at 01:20 PM PDT
#20 Jerry
I often wonder that the hostility and inanity of most of the anti-ID people is due to that they may be double agents and produce incoherent irrelevant comments to make the pro-ID people look good. Or that they are mindless and egged on by someone who is a double agent.
Sometimes I've thought of that too. The irrational nature of the anti-ID attacks and the clueless commentaries of the 'n-D e' folks make me think those guys are paid double agents just pretending. Who knows? Maybe it's true? It would be disappointing to discover they use this tricky tactic in this blog. That's why I try hard to avoid falling into the traps of their senseless arguments, but sometimes I can't resist the temptation to get involved in the discussions too. My bad. Fortunately my comments are often completely ignored by most commenters, hence I don't last long in those discussion threads. :)

Dionisio — November 10, 2014 at 01:13 PM PDT
gpuccio,
Why do you both repost and link to the original?
So that readers can see the comment in its original context, if they desire.

keith s — November 10, 2014 at 12:58 PM PDT
gpuccio,
You repost a post where you say: “Yet both KF and gpuccio admit that you don’t even need to do the calculation.” as a comment to an OP where I have done the calculation?
Of course. That's my point. As I said above:
A correct computation of an irrelevant number is still irrelevant, so it doesn’t matter whether the computation is correct. Evolution includes selection, and your number fails to take selection into account.
keith s — November 10, 2014 at 12:56 PM PDT
5for, Here is my response to the second part of gpuccio's #37: gpuccio, to Learned Hand:
I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin.
gpuccio, That is true for Dembski’s CSI, but not your dFSCI. And as I pointed out above, Dembski’s CSI requires knowing the value of P(T|H), which he cannot calculate. And even if he could calculate it, his argument would be circular. Your “solution” makes the numerical value calculable, at the expense of rendering it irrelevant. That’s a pretty steep price to pay.
There are indeed different approaches to a formal definition of CSI and of how to compute it,
Different and incommensurable.
a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space.
Which is already a problem, because evolution does not seek out predefined targets. It takes what it stumbles upon, regardless of the “specification”, as long as fitness isn’t compromised.
b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. c) I define a subset of FSI: those objects exhibiting digital information. d) I define dFSI the -log2 of the ratio of the target space / the search space.
This is why the numerical value of dFSCI is irrelevant. Evolution isn’t searching for that specific target, and even if it were, it doesn’t work by random mutation without selection. By omitting selection, you’ve made the dFSCI value useless.
e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms) To infer design for an object, the procedure is as follows: a) I observe an object, which has its origin in a system and in a certain time span. b) I observe that the configuration of the object can be read as a digital sequence. c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type. d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object. e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length). f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function. h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure.
Which immediately makes the judgment subjective and dependent on your state of knowledge at the time. So much for objectivity.
i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. For biological proteins on our planet, I have proposed 150 bits (after a gross calculation).
Again, this is useless because nobody thinks that complicated structures or sequences come into being by pure random variation. It’s a numerical straw man.
l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated from a conscious representation and a conscious intentional output of information from a designer to the object.
In other words, you assume design if gpuccio is not aware of an explicit algorithm capable of producing the sequence. This is the worst kind of Designer of the Gaps reasoning. It boils down to this: “If gpuccio isn’t aware of a non-design explanation, it must be designed!”

keith s — November 10, 2014 at 12:53 PM PDT
keith: Why do you both repost and link to the original? Is that functional redundancy? A secret aspiration to robustness? An attempt to reach an atemporal singularity?

gpuccio — November 10, 2014 at 12:51 PM PDT
Dionisio: Thank you, as always. :)

gpuccio — November 10, 2014 at 12:49 PM PDT
keith: You are really beyond comprehension. You repost a post where you say: "Yet both KF and gpuccio admit that you don’t even need to do the calculation." as a comment to an OP where I have done the calculation? I will never understand you!

gpuccio — November 10, 2014 at 12:48 PM PDT
Another comment worth reposting: Learned Hand, We’ve tumbled into a world where Logic is not spoken. KF and gpuccio claim that FSCO/I and dFSCI are useful. Gpuccio suggested a test procedure to prove this. Yet both KF and gpuccio admit that you don’t even need to do the calculation. It reveals absolutely nothing that you didn’t already know. Why would anyone bother? Gpuccio, can you come up with a test procedure in which dFSCI actually does something useful, for a change? It’s pretty clear why you and KF don’t submit papers on this stuff. Even an ID-friendly journal would probably reject it, unless they were truly desperate.

keith s — November 10, 2014 at 12:43 PM PDT
5for: From the other thread:
Me_Think at #644: “gpuccio explained that dFSCI doesn’t detect design, only confirms if a design is real design or apparent design.” I don’t understand what you mean. dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection; you are only recognizing the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, mere recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection. Surely you can understand such a simple concept, can you?
And from another post:
Learned Hand: I will explain what is “simple, beautiful and consistent” about CSI. It is the concept that there is an objective complexity which can be linked to a specification, and that high values of that complexity are a mark of a design origin. This is true, simple and beautiful. It is the only objective example of something which can only derive from a conscious intentional cognitive process. There are indeed different approaches to a formal definition of CSI and of how to compute it, and of how to interpret the simple fact that it is a mark of design. I have tried to detail my personal approach, mainly by answering the many objections of my kind interlocutors. And yes, there are slight differences between my approach and, for example, Dembski’s, especially after the F. My approach is essentially a completely pragmatic formulation of the EF. In brief. a) I define a specification as any explicit rule which generates a binary partition in a search space, so that we can identify a target space from the rest of objects in the search space. b) I define a special subset of SI: FSI. IOWs, of all possible types of specification I choose those where the partition is generated by the definition of a function. c) I define a subset of FSI: those objects exhibiting digital information. d) I define dFSI the -log2 of the ratio of the target space / the search space. e) I categorize the value of dFSI according to an appropriate threshold (for the system and object I am evaluating, see later). If the dFSI is higher than the threshold, I say that the object exhibits dFSCI (see later for the evaluation of necessity algorithms) To infer design for an object, the procedure is as follows: a) I observe an object, which has its origin in a system and in a certain time span. b) I observe that the configuration of the object can be read as a digital sequence. c) If I can imagine that the object with its sequence can be used to implement a function, I define that function explicitly, and give a method to objectively evaluate its presence or absence in any sequence of the same type. d) I can define any function I like for the object, including different functions for the same object. Maybe I can’t find any function for the object. e) Once I have defined a function which is implemented by the object, I define the search space (usually all the possible sequences of the same length). f) I compute, or approximate, as much as possible, the target space, and therefore the target space/search space ratio, and take -log2 of that. This is the dFSI of the sequence for that function. h) I consider if the sequence has any detectable form of regularity, and if any known explicit algorithm available in the system can explain the sequence. The important point here is: there is no need to exclude that some algorithm can logically exist that will be one day found, and so on. All that has no relevance. My procedure is an empiric procedure. If an algorithmic explanation is available, that’s fine. If no one is available, I go on with my procedure. i) I consider the system, the time span, and therefore the probabilistic resources of the system (the total number of states that the system can reach by RV in the time span). So I define a threshold of complexity that makes the emergence by RV in the system and in the time span of a sequence of the target space an extremely unlikely event. For the whole universe, Dembski’s UPB of 500 bits is a fine threshold. 
For biological proteins on our planet, I have proposed 150 bits (after a gross calculation). l) If the functional complexity of the sequence I observe is higher than the threshold (IOWs, if the sequence exhibits dFSCI), and if I am aware of no explicit algorithm available in the system which can explain the sequence, then I infer a design origin for the object. IOWs, I infer that the specific configuration which implements that function originated from a conscious representation and a conscious intentional output of information from a designer to the object. m) Why? This is the important point. This is not a logical deduction. The procedure is empirical. It can be applied as it has been described. The simple fact is that, if applied to any object whose origin is independently known (IOWs, we can know if it was designed or not, so we use it to test the procedure and see if the inference will be correct) it has 100% specificity and low sensitivity. IOWs, there are no false positives. IOWs, there is no object in the universe (of which we can know the origin independently) for which we would infer design by this procedure and be wrong. Now, I will do a quick test. There are 560 posts in this thread. While I know independently that they are designed things, for a lot of reasons, I state here that any post here longer than 600 characters, and with good meaning in English, is designed. And I challenge you to offer any list of characters longer than 600, as many as you like, where you can mix two types of sequences: some are true posts in good English, with a clear meaning, taken from any blog you like. Others will be random lists of characters, generated by true random character generator software. Well, hear me! I will recognize all the true designed posts, and I will never make a false positive design inference for any of the other lists. Now, you can try any trick. You can add posts in languages that I don’t know. You can add encryption of true posts that I will not recognize. Whatever you like. I will not recognize their meaning, and I will not infer design. They will be false negatives. You know, my procedure has low sensitivity. However, I will infer design for all the posts which have good meaning in English, and I will be right. And I will never infer design for a sequence which is the result of a random character generator. What about algorithms? Well, you can use any algorithm you like, but without adding any information about what has good meaning in English. IOWs, you cannot use the Weasel algorithm, where the outcome is already in the system. You cannot use an English dictionary, least of all a syntax correction software. Again, that would be recycling functional information, not generating it. But you can use an algorithm which generates sequences according to the Fibonacci series, if you like. Or an algorithm which takes a random character and generates lists of 600 identical characters. Whatever you like. Because I am not using order as a form of specification. I am using meaning. And meaning cannot be generated by necessity algorithms. So, if I see a sequence of 600 A’s, I will not infer design for it. But for a Shakespeare sonnet I will. This is a challenge. My procedure works. It works not because it is a logical theorem. Not because I have hidden some keithian circularity in it (why should a circular procedure work, at all?). It works because we can empirically verify that it works. 
IOWs, there could be sequences which are not designed, and which are not obvious results of an algorithm, and which have high functional information. There could be. It is not logically impossible. But none of those sequences is known. They simply don’t exist. In the known universe, of all the objects of which we know the origin, only designed objects will be inferred as designed by the application of my procedure. Again, falsify this statement if you can. Offer one false positive. One. Except for… Except, obviously, for biological objects. They are the only known objects in the universe which exhibit dFSCI, tons of it, and of which we don’t know the origin. But that is exactly the point. We don’t know their origin. But they exhibit dFSCI. In tons. So, I infer design for them (or at least, for those which certainly exhibit dFSCI). Is any algorithm known explicitly which could explain the functional information, say, in ATP synthase? No. There is nothing like that. There is RV + NS. But it cannot explain that. Not explicitly. Only dogma supports that kind of explanation. The simple fact is: both complex language and complex function never derive from simple necessity algorithms. You cannot write a Shakespeare sonnet by a simple mathematical formula. You cannot find the sequence of ATP synthase by a simple algorithm. Maybe we could do it by a very complex algorithmic search, which includes all our knowledge of biochemistry, present and future, and supreme computational resources. We are still very distant from that achievement. And the procedure would be infinitely more complex than the outcome, and it would require constant conscious cognition (design). Well, I have not been so brief, after all. Now, if there are parts of my reasoning which are not clear enough, just ask. I am here. Or, if you just want to falsify my empirical procedure, offer a false positive. I am here. More likely, you can simply join keith in the group of the denialists. But at least, you will know more now of what you are denying.
I apologize for answering by quoting answers to others, but really I cannot follow a crowd of people who ask the same things. My main purpose here was to verify the computation with the help, or the criticism, of all.

gpuccio — November 10, 2014 at 12:42 PM PDT
gpuccio OT: Sorry to post this OT link in your new OP, but I thought you would like to check it out - see two consecutive posts in this link: https://uncommondescent.com/evolution/a-third-way-of-evolution/#comment-527182

Dionisio — November 10, 2014 at 12:37 PM PDT
Hi gpuccio I am afraid I can't comment on the calculation as my maths is not good enough, but I do wonder about the point of it. I had always thought dFSCI and its variants were a tool to detect design. But I think I remember you saying that is not the case, but that dFSCI can be identified in all strings that we know are designed. So my question is where do you go from there? Let's say you are right and you have discovered that this "thing" is present in all passages of recognisable text. What do we then use this finding for? What can we achieve with it? (Given that we can't use it to analyse a passage of unrecognisable text (let alone a flagellum) to determine whether it was designed or not.)

5for — November 10, 2014 at 11:57 AM PDT
What's the point? ID critics can't even manage to admit that their own posts here at UD are intelligently designed. I think perhaps it's time for us to consider seriously the idea that they aren't.

Mung — November 10, 2014 at 11:54 AM PDT
keith s, are you kidding me? You said: "That’s as silly as asking ‘How does an unintelligent sieve know how to sort particles non-randomly by size?’" Who made the sieve that can size particles? Do you know many sieve-like things in nature? How does size distribution become information? You really don't think your thoughts through, do you, keith?

mullerpr — November 10, 2014 at 11:37 AM PDT
gpuccio:
NS can do almost nothing. Don’t believe the neo darwinian propaganda. They have nothing.
Baghdad Bob:
There are no American infidels in Baghdad. Never!
keith s — November 10, 2014 at 11:34 AM PDT
gpuccio,
Any comments on the computation itself?
A correct computation of an irrelevant number is still irrelevant, so it doesn't matter whether the computation is correct. Evolution includes selection, and your number fails to take selection into account.

keith s — November 10, 2014 at 11:31 AM PDT
Gpuccio, I am no biologist, but reading as much as I can of James Shapiro's Evolution: A View from the 21st Century has made it abundantly clear that genetic variation is far more complex and system-driven than ever before realised. It seems as if the only gaps being filled are the ones caused by Darwinian ignorance. So sad to see their treasured dogma creating an explanatory vacuum in their minds... It must be painful not to be able to move forward in science.

mullerpr — November 10, 2014 at 11:29 AM PDT
True story about them not having anything... Still waiting for Keith S to explain how unguided evolution built multiple stability control mechanisms in cells... Nothing yet.

Andre — November 10, 2014 at 11:25 AM PDT