Uncommon Descent Serving The Intelligent Design Community

An attempt at computing dFSCI for English language

Categories: Intelligent Design

In a recent post, I was challenged to offer examples of computation of dFSCI for a list of 4 objects for which I had inferred design.

One of the objects was a Shakespeare sonnet.

My answer was the following:

A Shakespeare sonnet. Alan’s comments about that are out of order. I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in English (OK, ancient English).
Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

In the discussion, I admitted however that I had not really computed the target space in this case:

The only point is that I do not have a simple way to measure the target space for English language, so I have taken a shortcut by choosing a long enough sequence, so that I am well sure that the target space/search space ratio is above 500 bits, as I have clearly explained in my post #400.
For proteins, I have methods to approximate a lower threshold for the target space. For language I have never tried, because it is not my field, but I am sure it can be done. We need a linguist (Piotr, where are you?).
That’s why I have chosen an over-generous length. Am I wrong? Well, just offer a false positive.
For language, it is easy to show that the functional complexity is bound to increase with the length of the sequence. That is IMO true also for proteins, but it is less intuitive.

That remains true. But I have reflected, and I thought that perhaps, even if I am not a linguist and not even a mathematician, I could try to define the target space better quantitatively in this case, or at least to find a reasonable higher threshold for it.

So, here is the result of my reasoning. Again, I am neither a linguist nor a mathematician, and I will be happy to consider any comment, criticism or suggestion. If I have made errors in my computations, I am ready to apologize.

Let’s start from my functional definition: any text of 600 characters which has good meaning in English.

The search space for a random search where every character has the same probability, assuming an alphabet of 30 characters (letters, space, elementary punctuation), is easily computed: 30^600, that is about 2^2944. IOWs, 2944 bits.
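As a quick sanity check, the arithmetic can be reproduced in a few lines of Python (the 30-symbol alphabet and the 600-character length are the assumptions stated above):

```python
import math

ALPHABET = 30   # letters, space, elementary punctuation (assumed above)
LENGTH = 600    # characters in a sonnet-sized text

# Search space = 30^600; in bits, that is 600 * log2(30).
search_space_bits = LENGTH * math.log2(ALPHABET)
print(round(search_space_bits))  # 2944
```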

OK.

Now, I make the following assumptions (more or less derived from a quick Internet search):

a) There are about 200,000 words in English

b) The average length of an English word is 5 characters.

I also make the simplifying assumption that a text which has good meaning in English is made of English words.

For a 600 character text, we can therefore assume an average number of words of 120 (600/5).

Now, we compute the possible combinations (with repetition) of 120 words from a pool of 200,000: C(200000 + 120 - 1, 120). The result, if I am right, is about 2^1453. IOWs, 1453 bits.

Now, obviously each of these combinations can be ordered in many different ways; a combination of 120 words has up to 120! permutations, that is about 2^660. IOWs, 660 bits.

So, multiplying the total number of word combinations with repetitions by the total number of permutations for each combination, we have:

2^1453 * 2^660 = 2^2113

IOWs, 2113 bits.
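The word-level figures above can be checked directly. A sketch in Python, using the stated assumptions of 200,000 words and 120 word slots; note that combinations-with-repetition multiplied by 120! is, apart from a rounding sliver, just 200,000^120, the number of ordered 120-word sequences:

```python
import math

WORDS = 200_000   # assumed size of the English lexicon
SLOTS = 120       # 600 characters / 5 characters per word

# Combinations with repetition of SLOTS items from WORDS: C(WORDS + SLOTS - 1, SLOTS)
combos = math.comb(WORDS + SLOTS - 1, SLOTS)
perms = math.factorial(SLOTS)  # orderings of each combination

print(round(math.log2(combos)))          # 1453 bits
print(round(math.log2(perms)))           # 660 bits
print(round(math.log2(combos * perms)))  # 2113 bits
print(round(SLOTS * math.log2(WORDS)))   # 2113 bits, computed directly as 200000^120
```

Python's `math.log2` handles integers of this size exactly because it takes a special path for large ints instead of converting to float first.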

What is this number? It is the total number of sequences of 120 words that we can derive from a pool of 200,000 English words, or at least a good approximation of that number. (Indeed, it is essentially 200,000^120, the number of ordered sequences of 120 words.)

It’s a big number.

Now, the important concept: that number certainly includes all the sequences of 600 characters which have good meaning in English. Indeed, it is difficult to imagine sequences that have good meaning in English and are not made of correct English words.

And the important question: how many of those sequences have good meaning in English? I have no idea. But anyone will agree that it must be only a small subset.

So, I believe that we can say that 2^2113 is a higher threshold for our target space of sequences of 600 characters which have a good meaning in English. And, certainly, a very generous higher threshold.

Well, if we take that number as a measure of our target space, what is the functional information in a sequence of 600 characters which has good meaning in English?

It’s easy: the ratio between target space and search space:

2^2113 / 2^2944 = 2^-831. IOWs, taking -log2, 831 bits of functional information. (Thank you to drc466 for the kind correction here.)
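In Python, under the same assumptions, taking -log2 of the ratio is just a subtraction of the two bit values:

```python
import math

search_bits = 600 * math.log2(30)        # ~2944 bits of search space
target_bits = 120 * math.log2(200_000)   # ~2113 bits, generous upper bound on the target

functional_bits = search_bits - target_bits  # -log2(target/search)
print(round(functional_bits))  # 831
```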

So, if we consider as a measure of our target space a number which is certainly an extremely overestimated higher threshold for the real value, still our dFSI is over 800 bits.

Let’s go back to my initial statement:

Now, a Shakespeare sonnet is about 600 characters long. That corresponds to a search space of about 3000 bits. Now, I cannot really compute the target space for language, but I am assuming here that the number of 600-character sequences which make good sense in English is lower than 2^2500, and therefore the functional complexity of a Shakespeare sonnet is higher than 500 bits, Dembski’s UPB. As I am aware of no simple algorithm which can generate English sonnets from single characters, I infer design. I am certain that this is not a false positive.

Was I wrong? You decide.

By the way, another important result is that if I make the same computation for a 300 character string, the dFSI value is 416 bits. That is a very clear demonstration that, in language, dFSI is bound to increase with the length of the string.
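Generalizing, here is a minimal sketch of the computation as a function of string length (the function name is mine; 5 characters per word, the 30-symbol alphabet, and the 200,000-word lexicon are the assumptions carried over from above, and the ordered-sequence shortcut of 200,000^words is used for the target bound, which agrees with the exact combination count to within a twentieth of a bit):

```python
import math

def functional_bits(n_chars, alphabet=30, lexicon=200_000, chars_per_word=5):
    """Search-space bits minus an upper bound on target-space bits for a
    string of n_chars characters with good meaning in English."""
    n_words = n_chars // chars_per_word
    search_bits = n_chars * math.log2(alphabet)
    target_bound_bits = n_words * math.log2(lexicon)
    return search_bits - target_bound_bits

print(f"{functional_bits(600):.1f}")  # 831.0
print(f"{functional_bits(300):.1f}")  # 415.5
```

The 300-character value comes out at about 415.5 bits, matching the 416 quoted above up to rounding, and the value grows roughly linearly with length, as the post notes.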

Comments
gpuccio,
I have always discussed pre-specification for years. Just check.
And:
I have always included a discussion of the Kolmogorov complexity in my detailed discussions about dFSCI. You can check.
Why are you asking people to track down your comments all over the Internet? These things matter, so include them in the description of your procedure. Show some discipline, write up a complete description of your procedure (like a scientist would), and keep it somewhere handy so that you can paste it into discussions like these. Instead, you're posting half-assed descriptions that don't make sense, and when someone points out an error, you say "Oh, I've covered that elsewhere. You can check." Show some consideration for your readers. If it matters to your procedure, cover it in the procedure description.
keith s
November 12, 2014 at 01:40 AM PDT
keith s: "As you well know, the 'RV part' is the only part that factors into the number of bits of dFSCI. You neglect selection, which makes your number useless. KF has the same problem -- see my comment above."

I don't neglect selection. I discuss it separately, on its own merits. And in detail.

"What's worse, the 'RV part' is a standard calculation that was understood by mathematicians long before you were born."

I don't pretend that I have invented new mathematical methods. I have applied the following:

a) Calculation of the number of combinations with repetition for n, k.

b) Calculation of the number of permutations of a sequence.

c) Simple algebraic operations, known to all, applied to a specific context and to specific ideas.

"Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation."

I am perfectly fine with that. Maybe also to try to discuss some points with some precision. But nothing really original.

"And you wonder why scientists laugh at ID?"

Yes. But I accept that others can have a sense of humor different from mine.
gpuccio
November 12, 2014 at 01:36 AM PDT
Me_think: "GP has calculated the dFSCI Shakespeare sonnet, so ‘sonnets’ in the context of this thread is Shakespeare sonnets" No. Wrong. I have taken a Shakespeare sonnet as an example, just to give a good face to the concept. but I have never specified the sonnet as "written by Shakespeare". That would be foolish. Look at the OP: " I don’t infer design because I know of Shakespeare, or because I am fascinated by the poetry (although I am). I infer design simply because this is a piece of language with perfect meaning in english (OK, ancient english)." So, being of Shakespeare has never been an issue. Even in my more restricted specifications, I referred to being in rhymed verse and then to being a sonnet in English (for which KF's definition is perfect).gpuccio
November 12, 2014 at 01:29 AM PDT
gpuccio, to Me_Think:
Because the non-design explanation of an observed functional configuration can be based on random variation, or necessity algorithms, or both. In any case, the RV part, either alone or in the context of a mixed algorithm, must be analyzed by dFSCI or any similar instrument, because it is based on probability.
As you well know, the "RV part" is the only part that factors into the number of bits of dFSCI. You neglect selection, which makes your number useless. KF has the same problem -- see my comment above. What's worse, the "RV part" is a standard calculation that was understood by mathematicians long before you were born. Thus, your contribution was nothing more than inventing an acronym for an old and well-known probability calculation. And you wonder why scientists laugh at ID?
keith s
November 12, 2014 at 01:24 AM PDT
keith s: "What gpuccio should have done in his procedure, but failed to do, was to limit the Kolmogorov complexity of the algorithms considered."

I have always included a discussion of the Kolmogorov complexity in my detailed discussions about dFSCI. You can check. I have included a brief discussion in my answer to Reality at #228 (you can believe it or not, I had not yet read your post about that. I am going in order). So, please relate to that.
gpuccio
November 12, 2014 at 01:23 AM PDT
keith s: "Gpuccio was sloppy in not excluding this sort of algorithm, but as I said, I'm giving him a pass. He's got bigger problems than that to deal with."

Let me understand: are you saying that an algorithm can print a sequence it already knows? Amazing. This is even better than "Methinks it is like a weasel". If you meant other things, please explain.
gpuccio
November 12, 2014 at 01:18 AM PDT
REC: I referred in that post to no errors found in the computation in the OP. I am well aware of your biological arguments. Just as a first comment on what you say: are you really suggesting that there is scarce conservation in that family? Sure, if you align 23,949 sequences you will have more variance. But then you must use at least the Durston method, with correct methodology, to detect the level of functional conservation. With three sequences I was making the simple argument that those chains are highly restrained. Are you saying that it is not true? Have you compared that result with other similar results, even with three chains, for other proteins which are much less conserved, or not related at all? So I ask again: are you saying that those chains are not highly conserved in that family?
gpuccio
November 12, 2014 at 01:15 AM PDT
kairosfocus #223, Nowhere in that logorrheic mess do you address the actual issue I raised earlier:
KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it.
Please, no more thousand-word tap dances. Address the issue.
keith s
November 12, 2014 at 01:11 AM PDT
keith s: "Gpuccio has taken a fatally flawed concept -- CSI -- and made it even worse."

So, I am creative after all! :)
gpuccio
November 12, 2014 at 12:57 AM PDT
keith s: "(I'm giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is. He's having enough trouble defending dFSCI as it is.)"

Very generous, but not necessary. Please look at my answer to Reality at #228.
gpuccio
November 12, 2014 at 12:56 AM PDT
Reality at #202: Thanks for your contribution, which allows me to clarify a couple of important points. You say: "All I see is you claiming that some English text that is obviously designed or already known to be designed is designed."

Ah! But this is exactly the point.

a) "That is obviously designed" is correct, but the correct scientific question is: why is that obvious? And is that "obvious" reliable? My procedure answers that question, and identifies design with 100% specificity.

b) "or already known to be designed" is simply wrong. My procedure does not depend in any way on independent knowledge that the object is designed: we use independent knowledge as a confirmation in the testing of the procedure, as the "gold standard" to build the 2x2 table for the computation of specificity and sensitivity (or any other derived parameter).

Please, see also this from my post #37: "Me_Think at #644: 'gpuccio explained that dFSCI doesn't detect design, only confirms if a design is real design or apparent design.' I don't understand what you mean. dFSCI is essential to distinguish between true design and apparent design, therefore it is an essential part of scientific design detection. If you are not able to distinguish between true design and apparent design, you are making no design detection; you are only making recognition of the appearance of design, which is not a scientific procedure because it has a lot of false positives and a lot of false negatives. So, just recognition of the appearance of design is not scientific design detection. On the contrary, dFSCI eliminates the false positives, and design detection becomes a scientific reality. Therefore, dFSCI is an essential part of scientific design detection."

Regarding your poetry, it is rather simple. The piece obviously has no good meaning in English. Therefore, we cannot use that specification for it. It is equally obvious that it is made of correct English words. So, it is certainly part of the subset of strings which are made of English words. That is exactly the subset for which I have computed functional information in my OP. As the result was (in the Roy-amended form for 500,000 English words) 673 bits, we can safely exclude a random origin.

So, the question is: can this result be the outcome of an algorithm? The answer is: yes, but not of any natural algorithm, and not of an algorithm simpler than the observed result. IOWs, the only possible source is a designed algorithm which is more complex than the observed sequence. Therefore the Kolmogorov complexity of the string cannot be lowered by any algorithm. How can I say that? It's easy. Any algorithm which builds sentences made of correct English words must use as an oracle at least a dictionary of English words. Which, in itself, is more complex than the poem you presented.

Moreover, we can certainly make a further specification (DNA_Jock: is that again painting new targets which were not there before? Is that making the probability arbitrarily small?). Why? Because the poem has an acceptable structure in non-rhymed verses. That would certainly increase the functional complexity of the string, but also the algorithm would be more complex (maybe with some advantage for the algorithm here, because after all the verse length is very easy to check algorithmically). However, the algorithm is always much more complex than the observed result, because of, at least, the oracle it needs.

So, the conclusion is easy: the poem is certainly designed, either directly or through a designed algorithm.

Many of these objections arise from the simple fact that you always ignore one of the basic points of my procedure, indeed the first step of it. See my post #15: "a) I observe an object, which has its origin in a system and in a certain time span."

So, in the end, the question about the algorithms can be formulated as follows: "Are we aware of any explicit algorithm which can explain the functional configuration we observe, and which could be available in the system and the time span?" So, if your system includes a complex designed algorithm to generate strings made of English words from a dictionary oracle, then the result we observe can be explained without any further input of functional information by a conscious designer.

I hope that is clear.
gpuccio
November 12, 2014 at 12:53 AM PDT
Me_Think: "I don't think Nature cares if a person says it is restricted to do something because he has a flawed algorithm that restricts it from doing something."

Now that we know that you are the Oracle for Nature, why bother making science? Stay available, please.
gpuccio
November 12, 2014 at 12:21 AM PDT
keith s: "Sounds like a new rule that you didn't include in your original procedure. It's a longstanding bad habit of yours to keep changing your argument in the middle of discussion without acknowledging that you are changing it. Also, drc466 need not use the sequence to specify itself. He can prespecify the target as 'the winning numbers for this lottery, whatever they turn out to be.' You know the size of the target, and you know the size of the search space. The ratio is tiny. You'll get a false positive."

:) You really try all that you can, don't you?

I have always discussed pre-specification for years. Just check. It's not important to me, because I never use it in any useful context. I have always said that it is perfectly legitimate to use a sequence to specify itself, but only as a pre-specification. It expresses the probability of finding that specific sequence again, and the target space is 1. But the sequence bears no functional information, except for the fact that it is in your hands and you can look at its bits. Even a child would understand that.

Obviously, "the winning numbers for this lottery, whatever they turn out to be" is a correct specification. It can be used as a post-specification. And it has zero functional complexity: any number is in the target space, because all numbers, if extracted, will be the winning number. Being extracted is in no way connected to the information in the sequence, unless the lottery is fixed. In this case, indeed, a pre-specification of a result which is extremely improbable is a clear sign to infer that the lottery is fixed, one that any judge would accept. Unless you believe in lottery precognition (which would have its advantages).

False positive? Bah! Do you even think for a moment before posting?
gpuccio
November 12, 2014 at 12:19 AM PDT
Me_Think: "If you also have to check for algorithms which can write sonnets, why bother with dFSCI calculations? You could see if you are aware of algorithms which can produce sonnets or whatever you are examining, and if there are none, you can infer design. Why calculate dFSCI?"

Because the non-design explanation of an observed functional configuration can be based on random variation, or necessity algorithms, or both. In any case, the RV part, either alone or in the context of a mixed algorithm, must be analyzed by dFSCI or any similar instrument, because it is based on probability. That should be very easy to understand.

I am really amazed at the insistence with which you and others worry about my "bothering". I understand it's for my sake, but please, relax! :)
gpuccio
November 12, 2014 at 12:07 AM PDT
MT: Someone posted above what is obviously not a Sonnet. KF
kairosfocus
November 11, 2014 at 10:32 PM PDT
KS: For record -- at this stage, with all due respect but in hope of waking you up -- I no longer expect you to be responsive to mere facts or reasoning, as I soon came to see committed Marxists based on their behaviour, back in student days. In that context the self-stultifying circles in your retorts to self-evident first principles of right reason, I find to be diagnostic and of concern. Now, on your attempted talking points of deflection and dismissal of the FSCO/I metric model I developed (as opposed to showed as a theorem derived in the Geometric QED sense) from Dembski's one, in the context of discussions involving VJT, Paul Giem and myself in response to P May wearing the persona Mathgrrl: Chi_500 = I*S - 500, functionally specific bits beyond the Solar System threshold of complexity Just for reference, in Sect A my always linked note you will see a different metric model that goes directly to FSCI values by using info values and two multiplied dummy variables, one for specificity and one for complexity beyond a relevant threshold. That too does the same job, but does not underscore the point that the Dembski model is an info beyond a threshold model. Which replies implicitly to a raft of dismissive critiques. Perhaps, you are unaware of my Electronics background, which is famous for the many models of Transistor and Amplifier action that can do the same job from diverse perspectives. And my favourite is to take an h parameter model and simplify, to where we have hie driving a dependent perfect current source with an internal load and an external one, both shunted to signal ground through the power supply. Weird and mystifying at the first, but very effective until parasitic capacitances have to come into play, whereupon, go for a simplified hybrid pi, until you reach points where wires need to be modelled, and you need to turn everything into waveguides. At which point, go get yourself some heavy duty computational simulations. 
(Of course, nowadays, we have SPICE fever, with 40+ variable transistor models to cloud the issue. If that sounds like the problems with Economics, you betcha! For me, if it is useful to take Solow, modify with a Human Capital model and spot linking relationships that speak to real world policy challenges, that has done its day's work. As in, tech multiplies labour but depends on lagged investments in human capital to bring a work force to a point of being responsive to the tech, in an era where the 9th grade edu that drove the Asian Miracle is not good enough anymore. Then, we see the investment challenge faced by the would be investor, Hayek's long tail of the temporal/phase structure of investment, malinvestment (perhaps policy induced), instability amplification and roots in a community envt. Which, hath in it the natural, socio-cultural and economic. Thence, interface sectors, on natural resources & hazards and their management, brains as natural resource thus health-edu-welfare issues, and culture of governance vs government institutions and policy making all supporting the requisite pool of effective talent. No need to create a vast body of elaborate pretended Geometric proofs on sets of perfect axioms, reasonable, empirically relevant and supported is good enough for back of the Com 10 envelope Gov't work, what really rules the world. I trust you can catch the philosophy of modelling just outlined. Models were made for man, and not man for models. And don't fool yourselves that just because you can come up with dismissive objections you can go back to your favourite un-examined models that sit comfortably with your preferred worldview. In reality we all are going to be tickling a dragon's tail in any case and should know enough to do so with fear and trembling. And yes the echo of Feynmann's phrase is intended.) The proper judgement of a model is, effectiveness, which is in the end an inductive logic exercise. 
And so models can be mixed, matched and worked with. Take the Dembski 2005 metric model, carry out the logging operation on its three components, apply the associative rule and see that we have two constants that may be summed to form a threshold value. Note the standard metric of information, as a log metric. Then, note that on reasonable analysis, subsystems of the cosmos may be viewed as dynamic stochastic processes that carry out in effect real world Monte Carlo runs that will explore realistic (as opposed to far-fetched) possibilities . . . think about Gibbs' ensemble of similar systems. It is reasonable to derive a metric model of functionally specific info beyond a threshold, and test it against the base of observable cases. Similarly, to analyse using config space concepts and sampling, by randomness [broadly considered] including dynamic-stochastic processes including in effect random walks with drift (cf. a body of air being blown along, with the air molecules still having a stochastic distribution of molecular velocities and a defined temperature). Notice relevant utter sparseness of possible sampling, whether scattershot or random walks from arbitrary initial conditions makes but little difference. Compare to 10^57 atoms of our solar system considered as observers of trays of 500 coins each. Flip-observe 10^14 times per second for 10^17 s, observe the comparison of a straw to a cubical haystack comparably thick as our galaxy as sample to possibilities. The samples by flipping can be set up to move short Hamming distance random walk hops as you please, it makes no material difference. The point is, by its nature functionally specific, complex organisation and associated information (FSCO/I) sharply constrains effective configs relative to clumped at random or scattered at random possibilities, and is maximally implausible to be found on a blind watchmaker search. 
Also, the great Darwinist hope of feedback improvement from increasing success presumes starting on an island of function, i.e. it begs the material question. Where, too, FSCO/I is quite recognisable and observable antecedent to any metric models, as happened historically, with Orgel and Wicken. It surrounds us in a world of technology. Consistently, it is observed to be caused by design, by intelligently directed configuration. Trillions of cases in point. Per induction and the vera causa principle, the best current explanation of FSCO/I whether Shakespeare's Sonnets or posts in this thread or ABU 6500 3c Mag reels (there is a whole family of related reels in an island of function above and beyond the effect of good old tolerances), or source or object computer code. Going beyond to explain cases where we did not and cannot observe the actual deep past cause, also of D/RNA and the Ribosome system of protein synthesis that uses mRNA as a control tape. Which is where the root of objections lies. We all routinely recognise FSCO/I and infer design as cause, cf posts in this thread where we generally have no independent, before the fact basis to know they are not lucky noise on the net. After all noise can logically possibly mimic anything. (See the selective hyperskepticism/ hypercredulity problem your argument faces?) Just, when the same vera causa inductive logic and like causes like uniformity reasoning cuts across the dominant, lab coat clad evolutionary materialism and its fellow travellers, with their implausibilities that must be taken without question (or else you are "anti-Science" or you are no true Scotsman . . . ), all the hyperskepticism you wish to see gets trotted out. Because as Mom used to say to a very young KF, a man convinced against his will is of the same opinion still. I say this, to ask you to pause and think again. KFkairosfocus
November 11, 2014 at 10:31 PM PDT
KF, GP has calculated the dFSCI of a Shakespeare sonnet, so 'sonnets' in the context of this thread is Shakespeare sonnets.
Me_Think
November 11, 2014 at 10:05 PM PDT
F/N: Collins English Dictionary: >> sonnet (ˈsɒnɪt) prosody n 1. (Poetry) a verse form of Italian origin consisting of 14 lines in iambic pentameter with rhymes arranged according to a fixed scheme, usually divided either into octave and sestet or, in the English form, into three quatrains and a couplet >> KF
kairosfocus
November 11, 2014 at 09:24 PM PDT
StephenB, fifthmonarchyman: I could calculate the entropy of sonnets and claim the derived value proves sonnets are designed because I don't see any sonnet algorithms in nature. How is that different from the dFSCI calculation? GP calculates AND checks there are no natural algorithms, and then concludes sonnets are designed. Where is the need to calculate anything at all?
Me_Think
November 11, 2014 at 09:15 PM PDT
FMM, Apparently you are unfamiliar with the concept of Kolmogorov complexity. What gpuccio should have done in his procedure, but failed to do, was to limit the Kolmogorov complexity of the algorithms considered.
keith s
November 11, 2014 at 08:35 PM PDT
keith s, do you honestly think that (print out) = (produce), or are you just trying to blow smoke for the hell of it? Never mind, I know the answer. I agree with StephenB: Darwinists can be fun.

peace
fifthmonarchyman
November 11, 2014 at 08:27 PM PDT
StephenB, I'm sure it feels good to pretend that ID critics are "whacked out", but doesn't it create some cognitive dissonance for you, since in reality the critics don't conform to your caricature? I am quite clear on what can and cannot be calculated, and what the problems are with each of CSI, FSCO/I, and dFSCI:
Dembski’s problems are that 1) he can’t calculate P(T|H), because H encompasses “Darwinian and other material mechanisms”; and 2) his argument would be circular even if he could calculate it. KF’s problem is that although he claims to be using Dembski’s P(T|H), he actually isn’t, because he isn’t taking Darwinian and other material mechanisms into account. It’s painfully obvious in this thread, in which Elizabeth Liddle and I press KF on this problem and he squirms to avoid it. Gpuccio avoids KF’s problem by explicitly leaving Darwinian mechanisms out of the numerical calculation. However, that makes his numerical dFSCI value useless, as I explained above. And gpuccio’s dFSCI has a boolean component that does depend on the probability that a sequence or structure can be explained by “Darwinian and other material mechanisms”, so his argument is circular, like Dembski’s. All three concepts are fatally flawed and cannot be used to detect design.
keith s
November 11, 2014 at 08:25 PM PDT
"GP computes dFSCI for the English language"

He offered the previously "calculated" example of ATP synthase. ...and how does the fias/co of the English language go... it is specified in the dictionary, and makes sense to us... so intelligence? See above. Is the dFSCI/o of ATP synthase = 0?
REC
November 11, 2014 at 08:23 PM PDT
This has been an interesting post. GP computes dFSCI for the English language and his critics cry out, "Yes, but what is it good for?" I am eagerly awaiting his next post, which will likely explain what FSCI is good for, at which time his critics will cry out, "Yes, but can you compute it?" Darwinists are fun -- maybe a little whacked out -- but fun.
StephenB
November 11, 2014 at 08:13 PM PDT
FMM, An arbitrary finite sequence e[0], e[1], e[2], ..., e[n] can be printed by this obvious algorithm:
for i = 0 to n
    print e[i]
Gpuccio was sloppy in not excluding this sort of algorithm, but as I said, I'm giving him a pass. He's got bigger problems than that to deal with.
keith s
November 11, 2014 at 08:07 PM PDT
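keith s's pseudocode above, rendered as a minimal runnable Python sketch (the word sequence `e` is a hypothetical example): note that such a program must embed the sequence verbatim, so it is never shorter than the sequence it prints, which is the Kolmogorov-complexity point debated in this thread.

```python
# An algorithm that "produces" an arbitrary finite sequence by embedding it.
e = ["shall", "i", "compare", "thee", "to", "a", "summers", "day"]

# for i = 0 to n: print e[i]
for i in range(len(e)):
    print(e[i])
```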
"An attempt at computing dFSCI for English language"

Yes, yes... we're all concerned with objectively demonstrating Shakespeare has intelligence and nothing else.
REC
November 11, 2014 at 07:49 PM PDT
REC, surely you realize the title of this thread is "An attempt at computing dFSCI for English language" and not "An attempt at computing dFSCI for ATP synthase sequences".

peace
fifthmonarchyman
November 11, 2014 at 07:35 PM PDT
keith s said, "I'm giving gpuccio a pass on the fact that there is always an algorithm that can produce any finite sequence, regardless of what it is."

I say, check it out: http://en.wikipedia.org/wiki/Computable_number and http://arxiv.org/abs/1405.0126

peace
fifthmonarchyman
November 11, 2014 at 07:30 PM PDT
Me_Think says, "How does checking an algorithm's availability help you decide if sonnet or proteins is amenable to CSI/dFSCI computation?"

I say, we are looking for false positives. The harder it is to produce a false positive in an easy test like a "good English" text string, the more confident we can be that false positives are beyond the reach of algorithms in more difficult cases.

peace
fifthmonarchyman
November 11, 2014 at 07:24 PM PDT
"seems that nobody has found any real error in it"

I think the summary of errors is as follows (besides that a random search of all sequence space is not the evolutionary hypothesis):

1) Conservation does not correlate with the percentage of sequence space that is functional (see the Rubisco example -- all plants, poor enzyme, human design circumventing local optima). ID simply invokes this contra empirical data.

2) You specify the specification (sequence conservation) that you state correlates with function, while considering functional specification... what a cluster. When presented with alternatives in sequence space (and not just any way of making ATP synthase -- a proton-transporting, membrane-bound rotary synthase) of little homology, you declare them an independent design! Isn't the point what percent of sequence space is functional, and would be found in a search?

3) Granting your own methodology, you cheat at it. You selected three related sequences for an ATP synthase subunit and aligned them, then declared shared residues necessary. I repeated the process with all 23,949 F1-alpha ATP synthase sequences. No F1-alternates. No V-ATPases or N- or other odd ones that can perform the same function. 100% conserved residues: 0. So, using your method, no CSI???? Hmmm... maybe the database is off... few oddballs. 98% conserved... 12 residues (and there are some clear substitutions in otherwise aligned sequences).

So maybe next time, try more than 0.01% of known sequences in defining function/conservation in sequence space. Try it yourself:
http://www.ebi.ac.uk/interpro/entry/IPR005294/proteins-matched?start=580
http://mobyle.pasteur.fr/
REC
November 11, 2014 at 07:03 PM PDT
